An incremental ensemble classifier learning by means of a rule-based accuracy and diversity comparison
conference contribution
Posted on 2018-05-18. Authored by Md Asafuddoula, Brijesh Verma, M Zhang
In this paper, we propose an incremental ensemble classifier learning method. In the proposed method, a set of accurate and diverse classifiers is generated and added to the ensemble by means of an accuracy and diversity comparison. The selection of classifiers for the ensemble starts with a layer (where the data is partitioned into a given number of clusters and fed to a set of base classifiers) and then continues to improve the bias-variance trade-off (i.e., accuracy and diversity). Optimal ensemble classifier selection is performed through an accuracy-precedence-diversity comparison, i.e., a model with better accuracy is preferred, but among models with the same accuracy, the one with better diversity is preferred. The comparison is made on the class-decomposed accuracies (i.e., all per-class accuracies are decomposed into a scalar value). A non-identical set of base classifiers is trained on the clusters of data in a layer, and the center of each cluster is recorded as an identifier for the corresponding set of base classifiers. Decisions from multiple base classifiers are fused into an ensemble class output using majority voting for each pattern, and finally the decisions across multiple layers are combined using majority voting. The proposed method is evaluated on UCI benchmark datasets and compared with recently proposed ensemble classifiers, including Bagging and Boosting. Through this comparison, we demonstrate that the proposed method improves the performance of the base classifiers and outperforms the existing ensemble methods.
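The accuracy-precedence-diversity comparison and the majority-voting fusion described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the helper names (`prefer_model`, `majority_vote`) and the accuracy tolerance are assumptions introduced here.

```python
from collections import Counter

def prefer_model(candidate, incumbent, tol=1e-9):
    """Accuracy-precedence-diversity comparison (illustrative sketch).

    Each model is summarized as an (accuracy, diversity) pair of scalars.
    A candidate replaces the incumbent if its accuracy is strictly better;
    when accuracies are (near-)equal, higher diversity wins. The tolerance
    `tol` for treating accuracies as equal is an assumption of this sketch.
    """
    cand_acc, cand_div = candidate
    inc_acc, inc_div = incumbent
    if cand_acc > inc_acc + tol:
        return True
    if abs(cand_acc - inc_acc) <= tol and cand_div > inc_div:
        return True
    return False

def majority_vote(predictions):
    """Fuse a list of per-classifier class labels for one pattern
    into a single ensemble decision by majority vote."""
    return Counter(predictions).most_common(1)[0][0]
```

The same `majority_vote` fusion would apply twice under the described scheme: once within a layer, across the base classifiers, and once more across the layer-level decisions.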
Funding
Category 1 - Australian Competitive Grants (this includes ARC, NHMRC)