Classifier Ensembles with the Extended Space Forest

AMASYALI M. F., Ersoy O. K.

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, vol.26, no.3, pp.549-562, 2014 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 26 Issue: 3
  • Publication Date: 2014
  • Doi Number: 10.1109/tkde.2013.9
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.549-562
  • Keywords: Classifier ensembles, committees of learners, consensus theory, ensemble algorithms, mixtures of experts, multiple classifier systems, extended spaces, bagging, random forest, random subspace, rotation forest, decision trees, supervised learning, diversity
  • Yıldız Technical University Affiliated: Yes


The extended space forest is a new method for decision tree construction in which training uses input vectors that include all the original features together with random combinations of them. The combinations are generated by applying a difference operator to random pairs of original features. Experimental results show that the extended space versions of ensemble algorithms outperform the original ensemble algorithms. To investigate the dynamics behind this success, the individual accuracy and diversity-creation powers of the ensemble algorithms are compared. The extended space forest creates more diversity than Bagging and Rotation Forest when it uses all the input features, and achieves higher individual accuracy than the Random Subspace and Random Forest methods when it uses random feature selection. It requires more training time than the original algorithms because it uses more features, but its testing time is lower because it generates less complex base learners.
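The core feature-extension step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `extend_space`, the use of numpy, and the exact pair-sampling scheme (independent uniform draws, pairs not forced to be distinct) are assumptions for demonstration.

```python
import numpy as np

def extend_space(X, n_new, rng):
    """Append n_new random pairwise-difference features to X.

    Each new feature is X[:, i] - X[:, j] for a randomly chosen
    pair (i, j) of original feature indices, i.e. a difference
    operator applied to random feature pairs. Sampling details
    are assumptions; the paper may sample pairs differently.
    """
    i = rng.integers(0, X.shape[1], size=n_new)
    j = rng.integers(0, X.shape[1], size=n_new)
    X_ext = np.hstack([X, X[:, i] - X[:, j]])
    return X_ext, list(zip(i.tolist(), j.tolist()))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # toy data: 100 samples, 5 features
X_ext, pairs = extend_space(X, n_new=5, rng=rng)
print(X_ext.shape)                            # original 5 features plus 5 differences
```

Any ensemble method (Bagging, Random Subspace, etc.) can then be trained on `X_ext` instead of `X`; each base learner sees the original features alongside the random difference features.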