An Optimized Feature Regularization in Boosted Decision Tree
Ravichandran1, Krishna Mohanta2, C. Nalini3

1Ravichandran, Associate Professor, Department of Computer Science and Engineering, Kakatiya Institute of Technology and Science, India.

2Dr. Krishna Mohanta, Associate Professor, Department of Computer Science and Engineering, Kakatiya Institute of Technology and Science, India.

3Dr. C. Nalini, Professor, Department of Computer Science and Engineering, Kakatiya Institute of Technology and Science, India.

Manuscript received on 10 April 2019 | Revised Manuscript received on 17 April 2019 | Manuscript Published on 26 July 2019 | PP: 986-989 | Volume-8 Issue-6S4 April 2019 | Retrieval Number: F12020486S419/19©BEIESP | DOI: 10.35940/ijitee.F1202.0486S419

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: We put forward a tree regularization that enables many tree models to perform feature selection effectively. The key idea of the regularization framework is to penalize selecting a new feature for a split when its gain is similar to that of the features used in previous splits. This paper used a standard data set as the original discrete test data, and the entropy and information gain of each attribute were computed to carry out the classification. Boosted decision trees are among the most prominent learning systems in use today. This paper also arrived at an optimized structure of the decision tree, streamlined to improve the efficiency of the algorithm while keeping the error rate at the same level as other classification algorithms.
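As a rough illustration of the idea described in the abstract, the sketch below computes entropy and information gain for discrete attributes and down-weights the gain of features that have not appeared in earlier splits. The function names, the toy data, and the penalty coefficient are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: entropy, information gain, and a regularized split choice
# that penalizes picking a new feature (assumed penalty coefficient `penalty`).

import math
from collections import Counter


def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())


def information_gain(rows, labels, feature_index):
    """Information gain of splitting on the discrete feature at feature_index."""
    base = entropy(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[feature_index], []).append(label)
    weighted = sum(len(part) / len(labels) * entropy(part)
                   for part in partitions.values())
    return base - weighted


def regularized_gain(rows, labels, feature_index, used_features, penalty=0.8):
    """Down-weight the gain of a feature not used in previous splits."""
    gain = information_gain(rows, labels, feature_index)
    return gain if feature_index in used_features else penalty * gain


def choose_split(rows, labels, used_features, penalty=0.8):
    """Pick the feature with the highest regularized gain."""
    n_features = len(rows[0])
    return max(range(n_features),
               key=lambda f: regularized_gain(rows, labels, f,
                                              used_features, penalty))


# Toy discrete data: two attributes, binary class labels (hypothetical example).
rows = [("sunny", "high"), ("sunny", "low"), ("rain", "high"), ("rain", "low")]
labels = ["no", "no", "yes", "yes"]
used = set()
best = choose_split(rows, labels, used)
used.add(best)
print("first split on feature", best)
```

In a boosted ensemble, the `used` set would be shared across trees so that later trees keep reusing the already selected features unless a new feature offers a clearly larger gain, which is how the regularization performs feature selection.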

Keywords: Decision Tree, Boosting, Regularization, Feature Optimization.
Scope of the Article: Computer Science and Its Applications