
Best Split Nodes for Regression Trees
Decision trees with binary splits are popularly constructed using Classification and Regression Trees (CART) methodology. For regression models, at each node of the tree, the data is divided into two daughter nodes according to a split point that maximizes the reduction in variance (impurity) along a particular variable. This paper develops bounds on the size of a terminal node formed from a sequence of optimal splits via the infinite sample CART sum of squares criterion. We use these bounds to derive an interesting connection between the bias of a regression tree and the mean decrease in impurity (MDI) measure of variable importance, a tool widely used for model interpretability, defined as the weighted sum of impurity reductions over all nonterminal nodes in the tree. In particular, we show that the size of a terminal subnode for a variable is small when the MDI for that variable is large. Finally, we apply these bounds to show consistency of Breiman's random forests over a class of regression functions. The context is surprisingly general and applies to a wide variety of multivariable data generating distributions and regression functions. The main technical tool is an exact characterization of the conditional probabilities of the daughter nodes arising from an optimal split, in terms of the partial dependence function and reduction in impurity.
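The optimal-split criterion described above (choose the split point along a variable that maximizes the reduction in within-node variance) can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's code: `best_split` is a hypothetical helper that scans candidate split points along one feature and returns the point with the largest impurity reduction, i.e. one term of the weighted sum that defines MDI.

```python
import numpy as np

def best_split(x, y):
    """Illustrative sketch: find the split point along feature x that
    maximizes the CART variance (impurity) reduction for response y."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    n = len(y)
    # Impurity as total within-node sum of squares: n * Var(y).
    parent_impurity = n * np.var(y)
    best_gain, best_point = -np.inf, None
    # Candidate splits lie between consecutive distinct x values.
    for i in range(1, n):
        if x_sorted[i] == x_sorted[i - 1]:
            continue
        left, right = y_sorted[:i], y_sorted[i:]
        child_impurity = len(left) * np.var(left) + len(right) * np.var(right)
        gain = parent_impurity - child_impurity  # reduction in impurity
        if gain > best_gain:
            best_gain = gain
            best_point = (x_sorted[i - 1] + x_sorted[i]) / 2
    return best_point, best_gain

# For a step function, the optimal split recovers the jump location:
x = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
split, gain = best_split(x, y)  # split at 0.5, gain = 6 * Var(y) = 1.5
```

Summing such gains (weighted by node size) over every nonterminal node of the fitted tree yields the MDI for a variable, which the paper links to the size of that variable's terminal subnodes.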