Ensembles of classification and regression trees remain popular machine learning methods because they define flexible nonparametric models that predict well and are computationally efficient both during training and testing. During induction of decision trees, one aims to find predicates that are maximally informative about the prediction target. To select good predicates, most approaches estimate an information-theoretic scoring function, the information gain, for both classification and regression problems. We point out that the common estimation procedures are biased and show that by replacing them with improved estimators of the discrete and the differential entropy we can obtain better decision trees. In effect, our modifications yield improved predictive performance and are simple to implement in any decision tree code.
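For concreteness, the sketch below contrasts the standard plug-in (maximum likelihood) entropy estimate, which systematically underestimates entropy on finite samples, with a bias-corrected alternative inside an information gain computation. The Miller-Madow correction shown here is one well-known example of an improved discrete entropy estimator, used for illustration only and not necessarily the estimator developed in the paper; all function names are hypothetical.

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (maximum likelihood) entropy estimate in nats.

    Biased downward: E[H_plugin] < H for finite sample sizes.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))

def miller_madow_entropy(counts):
    """Miller-Madow bias-corrected entropy estimate in nats.

    Adds the first-order bias correction (K - 1) / (2 n), where K is
    the number of observed categories and n the sample size.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    k = np.count_nonzero(counts)
    return plugin_entropy(counts) + (k - 1) / (2.0 * n)

def information_gain(left_counts, right_counts, entropy=miller_madow_entropy):
    """Estimated information gain of a candidate split predicate.

    Gain = H(parent) - (n_l / n) H(left) - (n_r / n) H(right),
    with every entropy term estimated by the same estimator.
    """
    left = np.asarray(left_counts, dtype=float)
    right = np.asarray(right_counts, dtype=float)
    n_l, n_r = left.sum(), right.sum()
    n = n_l + n_r
    return (entropy(left + right)
            - (n_l / n) * entropy(left)
            - (n_r / n) * entropy(right))

# Example: with few samples per node, the plug-in estimate of a uniform
# 4-class distribution falls well below the true entropy log(4) ~ 1.386,
# while the corrected estimate reduces this bias.
print(plugin_entropy([3, 2, 2, 1]))        # underestimates
print(miller_madow_entropy([3, 2, 2, 1]))  # bias-corrected
print(information_gain([3, 2, 0, 0], [0, 0, 2, 1]))
```

Because the gain is a difference of entropy terms estimated on samples of different sizes, the per-node biases do not cancel, which is why swapping in a better entropy estimator can change which split is selected.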