Using Random Forest to Learn Imbalanced Data




Chao Chen, Department of Statistics, UC Berkeley
Andy Liaw (andyliaw@merck.com), Biometrics Research, Merck Research Labs
Leo Breiman (leo@stat.berkeley.edu), Department of Statistics, UC Berkeley

Abstract

In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost-sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.

Introduction

Many practical classification problems are imbalanced; i.e., at least one of the classes constitutes only a very small minority of the data.
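The abstract names several evaluation metrics. For reference, the sketch below shows how these quantities are commonly computed from a binary confusion matrix; the balanced F-measure and the choice of beta = 0.5 for weighted accuracy are standard conventions assumed here, not definitions quoted from this paper.

```python
# Hedged sketch: common formulas for the metrics named in the abstract,
# computed from a binary confusion matrix (tp, fp, tn, fn). The balanced
# F-measure and beta = 0.5 are assumed conventions, not the paper's own text.
def imbalance_metrics(tp, fp, tn, fn, beta=0.5):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # true positive rate (accuracy on the positive class)
    fpr = fp / (fp + tn)           # false positive rate
    fnr = fn / (fn + tp)           # false negative rate
    f_measure = 2 * precision * recall / (precision + recall)
    tnr = tn / (tn + fp)           # true negative rate (accuracy on the negative class)
    weighted_accuracy = beta * recall + (1 - beta) * tnr
    return {"precision": precision, "recall": recall, "fpr": fpr,
            "fnr": fnr, "f_measure": f_measure,
            "weighted_accuracy": weighted_accuracy}
```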

For such problems, the interest usually leans towards correct classification of the rare class (which we will refer to as the positive class). Examples of such problems include fraud detection, network intrusion, rare disease diagnosis, etc. However, the most commonly used classification algorithms do not work well for such problems because they aim to minimize the overall error rate, rather than paying special attention to the positive class. Several researchers have tried to address the problem in many applications such as fraudulent telephone call detection (Fawcett & Provost, 1997), information retrieval and filtering (Lewis & Catlett, 1994), diagnosis of rare thyroid diseases (Murphy & Aha, 1994) and detection of oil spills from satellite images (Kubat et al., 1998).

There are two common approaches to tackle the problem of extremely imbalanced data. One is based on cost-sensitive learning: assigning a high cost to misclassification of the minority class, and trying to minimize the overall cost. Domingos (1999) and Pazzani et al. (1994) are among these. The other approach is to use a sampling technique: either down-sampling the majority class or over-sampling the minority class, or both. Most research has been focused on this approach. Kubat et al. (1997) develop a system, SHRINK, for imbalanced classification.

SHRINK labels a mixed region as positive (minority class) regardless of whether the positive examples prevail in the region or not. Then it searches for the best positive region. They made comparisons to C4.5 and 1-NN, and show that SHRINK has improvement in most cases. Kubat & Matwin (1997) use the one-sided sampling technique to selectively down-sample the majority class. Ling & Li (1998) over-sample the minority class by replicating the minority samples so that they attain the same size as the majority class. Over-sampling does not increase information; however, by replication it raises the weight of the minority samples.

Chawla et al. (2002) combine over-sampling and down-sampling to achieve better classification performance than simply down-sampling the majority class. Rather than over-sampling with replacement, they create synthetic minority class examples to boost the minority class (SMOTE). They compared SMOTE plus the down-sampling technique with simple down-sampling, one-sided sampling and SHRINK, and showed favorable improvement. Chawla et al. (2003) apply the boosting procedure to SMOTE to further improve the prediction performance on the minority class. In this paper, we propose two ways to deal with the problem of extreme imbalance, both based on the random forest (RF) algorithm (Breiman, 2001).

One incorporates class weights into the RF classifier, thus making it cost-sensitive, so that it penalizes misclassifying the minority class. The other combines the sampling technique and the ensemble idea: it down-samples the majority class and grows each tree on a more balanced data set. A majority vote is taken as usual for prediction. We compared the prediction performance with one-sided sampling, SHRINK, SMOTE, and SMOTEBoost on the data sets that the authors of those techniques studied. We show that both of our methods have favorable prediction performance.

Random Forest

Random forest (Breiman, 2001) is an ensemble of unpruned classification or regression trees, induced from bootstrap samples of the training data, using random feature selection in the tree induction process.

Prediction is made by aggregating (majority vote for classification or averaging for regression) the predictions of the ensemble. Random forest generally exhibits a substantial performance improvement over single tree classifiers such as CART and C4.5. It yields a generalization error rate that compares favorably to AdaBoost, yet is more robust to noise. However, similar to most classifiers, RF can also suffer from the curse of learning from an extremely imbalanced training data set. As it is constructed to minimize the overall error rate, it will tend to focus more on the prediction accuracy of the majority class, which often results in poor accuracy for the minority class.
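One way to experiment with the cost-sensitive idea mentioned earlier is to attach a higher class weight to the minority class in an off-the-shelf random forest. The sketch below uses scikit-learn's class_weight option purely for illustration; it is not the authors' WRF implementation, and the 1:10 weight ratio and the variable names X_train, y_train, X_test are assumptions for the example.

```python
# Hedged sketch: making a random forest cost-sensitive via class weights,
# in the spirit of the weighted approach described above. NOT the authors'
# WRF code; scikit-learn and the 1:10 weight ratio are illustrative choices.
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=500,                # number of trees in the ensemble
    class_weight={0: 1.0, 1: 10.0},  # heavier penalty for misclassifying the minority (positive) class
    n_jobs=-1,
    random_state=0,
)
# clf.fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```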

To alleviate the problem, we propose two solutions: Balanced Random Forest (BRF) and Weighted Random Forest (WRF).

Balanced Random Forest

As proposed in Breiman (2001), random forest induces each constituent tree from a bootstrap sample of the training data. In learning extremely imbalanced data, there is a significant probability that a bootstrap sample contains few or even none of the minority class, resulting in a tree with poor performance for predicting the minority class. A naïve way of fixing this problem is to use a stratified bootstrap; i.e., sample with replacement from within each class.
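A minimal sketch of such a stratified bootstrap is given below, assuming NumPy and a label vector y; the function and variable names are illustrative, not from the paper.

```python
# Hedged sketch of the stratified bootstrap mentioned above: sample with
# replacement within each class so every bootstrap sample keeps some cases
# of every class (class priors are preserved).
import numpy as np

def stratified_bootstrap_indices(y, rng):
    idx = []
    for cls in np.unique(y):
        cls_idx = np.where(y == cls)[0]
        # resample each class to its original size, with replacement
        idx.append(rng.choice(cls_idx, size=len(cls_idx), replace=True))
    return np.concatenate(idx)

# Example: boot = stratified_bootstrap_indices(y, np.random.default_rng(0))
```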

This still does not solve the imbalance problem entirely. As recent research shows (e.g., Ling & Li (1998), Kubat & Matwin (1997), Drummond & Holte (2003)), for the tree classifier, artificially making class priors equal, either by down-sampling the majority class or over-sampling the minority class, is usually more effective with respect to a given performance measurement, and down-sampling seems to have an edge over over-sampling. However, down-sampling the majority class may result in loss of information, as a large part of the majority class is not used.

Random forest inspired us to ensemble trees induced from balanced down-sampled data. The Balanced Random Forest (BRF) algorithm works as follows (a code sketch follows the list):

1. For each iteration in random forest, draw a bootstrap sample from the minority class. Randomly draw the same number of cases, with replacement, from the majority class.
2. Induce a classification tree from the data to maximum size, without pruning. The tree is induced with the CART algorithm, with the following modification: at each node, instead of searching through all variables for the optimal split, only search through a set of mtry randomly selected variables.
3. Repeat the two steps above for the number of times desired.
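The following is a minimal sketch of the BRF procedure just described, assuming a binary problem with the minority class labelled 1 and the majority class labelled 0. scikit-learn's DecisionTreeClassifier stands in for an unpruned CART tree, and all function and parameter names are illustrative rather than the authors' implementation.

```python
# Hedged sketch of Balanced Random Forest (BRF) following the steps above.
# Assumptions: binary labels with minority = 1 and majority = 0; scikit-learn's
# DecisionTreeClassifier approximates an unpruned CART tree; names and
# defaults are illustrative, not the authors' code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_brf(X, y, n_trees=100, mtry="sqrt", seed=0):
    rng = np.random.RandomState(seed)
    minority_idx = np.where(y == 1)[0]
    majority_idx = np.where(y == 0)[0]
    n_min = len(minority_idx)
    trees = []
    for _ in range(n_trees):
        # Step 1: bootstrap the minority class and draw the same number of
        # majority cases with replacement, giving a balanced training set.
        boot_min = rng.choice(minority_idx, size=n_min, replace=True)
        boot_maj = rng.choice(majority_idx, size=n_min, replace=True)
        idx = np.concatenate([boot_min, boot_maj])
        # Step 2: grow an unpruned tree, searching only mtry randomly
        # selected variables at each split (max_features plays this role).
        tree = DecisionTreeClassifier(max_features=mtry,
                                      random_state=rng.randint(2**31 - 1))
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees

def predict_brf(trees, X):
    # Aggregation: majority vote over the ensemble, as in ordinary random forest.
    votes = np.stack([t.predict(X) for t in trees])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```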

