
Greedy Function Approximation: A Gradient Boosting Machine


February 24, 1999

Abstract

Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent "boosting" paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least-absolute-deviation, and Huber-M loss functions for regression, and multi-class logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are decision trees, and tools for interpreting such "TreeBoost" models are presented. Gradient boosting of decision trees produces competitive, highly robust, interpretable procedures for regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire 1996, and Friedman, Hastie, and Tibshirani 1998 are discussed.

1 Function estimation

In the function estimation problem one has a system consisting of a random "output" or "response" variable y and a set of random "input" or "explanatory" variables x = {x_1, ..., x_n}. Given a "training" sample {y_i, x_i}_1^N of known (y, x)-values, the goal is to find a function F^*(x) that maps x to y, such that over the joint distribution of all (y, x)-values, the expected value of some specified loss function \Psi(y, F(x)) is minimized:

F^*(x) = \arg\min_{F(x)} E_{y,x} \Psi(y, F(x)) = \arg\min_{F(x)} E_x [ E_y( \Psi(y, F(x)) ) | x ].   (1)

Frequently employed loss functions \Psi(y, F) include squared-error (y - F)^2 and absolute error |y - F| for y \in R^1 (regression), and negative binomial log-likelihood, log(1 + e^{-2yF}), when y \in {-1, 1} (classification).
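The loss criteria just listed can be written down directly. The short Python sketch below is only an added illustration, not part of the paper; the function names are ours, and y and F are NumPy arrays of responses and approximations.

import numpy as np

def squared_error(y, F):
    # regression loss (y - F)^2
    return (y - F) ** 2

def absolute_error(y, F):
    # regression loss |y - F|
    return np.abs(y - F)

def neg_binomial_log_likelihood(y, F):
    # classification loss log(1 + exp(-2 y F)) for y in {-1, +1}
    return np.log1p(np.exp(-2.0 * y * F))

Summing any of these over a training sample gives the empirical analogue of the expected loss in (1).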

1.1 Numerical optimization

A common procedure is to take F(x) to be a member of a parameterized class of functions F(x; P), where P = {P_1, P_2, ...} is a finite set of parameters. In this paper we focus on "additive" expansions of the form

F(x; P) = \sum_{m=0}^M \beta_m h(x; a_m),   (2)

where P = {\beta_m, a_m}_0^M. The generic functions h(x; a) in (2) are usually chosen to be simple functions of x with parameters a = {a_1, a_2, ...}. Expansions of the form (2) are at the heart of many function approximation methods such as neural networks (Rumelhart, Hinton, and Williams 1986), radial basis functions (Powell 1987), MARS (Friedman 1991), wavelets (Donoho 1993), and support vector machines (Vapnik 1995). Of special interest here is the case where these functions are characterized by small decision trees, such as those produced by CART (Breiman, Friedman, Olshen, and Stone 1983).

Choosing a parameterized model F(x; P) changes the function optimization problem to one of parameter optimization,

P^* = \arg\min_P \Phi(P) = \arg\min_P E_{y,x} \Psi(y, F(x; P)),   F^*(x) = F(x; P^*).   (3)

For most F(x; P) and \Psi, numerical optimization methods must be applied to solve (3). This involves expressing the solution for the parameters in the form

P^* = \sum_{m=0}^M p_m,   (4)

where p_0 is an initial guess and {p_m}_1^M are successive increments ("steps" or "boosts") defined by the optimization method.

1.2 Steepest-descent

Steepest-descent defines the increments {p_m}_1^M in (4) as follows. First the current gradient g_m is computed,

g_m = { [ \partial \Phi(P) / \partial P_j ]_{P = P_{m-1}} },   where   P_{m-1} = \sum_{i=0}^{m-1} p_i.

The step is taken to be p_m = -\rho_m g_m, where

\rho_m = \arg\min_\rho \Phi(P_{m-1} - \rho g_m).   (5)

The negative gradient -g_m is said to define the "steepest-descent" direction, and the last step (5) is called the "line search" along that direction.
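To make the recursion (3)-(5) concrete, here is a minimal Python sketch of parameter-space steepest descent for a generic empirical risk Phi. The finite-difference gradient and the use of scipy.optimize.minimize_scalar for the line search are our own choices for illustration, not prescriptions from the paper.

import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(Phi, p0, M=100, eps=1e-6):
    # Minimize Phi(P) by successive increments p_m = -rho_m * g_m (eqs. 4-5).
    # Phi : callable mapping a parameter vector to the risk to be minimized.
    # p0  : initial guess (the p_0 of eq. 4).
    P = np.asarray(p0, dtype=float)
    for _ in range(M):
        # central-difference approximation to the gradient g_m at the current P
        g = np.array([(Phi(P + eps * e) - Phi(P - eps * e)) / (2 * eps)
                      for e in np.eye(len(P))])
        # line search: rho_m = argmin_rho Phi(P - rho * g)   (eq. 5)
        rho = minimize_scalar(lambda r: Phi(P - r * g)).x
        P = P - rho * g
    return P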

2 Numerical optimization in function space

Here we take a "nonparametric" approach and apply numerical optimization in function space. That is, we consider F(x) evaluated at each point x to be a "parameter" and seek to minimize

\Phi(F(x)) = E_{y,x} \Psi(y, F(x)) = E_x [ E_y( \Psi(y, F(x)) ) | x ],

or equivalently, \phi(F(x)) = E_y [ \Psi(y, F(x)) | x ] at each individual x, directly with respect to F(x). In function space there are an infinite number of such parameters, but in data sets (considered below) there are only a finite number {F(x_i)}_1^N. Following the numerical optimization paradigm we take the solution to be

F^*(x) = \sum_{m=0}^M f_m(x),

where f_0(x) is an initial guess and {f_m(x)}_1^M are incremental functions ("steps" or "boosts") defined by the optimization method. For steepest-descent,

f_m(x) = -\rho_m g_m(x)   (6)

with

g_m(x) = [ \partial E_y[ \Psi(y, F(x)) | x ] / \partial F(x) ]_{F(x) = F_{m-1}(x)}   and   F_{m-1}(x) = \sum_{i=0}^{m-1} f_i(x).

Assuming sufficient regularity that one can interchange differentiation and integration, this becomes

g_m(x) = E_y [ \partial \Psi(y, F(x)) / \partial F(x) | x ]_{F(x) = F_{m-1}(x)}.   (7)

The multiplier \rho_m in (6) is given by the line search

\rho_m = \arg\min_\rho E_{y,x} \Psi(y, F_{m-1}(x) - \rho g_m(x)).   (8)

3 Finite data

The above approach breaks down when the joint distribution of (y, x) is represented by a finite data sample {y_i, x_i}_1^N. In that case E_y[ . | x ] cannot be evaluated accurately at each x_i, and even if it could, one would like to estimate F^*(x) at points other than those in the training sample. Smoothness must be imposed on the solution, for example by assuming a parameterized form such as (2). A "greedy stagewise" strategy does this one term at a time: for m = 1, 2, ..., M,

(\beta_m, a_m) = \arg\min_{\beta, a} \sum_{i=1}^N \Psi(y_i, F_{m-1}(x_i) + \beta h(x_i; a))   (9)

and then

F_m(x) = F_{m-1}(x) + \beta_m h(x; a_m).   (10)

In signal processing this strategy is called "matching pursuit" (Mallat and Zhang 1993), where \Psi(y, F) is squared-error loss and the {h(x; a_m)}_1^M are called basis functions, usually taken from an over-complete wavelet-like dictionary. In machine learning, (9) (10) is called "boosting", where y \in {-1, 1} and \Psi(y, F) is either an exponential loss criterion e^{-yF} (Freund and Schapire 1996, Schapire and Singer 1998) or negative binomial log-likelihood (Friedman, Hastie, and Tibshirani 1998; FHT98).
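As an illustration of the greedy stagewise recipe (9)-(10) in its simplest setting, the sketch below performs matching-pursuit-style fitting under squared-error loss over a finite dictionary of base functions evaluated on the training points. The dictionary-matrix formulation and all names are our own simplifying assumptions, added here for concreteness.

import numpy as np

def greedy_stagewise_fit(y, H, M=10):
    # Greedy stagewise additive fit (eqs. 9-10) under squared-error loss.
    # H : (N, K) matrix whose columns are candidate base functions h(x_i; a_k)
    #     evaluated on the training points (assumed non-zero columns).
    # Returns the fitted values F_M(x_i) and the chosen (beta_m, column) steps.
    F = np.zeros(len(y), dtype=float)
    steps = []
    for _ in range(M):
        r = y - F                                   # current residuals
        # for squared error, the best beta for column k is <r, h_k> / <h_k, h_k>
        betas = H.T @ r / np.sum(H ** 2, axis=0)
        losses = np.sum((r[:, None] - betas * H) ** 2, axis=0)
        k = np.argmin(losses)                       # joint (beta_m, a_m) minimizer (eq. 9)
        F = F + betas[k] * H[:, k]                  # update F_m (eq. 10)
        steps.append((betas[k], k))
    return F, steps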

The function h(x; a) is called a "weak learner" or "base learner". For a general loss \Psi(y, F) and/or base learner h(x; a), the solution to (9) is difficult to obtain. Given any approximation F_{m-1}(x) at the m-th iteration, the function \beta_m h(x; a_m) of (9) (10) is the best greedy step towards the minimizing solution F^*(x) (1), under the constraint that the step "direction" h(x; a_m) be a member of the parameterized class of functions h(x; a). It can thus be viewed as a steepest-descent step (6) under that constraint. By contrast, the unconstrained negative gradient (7) gives the best steepest-descent step direction at F_{m-1}(x).

One possibility is to choose that member of the parameterized class h(x; a) that is most parallel in the N-dimensional data space with the unconstrained negative gradient {-g_m(x_i)}_1^N, that is, the h(x; a) most highly correlated with -g_m(x) over the data. It can be obtained from the solution

a_m = \arg\min_{a, \beta} \sum_{i=1}^N [ -g_m(x_i) - \beta h(x_i; a) ]^2.   (11)

This constrained negative gradient h(x; a_m) is used in place of the unconstrained one -g_m(x) (7) in the steepest-descent strategy. Specifically, the line search (8) is performed,

\rho_m = \arg\min_\rho \sum_{i=1}^N \Psi(y_i, F_{m-1}(x_i) + \rho h(x_i; a_m)),

and the approximation updated,

F_m(x) = F_{m-1}(x) + \rho_m h(x; a_m).

Basically, instead of obtaining the solution under a smoothness constraint (9), the constraint is applied to the unconstrained (rough) solution {-g_m(x_i)}_1^N (7).

This permits the replacement of the difficult minimization problem (9) by least-squares minimization (11). Thus, for any h(x; a) for which a least-squares algorithm exists, one can use this approach to minimize any loss \Psi(y, F). This leads to the following (generic) algorithm using steepest-descent.

Algorithm 1: GradientBoost
1  F_0(x) = \arg\min_\rho \sum_{i=1}^N \Psi(y_i, \rho)
2  For m = 1 to M do:
3    \tilde{y}_i = - [ \partial \Psi(y_i, F(x_i)) / \partial F(x_i) ]_{F(x) = F_{m-1}(x)},   i = 1, N
4    a_m = \arg\min_{a, \beta} \sum_{i=1}^N [ \tilde{y}_i - \beta h(x_i; a) ]^2
5    \rho_m = \arg\min_\rho \sum_{i=1}^N \Psi(y_i, F_{m-1}(x_i) + \rho h(x_i; a_m))
6    F_m(x) = F_{m-1}(x) + \rho_m h(x; a_m)
7  endFor
end Algorithm
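A minimal Python sketch of Algorithm 1 follows. It is not the paper's implementation: a small scikit-learn regression tree is assumed as the base learner h(x; a), the line searches use scipy.optimize.minimize_scalar, and the function and parameter names are ours. The caller supplies the loss Psi and its negative derivative.

import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, loss, neg_grad, M=100, max_leaf_nodes=8):
    # loss(y, F)     : vectorized loss Psi(y, F); summed over the sample.
    # neg_grad(y, F) : pseudo-responses ~y_i = -dPsi/dF at F = F_{m-1} (line 3).
    # line 1: F_0(x) = argmin_rho sum_i Psi(y_i, rho)
    F0 = minimize_scalar(lambda r: loss(y, np.full_like(y, r, dtype=float)).sum()).x
    F = np.full(len(y), F0)
    learners, rhos = [], []
    for _ in range(M):
        ytil = neg_grad(y, F)                                                   # line 3
        h = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(X, ytil)   # line 4
        hx = h.predict(X)
        rho = minimize_scalar(lambda r: loss(y, F + r * hx).sum()).x            # line 5
        F = F + rho * hx                                                        # line 6
        learners.append(h)
        rhos.append(rho)
    return F0, learners, rhos

For example, passing loss = lambda y, F: 0.5 * (y - F) ** 2 and neg_grad = lambda y, F: y - F gives the residual-fitting behaviour derived in the next section.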

In the special case where y \in {-1, 1} and the loss function \Psi(y, F) depends on y and F only through their product, \Psi(y, F) = \Psi(yF), the analogy of boosting (9) (10) to steepest-descent minimization has been noted in the machine learning literature (Breiman 1997a, Ratsch, Onoda, and Muller 1998). In that case the product yF is called the "margin" and the steepest-descent is performed in the space of margin values, rather than function values. Duffy and Helmbold (1998) take a different strategy of casting regression into the framework of classification in the context of the AdaBoost algorithm (Freund and Schapire 1996).

4 Applications: additive modeling

In this section the gradient boosting strategy is applied to several popular loss criteria: least-squares (LS), least-absolute-deviation (LAD), Huber (M), and logistic binomial log-likelihood (L). The first serves as a "reality check", whereas the others lead to new boosting algorithms.

4.1 Least-squares regression

Here \Psi(y, F) = (y - F)^2 / 2. The pseudo-response in line 3 of Algorithm 1 is \tilde{y}_i = y_i - F_{m-1}(x_i). Thus, line 4 simply fits the current residuals, and the line search (line 5) produces the result \rho_m = \beta_m, where \beta_m is the minimizing \beta of line 4. Therefore, gradient boosting on squared-error loss produces the usual stagewise approach of iteratively fitting the current residuals:

Algorithm 2: LSBoost
F_0(x) = \bar{y}
For m = 1 to M do:
  \tilde{y}_i = y_i - F_{m-1}(x_i),   i = 1, N
  (\rho_m, a_m) = \arg\min_{a, \rho} \sum_{i=1}^N [ \tilde{y}_i - \rho h(x_i; a) ]^2
  F_m(x) = F_{m-1}(x) + \rho_m h(x; a_m)
endFor
end Algorithm
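As an illustration of Algorithm 2, the following sketch iterates least-squares fits to the current residuals, with a small scikit-learn regression tree standing in for the generic base learner h(x; a); that choice, and the function names, are assumptions of ours rather than part of the paper.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def ls_boost(X, y, M=100, max_leaf_nodes=8):
    # Algorithm 2 (LSBoost): iteratively fit the current residuals.
    F = np.full(len(y), y.mean())      # F_0(x) = ybar
    trees = []
    for _ in range(M):
        resid = y - F                  # ~y_i = y_i - F_{m-1}(x_i)
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(X, resid)
        F = F + tree.predict(X)        # F_m(x) = F_{m-1}(x) + rho_m h(x; a_m)
        trees.append(tree)
    return trees

Because the tree itself is fit by least squares, the joint (\rho_m, a_m) minimization of Algorithm 2 is absorbed into the tree fit, so no separate scaling step appears in this sketch.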

4.2 Least absolute deviation (LAD) regression

For the loss function \Psi(y, F) = |y - F|, one has

\tilde{y}_i = - [ \partial \Psi(y_i, F(x_i)) / \partial F(x_i) ]_{F(x) = F_{m-1}(x)} = sign(y_i - F_{m-1}(x_i)).   (12)

This implies that h(x; a) is fit (by least-squares) to the sign of the current residuals, and the line search (line 5) becomes

\rho_m = \arg\min_\rho \sum_{i=1}^N | y_i - F_{m-1}(x_i) - \rho h(x_i; a_m) |
       = \arg\min_\rho \sum_{i=1}^N | h(x_i; a_m) | * | (y_i - F_{m-1}(x_i)) / h(x_i; a_m) - \rho |
       = median_W { (y_i - F_{m-1}(x_i)) / h(x_i; a_m) }_1^N,   w_i = | h(x_i; a_m) |,   (13)

where median_W{.} denotes the weighted median with weights w_i. Inserting (12) (13) into Algorithm 1 yields an algorithm for least-absolute-deviation boosting, based on any base learner h(x; a).

In the special case where the base learner is an L-terminal node decision tree, the solution to the line search (line 5) can be improved upon. Such a tree partitions the x-space into L disjoint subsets (P_1, P_2, ..., P_L), and the gradient is correspondingly partitioned. Viewed in terms of the (function space) optimization problem here, the parameters are the function values {F(x_i)}_1^N, and a decision tree partitions them according to the terminal node membership of x_i. Each "subset" corresponds to applying (13) separately to the data in each terminal node of the decision tree,

\gamma_{lm} = median_W { (y_i - F_{m-1}(x_i)) / h(x_i; a_m) | x_i \in R_{lm} },   w_i = | h(x_i; a_m) |,   (14)

where {R_{lm}}_1^L are the subregions of x-space defined by the terminal nodes of the tree at the m-th iteration. Since a tree predicts a constant within each terminal node, the values of { h(x_i; a_m) | x_i \in R_{lm} } are all equal to the same value, h(x_i; a_m) = h_{lm} 1(x_i \in R_{lm}), so that (14) reduces to

\gamma_{lm} = (1 / h_{lm}) * median { y_i - F_{m-1}(x_i) | x_i \in R_{lm} },

and the update on line 6 of Algorithm 1 becomes simply

F_m(x) = F_{m-1}(x) + median { y_i - F_{m-1}(x_i) | x_i \in R_{lm} } * 1(x \in R_{lm}).
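The weighted median median_W used in (13) and (14) can be computed by sorting the candidate values and accumulating the weights until half of the total weight is reached. A minimal sketch (one common convention for ties and boundaries; the function name is ours):

import numpy as np

def weighted_median(values, weights):
    # Smallest v such that the cumulative weight of {values <= v}
    # reaches half of the total weight (weights assumed non-negative).
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

For (14) this routine would be called once per terminal node, on the ratios (y_i - F_{m-1}(x_i)) / h(x_i; a_m) for the observations in that node, with weights |h(x_i; a_m)|.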

At each iteration m, a decision tree is thus built to best predict the sign of the current residuals y_i - F_{m-1}(x_i), based on a least-squares criterion, and its terminal node values are replaced by the medians of the residuals within the corresponding regions:

Algorithm 3: LADTreeBoost
F_0(x) = median { y_i }_1^N
For m = 1 to M do:
  \tilde{y}_i = sign(y_i - F_{m-1}(x_i)),   i = 1, N
  {R_{lm}}_1^L = L-terminal node tree( { \tilde{y}_i, x_i }_1^N )
  \gamma_{lm} = median_{x_i \in R_{lm}} { y_i - F_{m-1}(x_i) },   l = 1, L
  F_m(x) = F_{m-1}(x) + \gamma_{lm} 1(x \in R_{lm})
endFor
end Algorithm

This algorithm is highly robust: the trees use only order information on the individual input variables, the pseudo-responses \tilde{y}_i (12) have only two values, \tilde{y}_i \in {-1, 1}, and the terminal node values (medians) use only order information on the residuals. An alternative approach would be to build a tree to directly minimize the loss criterion,

tree_m(x) = \arg\min_{L-node tree} \sum_{i=1}^N | y_i - F_{m-1}(x_i) - tree(x_i) |,
F_m(x) = F_{m-1}(x) + tree_m(x).

However, Algorithm 3 is much faster, since it uses least-squares to induce the trees.

4.3 M-Regression

M-regression techniques attempt resistance to long-tailed error distributions and outliers while maintaining high efficiency for normally distributed errors. We consider the Huber loss function (Huber 1964)

\Psi(y, F) = (1/2) (y - F)^2,                     |y - F| <= \delta
             \delta ( |y - F| - \delta / 2 ),     |y - F| > \delta.   (15)

Here the pseudo-response is

\tilde{y}_i = - [ \partial \Psi(y_i, F(x_i)) / \partial F(x_i) ]_{F(x) = F_{m-1}(x)}
            = y_i - F_{m-1}(x_i),                        |y_i - F_{m-1}(x_i)| <= \delta
              \delta * sign(y_i - F_{m-1}(x_i)),         |y_i - F_{m-1}(x_i)| > \delta,

and the line search becomes

\rho_m = \arg\min_\rho \sum_{i=1}^N \Psi(y_i, F_{m-1}(x_i) + \rho h(x_i; a_m))   (16)

with \Psi given by (15).
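The Huber criterion (15) and its pseudo-response can be coded directly; the sketch below is ours and simply transcribes the two cases above.

import numpy as np

def huber_loss(y, F, delta):
    # Psi(y, F) of (15): quadratic for small residuals, linear beyond delta
    r = y - F
    return np.where(np.abs(r) <= delta,
                    0.5 * r ** 2,
                    delta * (np.abs(r) - 0.5 * delta))

def huber_pseudo_response(y, F, delta):
    # ~y_i = -dPsi/dF at F = F_{m-1}: r_i if |r_i| <= delta, else delta * sign(r_i)
    r = y - F
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

Passing these (with delta held fixed, e.g. via lambda y, F: huber_loss(y, F, delta)) as the loss and neg_grad arguments of the gradient_boost sketch above yields a Huber-M boosting loop.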

The solution to (15) (16) can be obtained by standard iterative methods (see Huber 1964). The value of the transition point \delta determines which residuals are treated with absolute rather than squared-error loss. At each iteration, \delta_m is taken to be the \alpha-quantile of the distribution of { |y_i - F_{m-1}(x_i)| }_1^N, with the value of \alpha controlling the break-down point of the procedure. The "break-down" point is the fraction of observations that can be arbitrarily modified without seriously degrading the quality of the result.

With a decision tree as the base learner, the line search is again performed separately within each terminal node R_{lm}:

\gamma_{lm} = \arg\min_\gamma \sum_{x_i \in R_{lm}} \Psi(y_i, F_{m-1}(x_i) + \gamma h_{lm}),
or   \gamma_{lm} = \rho_{lm} h_{lm} = \arg\min_\gamma \sum_{x_i \in R_{lm}} \Psi(y_i, F_{m-1}(x_i) + \gamma),   (17)

and the update is

F_m(x) = F_{m-1}(x) + \gamma_{lm} 1(x \in R_{lm}).

The solution to (17) can be approximated by a single step of the standard iterative procedure (Huber 1964) starting at the median

\tilde{r}_{lm} = median_{x_i \in R_{lm}} { r_{m-1}(x_i) },

where { r_{m-1}(x_i) }_1^N are the current residuals

r_{m-1}(x_i) = y_i - F_{m-1}(x_i).

The approximation is

\gamma_{lm} = \tilde{r}_{lm} + (1 / N_{lm}) \sum_{x_i \in R_{lm}} sign( r_{m-1}(x_i) - \tilde{r}_{lm} ) * min( \delta_m, | r_{m-1}(x_i) - \tilde{r}_{lm} | ).
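This one-step terminal-node approximation translates directly into a few lines of code. The sketch below uses our own naming, with delta playing the role of the transition point \delta_m for the current iteration.

import numpy as np

def huber_node_update(residuals_in_node, delta):
    # One-step approximation to gamma_lm: start at the node median ~r_lm and take
    # a single step of the standard iterative M-estimation procedure (Huber 1964).
    r = np.asarray(residuals_in_node, dtype=float)
    r_med = np.median(r)
    step = np.sign(r - r_med) * np.minimum(delta, np.abs(r - r_med))
    return r_med + step.mean()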

