
Lagrangian Methods for Constrained Optimization




Regional and functional constraints

Throughout this book we have considered optimization problems that were subject to constraints. These include the problem of allocating a finite amount of bandwidth to maximize total user benefit (page 17), the social welfare maximization problem (page 129) and the time of day pricing problem (page 213). We make frequent use of the Lagrangian method to solve these problems. This appendix provides a tutorial on the method. Take, for example,

NETWORK:  maximize_{x ≥ 0} Σ_{r=1}^n w_r log x_r,  subject to Ax ≤ C,

posed on page 271. This is an example of the generic constrained optimization problem

P:  maximize_{x ∈ X} f(x),  subject to g(x) = b.

Here f is to be maximized subject to constraints that are of two types.

The constraint x ∈ X is a regional constraint. For example, it might be x ≥ 0. The constraint g(x) = b is a functional constraint. Sometimes the functional constraint is an inequality constraint, like g(x) ≤ b. But if it is, we can always add a slack variable, z, and rewrite it as the equality constraint g(x) + z = b, redefining the regional constraint as x ∈ X and z ≥ 0. To illustrate things we shall use the NETWORK problem with just one resource constraint:

P1:  maximize_{x ≥ 0} Σ_{i=1}^n w_i log x_i,  subject to Σ_{i=1}^n x_i = b,

where b is a positive number.

The Lagrangian method

The solution of a constrained optimization problem can often be found by using the so-called Lagrangian method. We define the Lagrangian as

L(x, λ) = f(x) + λ(b − g(x)).

For P1 it is

L1(x, λ) = Σ_{i=1}^n w_i log x_i + λ(b − Σ_{i=1}^n x_i).

In general, the Lagrangian is the sum of the original objective function and a term that involves the functional constraint and a Lagrange multiplier, λ.
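
To make the definitions concrete, here is a minimal numerical sketch of the Lagrangian for P1. The weights w and the constraint level b below are illustrative values chosen for this sketch, not taken from the book:

```python
import numpy as np

# Illustrative data: three users with weights w, one resource of size b.
w = np.array([1.0, 2.0, 3.0])
b = 6.0

def f(x):
    """Objective of P1: sum_i w_i * log(x_i)."""
    return np.sum(w * np.log(x))

def g(x):
    """Functional constraint of P1: sum_i x_i (required to equal b)."""
    return np.sum(x)

def lagrangian(x, lam):
    """L1(x, lambda) = f(x) + lambda * (b - g(x))."""
    return f(x) + lam * (b - g(x))
```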

Suppose we ignore the functional constraint and consider the problem of maximizing the Lagrangian, subject only to the regional constraint. This is often an easier problem than the original one. The value of x that maximizes L(x, λ) depends on the value of λ. Let us denote this optimizing value of x by x(λ). For example, since L1(x, λ) is a concave function of x, it has a unique maximum at a point where it is stationary with respect to changes in x, i.e., where

∂L1/∂x_i = w_i/x_i − λ = 0.

Thus x_i(λ) = w_i/λ. Note that x_i(λ) > 0 for λ > 0, and so the solution lies in the interior of the feasible set.

Think of λ as a knob that we can turn to adjust the value of x. Imagine turning this knob until we find a value of λ, say λ = λ*, such that the functional constraint is satisfied, i.e., g(x(λ*)) = b. Let x* = x(λ*). Our claim is that x* solves P.
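
Continuing the sketch above, we can turn the knob numerically: for each λ the inner maximization has the closed form x_i(λ) = w_i/λ, and for P1 the constraint g(x(λ)) = Σ_i w_i/λ = b pins down λ* exactly:

```python
def x_of_lam(lam):
    """Maximizer of L1(., lam) over x >= 0: x_i = w_i / lam (for lam > 0)."""
    return w / lam

# Turn the knob: find lam such that g(x(lam)) = b.
# Since g(x(lam)) = sum(w) / lam, the exact answer is lam* = sum(w) / b.
lam_star = np.sum(w) / b
x_star = x_of_lam(lam_star)

print(lam_star)        # 1.0 for the data above
print(x_star)          # [1. 2. 3.], which sums to b = 6
print(g(x_star), b)    # functional constraint satisfied
```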

This is the so-called Lagrangian Sufficiency Theorem, which we state and prove shortly. First note that, in our example, g(x(λ)) = Σ_i w_i/λ. Thus choosing λ* = Σ_i w_i / b, we have g(x(λ*)) = b. The next theorem shows that x*_i = x_i(λ*) = w_i b / Σ_j w_j is optimal.

Theorem 5 (Lagrangian Sufficiency Theorem). Suppose there exist x* ∈ X and λ*, such that x* maximizes L(x, λ*) over all x ∈ X, and g(x*) = b. Then x* solves P.

Proof.

max_{x ∈ X, g(x) = b} f(x) = max_{x ∈ X, g(x) = b} [f(x) + λ*(b − g(x))]
                           ≤ max_{x ∈ X} [f(x) + λ*(b − g(x))]
                           = f(x*) + λ*(b − g(x*))
                           = f(x*).

Equality in the first line holds because we have simply added 0 on the right-hand side. The inequality in the second line holds because we have enlarged the set over which the maximization takes place. In the third line we use the fact that x* maximizes L(x, λ*), and in the fourth line we use g(x*) = b. Also, x* is feasible for P, in that it satisfies the regional and functional constraints.
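
As a rough numerical sanity check on the theorem (an illustration, not part of the book's argument), we can confirm that no randomly drawn feasible point beats x* for the data used above:

```python
rng = np.random.default_rng(0)

# Draw random feasible points: positive vectors rescaled to sum to b.
for _ in range(10000):
    x = rng.random(3)
    x = b * x / x.sum()          # now g(x) = b and x > 0
    assert f(x) <= f(x_star) + 1e-9, "x* should be optimal"

print("f(x*) =", f(x_star))     # no feasible sample did better
```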

Hence x* is optimal.

Multiple constraints

If g and b are vectors, so that g(x) = b expresses more than one constraint, then we would write

L(x, λ) = f(x) + λ⊤(b − g(x)),

where the vector λ now has one component for each constraint. For example, the Lagrangian for NETWORK is

L(x, λ) = Σ_{r=1}^n w_r log x_r + Σ_j λ_j (C_j − Σ_r A_{jr} x_r − z_j),

where z_j is the slack variable for the jth constraint.

When does the method work?

The Lagrangian method is based on a sufficiency theorem. That means the method can work, but need not work. Our approach is to write down the Lagrangian, maximize it, and then see if we can choose λ and a maximizing x so that the conditions of the Lagrangian Sufficiency Theorem are satisfied. If this works, then we are happy. If it does not work, then too bad. We must try to solve our problem some other way.
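
In code, the vector-valued version just replaces the scalar multiplier with a vector. Here is a minimal sketch for a NETWORK-like instance; the routing matrix A and the capacities C are made-up illustrative values, and w is reused from the sketch above:

```python
# Illustrative NETWORK instance: 3 routes, 2 links.
A = np.array([[1.0, 1.0, 0.0],    # link 1 is used by routes 1 and 2
              [0.0, 1.0, 1.0]])   # link 2 is used by routes 2 and 3
C = np.array([4.0, 5.0])

def network_lagrangian(x, z, lam):
    """f(x) + sum_j lam_j * (C_j - (A x)_j - z_j), with slacks z >= 0."""
    return np.sum(w * np.log(x)) + lam @ (C - A @ x - z)

x = np.array([1.0, 2.0, 3.0])
z = C - A @ x                     # slacks chosen so the constraints hold exactly
print(network_lagrangian(x, z, np.array([0.5, 0.5])))  # penalty term vanishes
```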

The method worked for P1 because we could find an appropriate λ*. To see that this is so, note that as λ increases from 0 to ∞, g(x(λ)) decreases from ∞ to 0. Moreover, g(x(λ)) is continuous in λ. Therefore, given positive b, there must exist a λ for which g(x(λ)) = b. For this value of λ, which we denote λ*, and for x* = x(λ*), the conditions of the Lagrangian Sufficiency Theorem are satisfied.

To see that the Lagrangian method does not always work, consider the problem

P2:  minimize −x,  subject to x ≥ 0 and √x = 2.

This problem cannot be solved by the Lagrangian method. If we minimize

L(x, λ) = −x + λ(2 − √x)

over x ≥ 0, we get a minimum value of −∞, no matter what we take as the value of λ. This is clearly not the right answer. So the Lagrangian method fails to work. However, the method does work for

P′2:  minimize −x,  subject to x ≥ 0 and x² = 16.

Now L(x, λ) = −x + λ(16 − x²). If we take λ = −1/8 then ∂L/∂x = −1 + x/4, and so x* = 4.
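
A quick numerical look at the two formulations (again only an illustrative sketch): for P2 the Lagrangian keeps decreasing in x whatever value λ takes, while for P′2 the choice λ = −1/8 makes x = 4 the minimizer.

```python
def L_P2(x, lam):
    """Lagrangian of P2: -x + lam * (2 - sqrt(x))."""
    return -x + lam * (2.0 - np.sqrt(x))

def L_P2prime(x, lam):
    """Lagrangian of P2': -x + lam * (16 - x**2)."""
    return -x + lam * (16.0 - x**2)

xs = np.linspace(0.0, 1000.0, 100001)
print(L_P2(xs, 5.0).min())                    # keeps falling as x grows: unbounded below
print(xs[np.argmin(L_P2prime(xs, -0.125))])   # approximately 4.0, as the theory says
```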

Note that P2 and P′2 are really the same problem, except that the functional constraint is expressed differently. Thus whether or not the Lagrangian method will work can depend upon how we formulate the problem.

We can say something more about when the Lagrangian method will work. Let P(b) be the problem: minimize f(x), such that x ∈ X and g(x) = b. Define

φ(b) = min { f(x) : x ∈ X, g(x) = b }.

Then the Lagrangian method works for P(b*) if and only if there is a line that is tangent to φ(b) at b* and lies completely below φ(b). This happens if φ(b) is a convex function of b, but this is a difficult condition to check. A set of sufficient conditions that are easier to check is provided in the following theorem. These conditions do not hold in P2, as g(x) = √x is not a convex function of x. In P′2 the sufficient conditions are satisfied.

Theorem 6. If f and g are convex functions, X is a convex set, and x* is an optimal solution to P, then there exist Lagrange multipliers λ ∈ R^m such that L(x*, λ) ≤ L(x, λ) for all x ∈ X.

Recall that f is a convex function if for all x1, x2 ∈ X and δ ∈ [0, 1], we have f(δx1 + (1 − δ)x2) ≤ δf(x1) + (1 − δ)f(x2).
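
The convexity definition is easy to test on sample points. The sketch below checks it for g(x) = x² (convex, which is why P′2 qualifies) and exhibits a violation for g(x) = √x (which is why P2 does not):

```python
def violates_convexity(gfun, x1, x2, d):
    """True if gfun fails g(d*x1 + (1-d)*x2) <= d*g(x1) + (1-d)*g(x2)."""
    return gfun(d * x1 + (1 - d) * x2) > d * gfun(x1) + (1 - d) * gfun(x2) + 1e-12

x1, x2, d = 0.0, 16.0, 0.5
print(violates_convexity(lambda x: x**2, x1, x2, d))   # False: x^2 is convex
print(violates_convexity(np.sqrt, x1, x2, d))          # True: sqrt is not convex
```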

X is a convex set if for all x1, x2 ∈ X and δ ∈ [0, 1], we have δx1 + (1 − δ)x2 ∈ X. Furthermore, f is a concave function if −f is a convex function.

The proof of the theorem proceeds by showing that φ(b) is convex. This implies that for each b* there is a tangent hyperplane to φ(b) at b*, with the graph of φ(b) lying entirely above it. This uses the so-called supporting hyperplane theorem for convex sets, which is geometrically obvious, but takes some work to prove. Note that the equation of the hyperplane will be y = φ(b*) + λ(b − b*) for some multipliers λ. This λ can be shown to be the required vector of Lagrange multipliers, and the picture below gives some geometric intuition as to why the Lagrange multipliers exist and why these λs give the rate of change of the optimum φ(b) with b.

[Figure: the set of feasible (g, f) pairs, the graph of φ(b), and the supporting line y = φ(b*) + λ(b − b*) touching it at b*.]

Shadow prices

The maximizing x, the appropriate value of λ and the maximal value of f all depend on b. What happens if b changes by a small amount?

Let the maximizing x be x*(b). Suppose the Lagrangian method works and let λ(b) denote the appropriate value of λ. As above, let φ(b) denote the maximal value of f. Then

φ(b) = f(x*(b)) + λ[b − g(x*(b))].

So, simply differentiating with respect to b, we have

dφ/db = Σ_{i=1}^n (∂x*_i/∂b) ∂/∂x*_i { f(x*(b)) + λ[b − g(x*(b))] }
      + ∂/∂b { f(x*(b)) + λ[b − g(x*(b))] }
      + (∂λ/∂b) ∂/∂λ { f(x*(b)) + λ[b − g(x*(b))] }
      = 0 + λ + [b − g(x*(b))] ∂λ/∂b,

where the first term on the right-hand side is 0 because L(x, λ) is stationary with respect to x at x = x*, and the third term is zero because b − g(x*(b)) = 0. Thus dφ/db = λ, and λ can be interpreted as the rate at which the maximized value of f increases with b, for small increases around b. For this reason, the Lagrange multiplier is also called a shadow price, the idea being that if b increases to b + ε then we should be prepared to pay λε for the increase we receive.

It can happen that at the optimum none, one, or several constraints are active.
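
For P1 the shadow-price interpretation can be checked by finite differences (an illustrative sketch reusing w and b from above): φ(b) = Σ_i w_i log(w_i b / Σ_j w_j), and its numerical slope at b should match λ* = Σ_i w_i / b.

```python
def phi(b_val):
    """Maximal value of P1 at constraint level b_val: f at x* = w * b / sum(w)."""
    x_opt = w * b_val / np.sum(w)
    return np.sum(w * np.log(x_opt))

eps = 1e-6
slope = (phi(b + eps) - phi(b - eps)) / (2 * eps)
print(slope)             # approximately 1.0 ...
print(np.sum(w) / b)     # ... which equals lambda* = sum(w)/b
```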

For example, with constraints g1(x) ≤ b1 and g2(x) ≤ b2 it can happen that at the optimum g1(x*) = b1 and g2(x*) < b2. In this case we will find λ2 = 0. This makes sense. The second constraint is not limiting the maximization of f, and so the shadow price of b2 is zero.

The dual problem

By similar reasoning to that we used in the proof of the Lagrangian Sufficiency Theorem, we have that for any λ,

φ(b) = max_{x ∈ X, g(x) = b} f(x) = max_{x ∈ X, g(x) = b} [f(x) + λ(b − g(x))] ≤ max_{x ∈ X} [f(x) + λ(b − g(x))].

The right-hand side provides an upper bound on φ(b). We make this upper bound as tight as possible by minimizing over λ, so that we have

φ(b) ≤ min_λ max_{x ∈ X} [f(x) + λ(b − g(x))].

The right-hand side above defines an optimization problem, called the dual problem. The original problem is called the primal problem. If the primal can be solved by the Lagrangian method then the inequality above is an equality, and the solution to the dual problem is just λ(b).
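
For P1 the inner maximization has a closed form, so the dual function can be sketched directly (illustrative, with the same w, b and phi as before): max_{x ≥ 0} L1(x, λ) is attained at x_i = w_i/λ, giving d(λ) = Σ_i w_i log(w_i/λ) + λb − Σ_i w_i, and minimizing d over λ recovers both λ* and the primal optimum.

```python
def dual(lam):
    """Dual function of P1: max over x >= 0 of L1(x, lam), in closed form."""
    return np.sum(w * np.log(w / lam)) + lam * b - np.sum(w)

lams = np.linspace(0.1, 5.0, 4901)
vals = np.array([dual(l) for l in lams])
lam_min = lams[np.argmin(vals)]
print(lam_min)                  # approximately lambda* = sum(w)/b = 1.0
print(dual(lam_min), phi(b))    # dual optimum matches the primal optimum phi(b)
```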

