
UNCONSTRAINED MULTIVARIABLE OPTIMIZATION




Methods Using Function Values Only
Methods That Use First Derivatives
Newton's Method
Quasi-Newton Methods
References
Supplementary References
Problems

THE NUMERICAL OPTIMIZATION of general nonlinear multivariable objective functions requires efficient and robust techniques. Efficiency is important because these problems require an iterative solution procedure, and trial and error becomes impractical for more than three or four variables.

Robustness (the ability to achieve a solution) is desirable because a general nonlinear function is unpredictable in its behavior; there may be relative maxima or minima, saddle points, regions of convexity, concavity, and so on. In some regions the optimization algorithm may progress very slowly toward the optimum, requiring excessive computer time. Fortunately, we can draw on extensive experience in testing nonlinear programming algorithms for unconstrained functions to evaluate various approaches proposed for the optimization of such functions.

In this chapter we discuss the solution of the unconstrained optimization problem:

Find: x = [x1 x2 ... xn]^T that minimizes f(x)

Most effective iterative procedures alternate between two phases in the optimization. At iteration k, where the current point is x^k, they do the following:

1. Choose a search direction s^k.
2. Minimize along that direction (usually inexactly) to find a new point

x^(k+1) = x^k + α^k s^k

where α^k is a positive scalar called the step size. The step size is determined by an optimization process called a line search, as described in Chapter 5.

In addition to 1 and 2, an algorithm must specify

3. The initial starting vector x^0 = [x1^0 x2^0 ... xn^0]^T, and
4. The convergence criteria for termination.

From a given starting point, a search direction is determined, and f(x) is minimized in that direction. The search stops based on some criteria, and then a new search direction is determined, followed by another line search. The line search can be carried out to various degrees of precision. For example, we could use a simple successive doubling of the step size as a screening method until we detect the optimum has been bracketed.
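
To make the screening idea concrete, the sketch below brackets the one-dimensional minimum along a given search direction by successively doubling the step size. It is a minimal illustration of the doubling strategy just mentioned, not an algorithm prescribed by the text; the test function and parameter values are assumptions chosen only for demonstration.

```python
import numpy as np

def bracket_by_doubling(f, x, s, alpha0=1e-2, max_doublings=50):
    """Screening line search: march along direction s from x, doubling the
    step size, until f(x + alpha*s) rises; the 1-D minimum is then bracketed
    by the last few trial step sizes."""
    alphas = [0.0, alpha0]
    values = [f(x), f(x + alpha0 * s)]
    for _ in range(max_doublings):
        if values[-1] >= values[-2]:
            # f decreased and then rose: the minimum lies between the step
            # size two trials back and the current one.
            lo = alphas[-3] if len(alphas) >= 3 else alphas[0]
            return lo, alphas[-1]
        alphas.append(alphas[-1] * 2.0)
        values.append(f(x + alphas[-1] * s))
    return alphas[-2], alphas[-1]  # no bracket found within the step budget

# Illustrative use: a two-variable quadratic, searching along the x1 axis.
f = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2
lo, hi = bracket_by_doubling(f, np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print(lo, hi)  # an interval containing the 1-D minimizer alpha = 3
```

A more precise line search (Chapter 5) would then be applied inside the returned interval.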

At this point the screening search can be terminated and a more sophisticated method employed to yield a higher degree of accuracy. In any event, refer to the techniques discussed in Chapter 5 for ways to carry out the line search. The NLP (nonlinear programming) methods to be discussed in this chapter differ mainly in how they generate the search directions. Some nonlinear programming methods require information about derivative values, whereas others do not use derivatives and rely solely on function evaluations.
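
As a sketch of this two-phase structure, the code below assembles a generic minimizer from a pluggable direction generator and an inexact (Armijo backtracking) line search. Steepest descent is used purely as one example of a derivative-based direction choice; the function names, tolerances, and test problem are illustrative assumptions, not the chapter's specific algorithms.

```python
import numpy as np

def minimize_iterative(f, grad, x0, direction_fn, tol=1e-6, max_iter=200):
    """Generic two-phase framework: at each iteration choose a search
    direction, then take a step found by an (inexact) line search."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        s = direction_fn(x)                               # phase 1: direction
        alpha = backtracking_line_search(f, grad, x, s)   # phase 2: step size
        x_new = x + alpha * s
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new, k + 1                           # step-size criterion
        x = x_new
    return x, max_iter

def backtracking_line_search(f, grad, x, s, alpha=1.0, rho=0.5, c=1e-4):
    """Inexact line search: shrink alpha until the Armijo (sufficient
    decrease) condition holds."""
    f0, slope = f(x), grad(x) @ s
    while f(x + alpha * s) > f0 + c * alpha * slope:
        alpha *= rho
        if alpha < 1e-12:
            break
    return alpha

# Illustrative use with steepest descent as the direction generator.
f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)])
x_star, iters = minimize_iterative(f, grad, [0.0, 0.0], lambda x: -grad(x))
print(x_star, iters)  # approaches the minimizer (1, -2)
```

Swapping `direction_fn` is all that distinguishes the methods discussed later in this chapter within this framework.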

Furthermore, finite difference substitutes can be used in lieu of derivatives, as explained in Section . For differentiable functions, methods that use analytical derivatives almost always use less computation time and are more accurate, even if finite difference approximations are used. Symbolic codes can be employed to obtain analytical derivatives, but this may require more computer time than finite differencing to get derivatives. For nonsmooth functions, a function-values-only method may be more successful than a derivative-based method.
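
The following is a minimal sketch of a forward-difference substitute for the gradient, illustrating the finite-difference idea mentioned above; the perturbation size h and the test function are assumed values chosen only for demonstration.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference approximation to the gradient of f at x.
    Each component costs one additional function evaluation."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        x_step = x.copy()
        x_step[i] += h
        g[i] = (f(x_step) - f0) / h
    return g

# Compare against the analytical gradient of a simple quadratic.
f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
print(fd_gradient(f, [0.0, 0.0]))                          # approx. [-2.0, 16.0]
print(np.array([2.0 * (0.0 - 1.0), 8.0 * (0.0 + 2.0)]))    # exact   [-2.0, 16.0]
```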

We first describe some simple nonderivative methods and then present a series of methods that use derivative information. We also show how the nature of the objective function influences the effectiveness of the particular optimization algorithm.

METHODS USING FUNCTION VALUES ONLY

Some methods do not require the use of derivatives in determining the search direction. Under some circumstances the methods described in this section can be used effectively, but they may be inefficient compared with methods discussed in subsequent sections.

They have the advantage of being simple to understand and execute.

Random Search

A random search method simply selects a starting vector x^0, evaluates f(x) at x^0, and then randomly selects another vector x^1 and evaluates f(x) at x^1. In effect, both a search direction and step length are chosen simultaneously. After one or more stages, the value of f(x^k) is compared with the best previous value of f(x) from among the previous stages, and the decision is made to continue or terminate the procedure.

Variations of this form of random search involve randomly selecting a search direction and then minimizing (possibly by random steps) in that search direction as a series of cycles. Clearly, the optimal solution can be obtained with a probability of 1 only as k → ∞, but as a practical matter, if the objective function is quite flat, a suboptimal solution may be quite acceptable. Even though the method is inefficient insofar as function evaluations are concerned, it may provide a good starting point for another method.
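
A minimal sketch of the basic random search just described is given below, assuming trial points are drawn uniformly from a box around the best point found so far; the box half-width, sample budget, and test function are illustrative assumptions rather than a recommended design.

```python
import numpy as np

def random_search(f, x0, n_samples=5000, radius=2.0, seed=0):
    """Basic random search: repeatedly draw a random trial point and keep it
    only if it improves on the best objective value found so far."""
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, dtype=float)
    f_best = f(x_best)
    for _ in range(n_samples):
        x_trial = x_best + rng.uniform(-radius, radius, size=x_best.size)
        f_trial = f(x_trial)
        if f_trial < f_best:
            x_best, f_best = x_trial, f_trial
    return x_best, f_best

# Illustrative use: many function evaluations, but no derivatives needed.
f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
print(random_search(f, [5.0, 5.0]))
```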

You might view random search as an extension of the case study method. Refer to Dixon and James (1980) for some practical algorithms.

Grid Search

Methods of experimental design discussed in most basic statistics books can be applied equally well to minimizing f(x) (see Chapter 2). You evaluate a series of points about a reference point selected according to some type of design such as the ones shown in Figure (for an objective function of two variables). Next you move to the point that improves the objective function the most, and repeat.
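
The sketch below illustrates this grid search idea, assuming a 3 x 3 factorial design around the current reference point and a grid spacing that shrinks whenever no surrounding point improves the objective. The design, shrink factor, and test function are assumptions for demonstration, not the specific designs of the figure referred to above.

```python
import numpy as np
from itertools import product

def grid_search(f, x0, spacing=1.0, shrink=0.5, tol=1e-4, max_iter=500):
    """Grid (pattern) search: evaluate f on a factorial design centered on
    the current point, move to the best neighbor, and shrink the grid
    whenever the center is already the best point."""
    x = np.asarray(x0, dtype=float)
    f_x = f(x)
    for _ in range(max_iter):
        # Offsets of -spacing, 0, +spacing along each axis (3 x 3 for n = 2).
        candidates = [x + spacing * np.array(d)
                      for d in product((-1, 0, 1), repeat=x.size)]
        values = [f(c) for c in candidates]
        i_best = int(np.argmin(values))
        if values[i_best] < f_x:
            x, f_x = candidates[i_best], values[i_best]   # move to best point
        else:
            spacing *= shrink                             # refine the grid
            if spacing < tol:
                break
    return x, f_x

# Illustrative use on the same two-variable quadratic.
f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
print(grid_search(f, [5.0, 5.0]))  # approaches the minimizer (1, -2)
```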

