
Monte Carlo Sampling-Based Methods for Stochastic Optimization

Tito Homem-de-Mello
School of Business, Universidad Adolfo Ibáñez, Santiago, Chile

Güzin Bayraksan
Integrated Systems Engineering, The Ohio State University, Columbus, Ohio

22, 2014

Abstract

This paper surveys the use of Monte Carlo sampling-based methods for stochastic optimization problems. Such methods are required when, as often happens in practice, the model involves quantities such as expectations and probabilities that cannot be evaluated exactly. While estimation procedures via sampling are well studied in statistics, the use of such methods in an optimization context creates new challenges, such as ensuring convergence of optimal solutions and optimal values, testing optimality conditions, choosing appropriate sample sizes to balance the effort between optimization and estimation, and many other issues.





Much work has been done in the literature to address these questions. The purpose of this paper is to give an overview of some of that work, with the goal of introducing the topic to students and researchers and providing a practical guide for someone who needs to solve a stochastic optimization problem with sampling.

1 Introduction

Uncertainty is present in many aspects of decision making. Critical data, such as future demands for a product or future interest rates, may not be available at the time a decision must be made. Unexpected events such as machine failures or disasters may occur, and a robust planning process must take such possibilities into account.

Uncertainty may also arise due to variability in data, as happens for example in situations involving travel times on highways. Variability also produces uncertainty in physical and engineering systems, in which case the goal becomes, for example, to provide a reliable design. Such problems fall into the realm of stochastic optimization, an area that comprises modeling and methodology for optimizing the performance of systems while taking the uncertainty explicitly into account. A somewhat generic formulation of a stochastic optimization problem has the form

min_{x ∈ X} { g_0(x) := E[G_0(x, ξ)] : E[G_k(x, ξ)] ≤ 0, k = 1, ..., K },   (SP)

where G_k, k = 0, 1, ..., K, are extended real-valued functions whose inputs are the decision vector x and a random vector ξ. We use K = 0 to denote (SP) with only deterministic constraints. Typically, there are finitely many stochastic constraints (K < ∞), but this does not need to be the case; in fact, there can be uncountably many stochastic constraints (see Section 6). The set of deterministic constraints x must satisfy is denoted by X ⊆ R^{d_x}, and Ξ ⊆ R^{d_ξ} denotes the support of ξ, where d_x and d_ξ are the dimensions of the vectors x and ξ, respectively. We assume that ξ has a known distribution P that is independent of x, and that the expectations in (SP), taken with respect to the distribution of ξ, are well defined and finite for all x ∈ X.

A wide variety of problems can be cast as (SP), depending on K, X and G_k, k = 0, 1, ..., K. For example, in a two-stage stochastic linear program with recourse, K = 0, X = {x : Ax = b, x ≥ 0}, and G_0(x, ξ) = c⊤x + h(x, ξ), where h(x, ξ) is the optimal value of the linear program

h(x, ξ) = min_y { q⊤y : Wy = r - Tx, y ≥ 0 },

and ξ is a random vector comprised of the random elements of q, W, r and T. By contrast, in a stochastic program with a single probabilistic constraint (e.g., P(Ãx ≥ b̃) ≤ α), we have K = 1, G_0(x, ξ) = c⊤x and G_1(x, ξ) = I{Ãx ≥ b̃} - α, where I{E} denotes the indicator function that takes value 1 if the event E happens and 0 otherwise, and α ∈ (0, 1) is a desired probability level.

In this case, ξ is comprised of the random elements of Ã and b̃. Here, the decision maker requires that the relationship Ãx ≥ b̃ be satisfied with probability no more than α. Several other variants of (SP) exist; for example, the objective function may be expressed not as a classical expectation but in a different form, such as a value-at-risk or conditional value-at-risk involving the random variable G_0(x, ξ), or multiple stages can be considered (see the discussion of multistage stochastic linear programs later in the paper). There are also cases where the distribution of the underlying random vector ξ depends on the decision variables x even if such dependence is not known explicitly; we shall discuss some of these variations later.
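As an illustration of how sampling enters such models, the expectation E[G_1(x, ξ)] = P(Ãx ≥ b̃) - α in the probabilistic constraint above can be estimated by an indicator average. The sketch below is purely illustrative: the distributions of Ã and b̃, the candidate point x, and the level α are all made-up data, not taken from the paper.

```python
import random

random.seed(42)

# Monte Carlo check of a single probabilistic constraint P(Ax >= b) <= alpha.
# All data below (distributions, candidate x, alpha) are illustrative.
alpha = 0.1
x = [1.0, 2.0]

def sample_xi():
    # random constraint data: A ~ N(1, 0.2) componentwise, b ~ N(5, 0.5)
    a = [random.gauss(1.0, 0.2) for _ in x]
    b = random.gauss(5.0, 0.5)
    return a, b

# estimate E[G1(x, xi)] = P(A.x >= b) - alpha by an average of indicators
N = 100_000
hits = 0
for _ in range(N):
    a, b = sample_xi()
    if sum(ai * xj for ai, xj in zip(a, x)) >= b:
        hits += 1

p_hat = hits / N
print(f"estimated P = {p_hat:.4f}, constraint satisfied: {p_hat - alpha <= 0}")
```

Note that the estimate p_hat is itself random; quantifying its error (and what that means for declaring x feasible) is one of the issues the survey addresses.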

For now, we assume that (SP) is the problem of interest and that K = 0 in (SP), so that we only have X, the set of deterministic constraints in (SP); the case of stochastic constraints is dealt with in Section 6. To simplify notation, we will drop the index 0 from the objective function in (SP). We will refer to (SP) as the true optimization problem (as opposed to the approximating problems to be discussed in the sequel).

Benefiting from advances in computational power as well as from new theoretical developments, the area of stochastic optimization has evolved considerably in the past few years, with many recent applications in areas such as energy planning, national security, supply chain management, health care, finance, transportation, revenue management, and many others.

New applications bring new challenges, particularly concerning the introduction of more random variables to make the models more realistic. In such situations, it is clearly impossible to enumerate all the possible outcomes, which precludes the computation of the expectations in (SP). As a very simple example, consider a model with d independent random variables, each with only two possible alternatives; the total number of scenarios is thus 2^d, and so even for moderate values of d it becomes impractical to take all possible outcomes into account.
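The arithmetic behind this counting argument is easy to reproduce; the evaluation rate below is a made-up figure, used only to give a sense of scale.

```python
# d independent binary random variables produce 2**d joint scenarios.
# Even at a (hypothetical, generous) rate of one billion scenario
# evaluations per second, full enumeration quickly becomes impossible.
rate = 1e9  # hypothetical evaluations per second
for d in (10, 20, 40, 80):
    n = 2 ** d
    print(f"d={d}: {n:.3e} scenarios, ~{n / rate:.3e} seconds to enumerate")
```

Already at d = 80 the enumeration time exceeds 10^15 seconds, i.e., tens of millions of years.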

In such cases, sampling techniques are a natural tool. Sampling-based methods have been successfully used in many different applications of stochastic optimization. Examples of such applications can be found in vehicle routing (Kenyon and Morton [128], Verweij et al. [236]), engineering design (Royset and Polak [206]), supply chain network design (Santoso et al. [213]), power generation and transmission (Jirutitijaroen and Singh [122]), and asset liability management (Hilli et al. [104]), among others. More recently, Byrd et al. [34] used these techniques in the context of machine learning.

The appeal of sampling-based methods results from the fact that they often approximate well, with a small number of samples, problems that have a very large number of scenarios; see, for instance, Linderoth et al. [152] for numerical reports. There are multiple ways to use sampling methods in problem (SP). A generic way of describing them is to construct an approximating problem as follows. Consider a family {g_N(·)} of random approximations of the function g(·), each g_N(·) being defined as

g_N(x) := (1/N) Σ_{j=1}^{N} G(x, ξ^j),   (1)

where {ξ^1, ..., ξ^N} is a sample from the distribution of ξ.
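The approximation (1) can be sketched in a few lines. The choices below are artificial and made only so the answer is checkable: G(x, ξ) = (x - ξ)^2 with ξ standard normal gives the known true objective g(x) = E[G(x, ξ)] = x^2 + 1, minimized at x* = 0, so the minimizer of the sample average can be compared against the truth.

```python
import random

random.seed(0)

# Sample average approximation (1) for an artificial choice of G and
# distribution: G(x, xi) = (x - xi)**2 with xi ~ N(0, 1), so the true
# objective g(x) = x**2 + 1 is known and is minimized at x* = 0.
def G(x, xi):
    return (x - xi) ** 2

N = 10_000
sample = [random.gauss(0.0, 1.0) for _ in range(N)]  # xi^1, ..., xi^N

def g_N(x):
    # the estimator in (1): an average of G over the fixed sample
    return sum(G(x, xi) for xi in sample) / N

# minimizing the approximation over a grid recovers x* = 0 approximately
x_N = min((i / 100 for i in range(-200, 201)), key=g_N)
print(f"x_N = {x_N:.2f}, g_N(x_N) = {g_N(x_N):.3f}")  # close to 0 and 1
```

Note that the sample is drawn once and then held fixed while x varies; this is exactly the "approximating problem" viewpoint, in which the random function g_N is minimized in place of g.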

