Validating Sample Average Approximation Solutions with Negatively Dependent Batches
Abstract
Sample-average approximations (SAA) are a practical means of finding approximate solutions of stochastic programming problems involving an extremely large (or infinite) number of scenarios. SAA can also be used to find estimates of a lower bound on the optimal objective value of the true problem which, when coupled with an upper bound, provides confidence intervals for the true optimal objective value and valuable information about the quality of the approximate solutions. Specifically, the lower bound can be estimated by solving multiple SAA problems (each obtained using a particular sampling method) and averaging the obtained objective values. State-of-the-art methods for lower-bound estimation generate batches of scenarios for the SAA problems independently. In this paper, we describe sampling methods that produce negatively dependent batches, thus reducing the variance of the sample-averaged lower bound estimator and increasing its usefulness in defining a confidence interval for the optimal objective value. We provide conditions under which the new sampling methods can reduce the variance of the lower bound estimator, and present computational results to verify that our scheme can reduce the variance significantly, by comparison with the traditional Latin hypercube approach.
1 Introduction
Stochastic programming provides a means for formulating and solving optimization problems that contain uncertainty in the model. When the number of possible scenarios for the uncertainty is extremely large or infinite, sample-average approximation (SAA) provides a means for finding approximate solutions for a reasonable expenditure of computational effort. In SAA, a finite number of scenarios is sampled randomly from the full set of possible scenarios, and an approximation to the full problem of reasonable size is formulated from the sampled scenarios and solved using standard algorithms for deterministic optimization (such as linear programming solvers). Solutions obtained from SAA procedures are typically suboptimal. Substantial research has been done in assessing the quality of an obtained solution (or a set of candidate solutions), and in understanding how different sampling methods affect the quality of the approximate solution.
Important information about a stochastic optimization problem is provided by the optimality gap (Mak et al. 1999, Bayraksan and Morton 2006), which provides a bound on the difference between the objective value for the computed SAA solution and the true optimal objective value. An estimate of the optimality gap can be computed using upper and lower bounds on the true optimal objective value. Mak et al. (1999) proves that the expected objective value of an SAA problem is a lower bound on the optimal objective value of the true problem, and that this expected value approaches the true optimal objective value as the number of scenarios increases. We can estimate this lower bound (together with confidence intervals) by solving multiple SAA problems, a task that can be implemented in parallel in an obvious way. An upper bound can be computed by taking a candidate solution and evaluating the objective by sampling from the scenario set, typically taking a larger number of samples than were used to set up the SAA optimization problem.
Much work has been done to understand the quality of SAA solutions obtained from Monte Carlo (MC) sampling, Latin hypercube (LH) sampling (McKay et al. 1979), and other methods. MC generates independent identically distributed scenarios in which the value of each variable is picked independently from its corresponding distribution. The simplicity of this method has made it an important practical tool; it has been the subject of much theoretical and empirical research. Many results about the asymptotic behavior of optimal solutions and values under MC have been obtained; see (Shapiro et al. 2009, Chapter 5) for a review. By contrast with MC, LH stratifies each dimension of the sample space in such a way that each stratum has the same probability, then samples the scenarios so that each stratum is represented in the scenario sample set. This procedure introduces a dependence between the different scenarios of an individual SAA problem. The sample space is in some sense "uniformly" covered on a per-variable basis, thus ensuring that there are no large unsampled regions and leading to improved performance. Linderoth et al. (2006) provides empirical results showing that the bias and variance of a lower bound obtained by solving multiple SAA problems constructed with LH sampling are considerably smaller than the statistics obtained from an MC-based procedure. Theoretical results about the asymptotic behavior of these estimates were provided later by Homem-de-Mello (2008). Other results on the performance of LH have been obtained, including results on large deviations (Drew and Homem-de-Mello 2005), rate of convergence to optimal values (Homem-de-Mello 2008), and bias reduction of approximate optimal values (Freimer et al. 2012), all of which demonstrate the superiority of LH over MC. This favorable empirical and theoretical evidence makes LH the current state-of-the-art method for obtaining high-quality lower bounds and optimality gaps via SAA.
In this paper, we build on the ideas behind the LH method to obtain LH variants with even better variance properties.
In the past, when solving a set of SAA problems to obtain a lower-bound estimate of the true optimal objective value, each batch of scenarios determining each SAA problem was chosen independently of the other batches. In this paper, we introduce two approaches to sampling, sliced Latin hypercube (SLH) sampling and sliced orthogonal-array Latin hypercube (SOLH) sampling, that yield better estimates of the lower bound by imposing negative dependencies between the batches in the different SAA approximations. These approaches not only stratify within each batch (as in LH) but also between all batches. The SLH approach is easy to implement, while the SOLH approach provides better variance reduction. With these methods, we can significantly reduce the variance of the lower bound estimator even if the size of each SAA problem and the number of SAA problems are kept the same, which is especially useful when solving each SAA problem is time consuming or when computing resources are limited. We provide theoretical results analyzing the variance reduction properties of both approaches, and present empirical results demonstrating their efficacy across a variety of stochastic programs studied in the literature. Sliced Latin hypercube sampling was first introduced in Qian (2012) and has proven useful for the collective evaluation of computer models and ensembles of computer models.
Here we briefly outline the rest of the paper. The next section begins with a brief description of how the optimality gap can be estimated (§ 2.1), a review of Latin hypercube sampling (§ 2.2), and an introduction to the functional analysis of variance (§ 2.3). Section 3 focuses on sliced Latin hypercube sampling. It outlines the construction of dependent batches (§ 3.1), describes theoretical results on variance reduction based on a certain monotonicity condition (§ 3.2), applies the results to two-stage stochastic linear programs (§ 3.3), and finally studies the relation between lower bound estimation and numerical integration (§ 3.4). Section 4 reviews orthogonal arrays and introduces a method to incorporate these arrays into sliced Latin hypercube sampling, which leads to stronger between-batch negative dependence. The next two sections deal with our computational experiments. Section 5 describes the setup and some of the implementation details, while Section 6 describes and analyzes the performance of the new sampling methods on the lower bound estimation problem. We end the paper in Section 7 with a summary of our results and a discussion of possible future research.
2 Preliminaries
2.1 Stochastic Programs and Lower Bound Estimators
We consider a stochastic program of the form
(1)  $\min_{x \in X} \; f(x) := \mathbb{E}\,[F(x, \xi)],$
where $X$ is a compact feasible set, $x$ is a vector of decision variables, $\xi = (\xi_1, \dots, \xi_d)$ is a vector of random variables, and $F$ is a real-valued measurable function. Unless stated otherwise, we assume that $\xi$ is a random vector with uniform distribution on $(0,1]^d$ and that $\mathbb{E}$ denotes the expectation with respect to the distribution of $\xi$. If $\xi$ has a different distribution, we can transform it into a uniform random vector on $(0,1]^d$ as long as $\xi_1, \dots, \xi_d$ are either (i) independent discrete or continuous random variables or (ii) dependent random variables which are absolutely continuous (Rosenblatt 1953).
Problem (1) may be extremely challenging to solve directly, since the evaluation of $f$ involves a high-dimensional integration. We can replace (1) with the following approximation:
(2)  $\min_{x \in X} \; \hat f_n(x) := \frac{1}{n} \sum_{i=1}^{n} F(x, \xi^i),$
where $\xi^1, \dots, \xi^n$ are scenarios sampled from the uniform distribution on $(0,1]^d$. The function $\hat f_n$ is called a sample-average approximation (SAA) to the objective in (1). In this paper we will frequently use the term SAA problem to refer to equation (2). We use $x^*$ and $v^*$ to denote a solution and the optimal value of the true problem (1), respectively, while $\hat x_n$ and $\hat v_n$ denote the solution and optimal value of the SAA problem (2), respectively.
We introduce here some items of terminology that are used throughout the remainder of the paper. Let $\mathbf{D}$ denote an $n \times d$ matrix with $\xi^i$ in (2) as its $i$th row. Hence, $\mathbf{D}$ represents a batch of $n$ scenarios that define an SAA problem. We will refer to $\xi^i$ as the $i$th scenario in $\mathbf{D}$. We use $D_{\cdot j}$ to denote the $j$th column of $\mathbf{D}$, and $D_{ij}$ to denote the $(i,j)$ entry of $\mathbf{D}$, that is, the $j$th entry in the $i$th scenario in $\mathbf{D}$.
We can express the dependence of $\hat v_n$ in (2) on $\mathbf{D}$ explicitly by writing this quantity as $\hat v_n(\mathbf{D})$. Given a distribution over the matrices $\mathbf{D}$ whose rows are each drawn from the uniform distribution but not necessarily independently, it is well known and easy to show that the expectation $\mathbb{E}[\hat v_n(\mathbf{D})]$ with respect to the matrices gives us a lower bound on the true optimal value $v^*$ (Norkin et al. 1998, Mak et al. 1999). This bound can be estimated as follows. Generate $b$ independent batches $\mathbf{D}_1, \dots, \mathbf{D}_b$ and compute the optimal value $\hat v_n(\mathbf{D}_k)$ by solving subproblem (2) for each $\mathbf{D}_k$, $k = 1, \dots, b$. From Mak et al. (1999), a lower bound estimate of $v^*$ is
(3)  $\hat L_{n,b} = \frac{1}{b} \sum_{k=1}^{b} \hat v_n(\mathbf{D}_k).$
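To make the estimator in (3) concrete, the following sketch (our own toy illustration, not code from the paper) applies it to a one-dimensional problem with $F(x,\xi) = (x - \xi)^2$ and $\xi \sim U(0,1]$, for which each SAA problem has a closed-form solution: the minimizer is the sample mean and the optimal value is the biased sample variance. Since the expectation of the biased sample variance is $(n-1)/(12n)$, strictly below the true optimal value $v^* = 1/12$, the lower-bound property is visible directly.

```python
import random

def saa_optimal_value(batch):
    """For F(x, xi) = (x - xi)^2, the SAA minimizer is the sample mean
    and the SAA optimal value is the (biased) sample variance."""
    n = len(batch)
    m = sum(batch) / n
    return sum((xi - m) ** 2 for xi in batch) / n

def lower_bound_estimate(n, b, rng):
    """Equation (3): average the optimal values of b independently
    sampled SAA problems, each built from n Monte Carlo scenarios."""
    return sum(saa_optimal_value([rng.random() for _ in range(n)])
               for _ in range(b)) / b

rng = random.Random(0)
n, b = 10, 500
L = lower_bound_estimate(n, b, rng)
true_value = 1.0 / 12.0   # v* = min_x E(x - xi)^2 = Var(xi)
# E[L] = (n-1)/(12n) = 0.075 < 1/12: the estimator is biased low, as expected.
print(L, true_value)
```

Averaging over many batches shrinks the spread of the estimate but not its downward bias; the bias vanishes only as the batch size $n$ grows.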
2.2 Latin Hypercube Sampling
Latin hypercube sampling, which stratifies sample points along each dimension (McKay et al. 1979), has been used in numerical integration for many years. It has been shown that the variance of the mean output of a computer experiment under Latin hypercube sampling can be much lower than for experiments based on Monte Carlo methods (McKay et al. 1979, Stein 1987, Loh 1996). Let $\hat v_n^{\mathrm{MC}}$ and $\hat v_n^{\mathrm{LH}}$ denote the approximate optimal value when the scenarios in $\mathbf{D}$ are generated using Monte Carlo and Latin hypercube sampling, respectively. Homem-de-Mello (2008) showed that the asymptotic variance of $\hat v_n^{\mathrm{LH}}$ is smaller than the variance of $\hat v_n^{\mathrm{MC}}$ under some fairly general conditions. Extensive numerical simulations have provided empirical demonstrations of the superiority of Latin hypercube sampling (Homem-de-Mello 2008, Linderoth et al. 2006).
To define Latin hypercube sampling, we start with some useful notation. Given an integer $n$, we define $[n] = \{1, 2, \dots, n\}$. For a real number $a$, $\lceil a \rceil$ denotes the smallest integer no less than $a$. A "uniform permutation on a set of integers" is obtained by randomly taking a permutation on the set, with all permutations being equally probable.
We have the following definition.

An $n \times d$ array $H$ is a Latin hypercube if each column of $H$ is a uniform permutation on $[n]$. Moreover, $H$ is an ordinary Latin hypercube if all its columns are generated independently.
Using an ordinary Latin hypercube $H$, an $n \times d$ matrix $\mathbf{D}$ with $n$ scenarios that defines an SAA problem is obtained as follows (McKay et al. 1979):
(4)  $D_{ij} = \dfrac{H_{ij} - U_{ij}}{n},$
where all $U_{ij}$ are $U[0,1)$ random variables, and the quantities $U_{ij}$ and the entries $H_{ij}$ are mutually independent. We refer to the matrix $\mathbf{D}$ thus constructed as an ordinary Latin hypercube design.
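As an illustration, the construction in (4) can be sketched in a few lines of Python (a minimal sketch of our own, not tied to any particular stochastic programming code):

```python
import random

def ordinary_lh_design(n, d, rng):
    """Equation (4): D[i][j] = (H[i][j] - U[i][j]) / n, where each column
    of H is an independent uniform permutation of {1, ..., n} and the
    U[i][j] are mutually independent U[0,1) random variables."""
    H = [[0] * d for _ in range(n)]
    for j in range(d):
        perm = list(range(1, n + 1))
        rng.shuffle(perm)          # uniform permutation for column j
        for i in range(n):
            H[i][j] = perm[i]
    return [[(H[i][j] - rng.random()) / n for j in range(d)]
            for i in range(n)]

rng = random.Random(1)
n, d = 5, 3
D = ordinary_lh_design(n, d, rng)
# Stratification check: along every dimension, each of the n intervals
# ((l-1)/n, l/n] contains exactly one scenario.
for j in range(d):
    assert sorted(int(D[i][j] * n) for i in range(n)) == list(range(n))
```

The final loop verifies the defining property of a Latin hypercube design: every one-dimensional stratum is hit exactly once.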
We now introduce a different way of looking at design matrices that will be useful when we discuss extensions to sliced Latin hypercube designs in later sections. We can write a design matrix as
(5)  $\mathbf{D} = \dfrac{1}{n}\,(H - U),$
where $U$ is an $n \times d$ matrix of $U[0,1)$ random variables. When $\mathbf{D}$ is an ordinary Latin hypercube design, $H$ is an ordinary Latin hypercube and $U_{ij}$ corresponds to the quantity of the same name in (4) for all $i$ and $j$. By the properties of an ordinary Latin hypercube design, the entries in $U$ are mutually independent, and $H$ and $U$ are independent. We refer to $H$ as the underlying array of $\mathbf{D}$.
The lower bound on $v^*$ can be estimated from (3) by taking $b$ independently generated ordinary Latin hypercube designs (Linderoth et al. 2006). We denote this sampling scheme by ILH and denote the estimator obtained from (3) by $\hat L_{n,b}^{\mathrm{ILH}}$.
To illustrate the limitations of the ILH scheme, Figure 1 displays three independent ordinary Latin hypercube designs generated under ILH with $n = 3$ and $d = 3$. Scenarios from each three-dimensional design are denoted by the same symbol, and are projected onto each of the three bivariate planes. The dashed lines stratify each dimension into three partitions. For any one design, each of these three intervals contains exactly one scenario in each dimension. This scheme covers the space more "uniformly" than three scenarios picked identically and independently from the uniform distribution, as happens in Monte Carlo schemes. However, the combination of points from all three designs does not cover the space particularly well, which gives some cause for concern, since all designs are used in the lower bound estimation. Specifically, when we combine the three designs together (to give nine scenarios in total), it is usually not the case that each of the nine equally spaced intervals of (0,1] contains exactly one scenario in each dimension. This shortcoming provides the intuition behind the sliced Latin hypercube (SLH) design, which we describe in the subsequent sections.
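This shortcoming is easy to observe numerically. The sketch below (our own illustration, using a single coordinate with three batches of three points each, as in Figure 1) stacks independently generated Latin hypercube samples and checks how often the nine combined points happen to stratify (0,1] into ninths; the event is rare.

```python
import random

def lh_column(n, rng):
    """One column of an ordinary LH design on (0,1], as in (4):
    a uniform permutation of the levels 1..n, jittered within strata."""
    perm = list(range(1, n + 1))
    rng.shuffle(perm)
    return [(h - rng.random()) / n for h in perm]

rng = random.Random(2)
n, b, trials = 3, 3, 1000
fine_lh = 0
for _ in range(trials):
    stacked = [x for _ in range(b) for x in lh_column(n, rng)]
    # Do the nine combined points occupy all nine intervals ((l-1)/9, l/9]?
    if sorted(int(x * n * b) for x in stacked) == list(range(n * b)):
        fine_lh += 1
print(fine_lh / trials)  # a small fraction: independent batches rarely align
```

Within each coarse third, the three batches land in the three finer ninths independently, so all nine ninths are covered only with probability $(3!/3^3)^3 \approx 0.011$ per coordinate.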
2.3 Functional ANOVA Decomposition
In order to understand the asymptotic properties of estimators arising from Latin hypercube based samples, it is necessary to review the functional analysis of variance decomposition (Owen 1994), also known as functional ANOVA. Let $g$ denote a square-integrable function on $(0,1]^d$ of the input vector $\xi = (\xi_1, \dots, \xi_d)$ defined in Section 2.1. Let $\mu_j$ denote the uniform distribution for $\xi_j$, with $\mu = \prod_{j=1}^{d} \mu_j$. For $u \subseteq \{1, \dots, d\}$, let $\xi_u$ denote a vector consisting of the $\xi_j$ for $j \in u$. Define

(6)  $g_u(\xi_u) = \int \Big( g(\xi) - \sum_{v \subsetneq u} g_v(\xi_v) \Big) \prod_{j \notin u} d\mu_j(\xi_j),$

where the integral integrates out all components except for those included in $u$, and $v$ ranges over the proper subsets of $u$. Hence, we have that

$g_\emptyset = \int g \, d\mu$ is the mean of $g$;

$g_{\{j\}}(\xi_j)$ is the main effect function for factor $j$, and

$g_{\{j,k\}}(\xi_j, \xi_k)$ is the bivariate interaction function between factors $j$ and $k$.
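To make the decomposition concrete, the following sketch (our own numerical illustration) computes the terms of the functional ANOVA for the two-factor example $g(\xi_1, \xi_2) = \xi_1 + \xi_2 + \xi_1 \xi_2$ by midpoint quadrature. The results can be checked against the closed forms $g_\emptyset = 1.25$, $g_{\{1\}}(\xi_1) = 1.5\,\xi_1 - 0.75$, and $g_{\{1,2\}}(\xi_1,\xi_2) = (\xi_1 - 0.5)(\xi_2 - 0.5)$.

```python
# Midpoint grid approximating the uniform measure on (0,1]^2.
N = 400
grid = [(k + 0.5) / N for k in range(N)]

def g(x1, x2):
    # Example function: an additive part plus a bivariate interaction.
    return x1 + x2 + x1 * x2

# Grand mean: integrate out both coordinates.
g0 = sum(g(a, c) for a in grid for c in grid) / N ** 2

def main_effect_1(x1):
    # Main effect of factor 1: integrate out x2, subtract the mean.
    return sum(g(x1, c) for c in grid) / N - g0

def main_effect_2(x2):
    # Main effect of factor 2: integrate out x1, subtract the mean.
    return sum(g(a, x2) for a in grid) / N - g0

def interaction(x1, x2):
    # Interaction: what remains after the mean and both main effects.
    return g(x1, x2) - g0 - main_effect_1(x1) - main_effect_2(x2)

print(g0, main_effect_1(0.8), interaction(0.8, 0.2))
```

Because the example is at most linear in each coordinate, the midpoint rule reproduces the closed forms essentially exactly; for general integrands the quadrature is only approximate.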
3 Sliced Latin Hypercube Sampling
Instead of generating the batches independently for each SAA subproblem, we propose a new scheme, called sliced Latin hypercube (SLH) sampling, that generates a family of correlated designs. An SLH design (Qian 2012) is an $(nb) \times d$ matrix that can be partitioned into $b$ separate LH designs, represented by the matrices $\mathbf{D}_k$, $k = 1, \dots, b$, each having the same properties, with respect to SAA, as the designs used in ILH. SLH was originally introduced to aid in the collective evaluation of computer models, but here we demonstrate its utility in creating multiple SAA problems to solve.
Let $\hat L_{n,b}^{\mathrm{SLH}}$ denote the lower bound estimator of $v^*$ under SLH. Because the individual designs $\mathbf{D}_k$, $k = 1, \dots, b$, are LH designs, we have that $\mathbb{E}[\hat L_{n,b}^{\mathrm{SLH}}] = \mathbb{E}[\hat L_{n,b}^{\mathrm{ILH}}]$. Consider two distinct batches of scenarios $\mathbf{D}_k$ and $\mathbf{D}_{k'}$ for any $k \ne k'$. We will show that when $\hat v_n$ fulfills a certain monotonicity condition, $\hat v_n(\mathbf{D}_k)$ and $\hat v_n(\mathbf{D}_{k'})$ are negatively dependent under SLH. Compared with ILH, SLH introduces between-batch negative dependence while keeping the within-batch structure intact. As a result, we expect a lower-variance estimator: $\mathrm{Var}(\hat L_{n,b}^{\mathrm{SLH}}) \le \mathrm{Var}(\hat L_{n,b}^{\mathrm{ILH}})$.
3.1 Construction
Algorithm 1 describes the construction of the matrices $\mathbf{D}_k$, $k = 1, \dots, b$, for the SLH design. We use the notation $D_k(\cdot, j)$ for the $j$th column of $\mathbf{D}_k$, $D_k(i, \cdot)$ for the $i$th scenario of $\mathbf{D}_k$, and $D_k(i,j)$ for the $(i,j)$ entry of $\mathbf{D}_k$, for $i = 1, \dots, n$, $j = 1, \dots, d$, and $k = 1, \dots, b$.

Step 1. Randomly generate $b$ independent ordinary Latin hypercubes $H_1, \dots, H_b$, each of dimension $n \times d$. Denote the $j$th column of $H_k$ by $H_k(\cdot, j)$, for $j = 1, \dots, d$.

Step 2. For $j = 1, \dots, d$, do the following independently: for each level $l = 1, \dots, n$, replace the $b$ entries equal to $l$ in the columns $H_1(\cdot,j), \dots, H_b(\cdot,j)$ (one such entry per hypercube) by a random permutation on $\{(l-1)b+1, \dots, lb\}$.

Step 3. For $k = 1, \dots, b$, obtain the $(i,j)$th entry of $\mathbf{D}_k$ as follows:

(8)  $D_k(i,j) = \dfrac{H_k(i,j) - U_k(i,j)}{nb},$

where the $U_k(i,j)$ are $U[0,1)$ random variables that are mutually independent of the $H_k(i,j)$.
By vertically stacking the matrices $\mathbf{D}_1, \dots, \mathbf{D}_b$, we obtain the $(nb) \times d$ matrix representing the SLH design, as defined in Qian (2012).
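The three steps above can be sketched as follows (a minimal rendering of our own, following the construction in Qian (2012); the assertions at the end check the within-batch and stacked stratification properties):

```python
import copy
import random

def sliced_lh(n, d, b, rng):
    """Return b batches D_1, ..., D_b, each an n x d Latin hypercube
    design on (0,1]^d, whose vertical stack is an (nb) x d Latin
    hypercube design."""
    # Step 1: b independent ordinary Latin hypercubes with levels 1..n.
    H = []
    for _ in range(b):
        slab = [[0] * d for _ in range(n)]
        for j in range(d):
            perm = list(range(1, n + 1))
            rng.shuffle(perm)
            for i in range(n):
                slab[i][j] = perm[i]
        H.append(slab)
    # Step 2: for each column j and level l, replace the b entries equal
    # to l (one in each hypercube) by a random permutation of the fine
    # levels {(l-1)b+1, ..., lb}.  This step couples the batches.
    G = copy.deepcopy(H)
    for j in range(d):
        for l in range(1, n + 1):
            fine = list(range((l - 1) * b + 1, l * b + 1))
            rng.shuffle(fine)
            for k in range(b):
                for i in range(n):
                    if H[k][i][j] == l:
                        G[k][i][j] = fine[k]
    # Step 3: jitter each fine level into its cell of (0,1], as in (8).
    return [[[(G[k][i][j] - rng.random()) / (n * b) for j in range(d)]
             for i in range(n)] for k in range(b)]

rng = random.Random(3)
n, d, b = 3, 2, 3
D = sliced_lh(n, d, b, rng)
for j in range(d):
    # Each batch is a Latin hypercube sample over the n coarse strata...
    for k in range(b):
        assert sorted(int(x[j] * n) for x in D[k]) == list(range(n))
    # ...and the stacked batches stratify the n*b fine strata.
    stacked = [x[j] for k in range(b) for x in D[k]]
    assert sorted(int(v * n * b) for v in stacked) == list(range(n * b))
```

The key design choice is in Step 2: the fine levels within each coarse stratum are shared out among the batches by a permutation, which is exactly what creates the between-batch dependence.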
As in (5), we can express each $\mathbf{D}_k$ as

(9)  $\mathbf{D}_k = \dfrac{1}{nb}\,(H_k - U_k),$

where $H_k$ is the array produced by Step 2 and $U_k$ is the $n \times d$ matrix with entries $U_k(i,j)$. We have the following proposition regarding properties of the SLH design, including dependence of the batches. (This result is closely related to (Qian 2012, Lemma 2).)
Proposition 3.1
Consider the SLH design constructed according to Algorithm 1, with $\mathbf{D}_k$, $H_k$, and $U_k$, $k = 1, \dots, b$, defined as in (9). The following are true.

(i) The entrywise ceilings $\lceil H_1/b \rceil, \dots, \lceil H_b/b \rceil$ are independent ordinary Latin hypercubes.

(ii) $H_k$ and $U_k$ are independent, for each $k$.

(iii) Within each $U_k$, $k = 1, \dots, b$, the entries are mutually independent random variables.

(iv) For $k \ne k'$, the entry $H_k(i,j)$ is dependent on $H_{k'}(i',j')$ if and only if $j = j'$;

(v) The $\mathbf{D}_k$, $k = 1, \dots, b$, are ordinary Latin hypercube designs, but they are not independent.
Proof.

According to (9) and Step 2 in the above construction, the entrywise ceiling $\lceil H_k/b \rceil$ coincides with the ordinary Latin hypercube generated in Step 1, prior to the replacement step, and the result follows.

For each and , the quantities

It suffices to show that and are dependent if and only if . That is true because and are selected from the same when .

The result follows directly from (i), (ii), (iv), and the definition of the ordinary Latin hypercube design.
∎
Figure 2 displays the bivariate projections of three ordinary Latin hypercube designs, each denoted by a different symbol, generated under an SLH scheme. For each design, each of the three equally spaced intervals of (0,1] contains exactly one scenario in each dimension. In contrast to Figure 1, when we combine the three designs together, each of the nine equally spaced intervals of (0,1] contains exactly one scenario in each dimension.
3.2 Monotonicity Condition
We derive theoretical results showing that the variance of the lower bound estimator under SLH is no larger than under ILH, given a monotonicity condition that will be defined shortly.

We say that two random variables $X$ and $Y$ are negatively quadrant dependent if

$P(X \le x, \; Y \le y) \;\le\; P(X \le x)\,P(Y \le y) \quad \text{for all } x, y \in \mathbb{R}.$
Let denote the underlying ordinary Latin hypercube of in (5) such that . Let given , where and , with such that for and . The function is said to be monotonic if the following two conditions hold: (a) for all , is monotonic in each argument of , and (b) the direction of the monotonicity in each argument of is consistent across all .

Consider . Let . When
we have , , , , , and and
Thus, the function is increasing in some of its arguments while it is decreasing in the others. This is true for any underlying ordinary Latin hypercube, since the relevant entries are always associated with values lying in (0,1/3] in this example. By definition, the function is monotonic.
The monotonicity condition can be checked by directly studying the function $\hat v_n$, but it can also be shown to hold when the stochastic program has certain nice properties. Later we will prove some sufficient conditions for two-stage stochastic linear programs and give a simple argument showing that some problems from the literature have the monotonicity property.
Qian (2012) proves that the function values of any two scenarios in a sliced Latin hypercube design are negatively quadrant dependent. The next lemma extends this result, showing that the function values of any two batches in a sliced Latin hypercube design are also negatively quadrant dependent, under the monotonicity assumption on $\hat v_n$.
Lemma 3.2
Proof.
Given two ordinary Latin hypercubes and in (9), let and denote two slices generated by (8). Since the underlying Latin hypercubes are fixed for and , the only random parts in the definition are and . Define and , where and are matrices with the th entry defined as and such that for and . By Proposition 3.1 (iii) and (iv), we can find independent pairs of random variables: for , . By (Lehmann 1966, Theorem 2), the monotonicity assumption of , and the proof of (Qian 2012, Theorem 1), we have
which is equivalent to
Taking expectations of both sides gives
The last equality holds because and are independent, by Proposition 3.1 (i). ∎
The next result is an immediate consequence of Lemma 3.2. It indicates that the variance of the lower-bound estimator can be reduced by using SLH when $\hat v_n$ is monotonic.
Theorem 3.3
Consider the two lower bound estimators $\hat L_{n,b}^{\mathrm{ILH}}$ and $\hat L_{n,b}^{\mathrm{SLH}}$ in (3), obtained under ILH and SLH, respectively. Suppose $\hat v_n$ is monotonic. Then for any $n$ and $b$, we have that
Even if the monotonicity condition does not hold, a similar conclusion can be drawn about the asymptotic behavior of the variances under ILH and SLH. Theorem 3.4 gives such a result.
Theorem 3.4
Let $\mathbf{D}$ denote an $n \times d$ Latin hypercube design based on a Latin hypercube $H$, as in (5). Suppose the variance of $\hat v_n(\mathbf{D})$ is well defined. As $n \to \infty$ with $b$ fixed, the following are true.

.

, where is the main effect function of with respect to .

If is additive, where , then .
Proof.
(i) When the batches are sampled independently under ILH, the optimal values of distinct batches are independent. Thus, the result follows.
(ii) When the batches are sampled under SLH, we have
(11) 
where the last equality in (11) holds because the batches are exchangeable. Without loss of generality, fix two distinct batches. We have
(12) 
where the second-to-last equality is from (Qian 2009, Lemma 1). The result follows by substituting (12) into (11). The functional ANOVA decomposition is properly defined because the quantities are independent, according to Proposition 3.1 (iii).
(iii) Assuming again that , we have
(13) 
The second equality holds because the quantity is the same regardless of the underlying ordinary Latin hypercube when the function is additive. The third equality holds due to the functional ANOVA decomposition. We complete the proof by combining (13) with the result in (ii). ∎
3.3 Two-Stage Linear Program
We now discuss the theoretical properties of SLH for twostage stochastic linear programs. Consider problems of the form
(14) 
where is a polyhedron and
(15) 
and . The problem has fixed recourse since the recourse matrix does not depend on the random variable . Defining to be the function , we see that (14) is a special case of (1). Let and . By (15), is a decreasing function of any entry in , for any . Furthermore, is a decreasing function of any entry in if is nonpositive, and an increasing function of any entry in if is nonnegative.
By LP duality, we have
and hence is an increasing function of any entry in for any . We conclude that is monotonic in each component of if the recourse matrix is fixed.
Lemma 3.5
Proof.
For any ordinary Latin hypercube design , let be the arg min of the approximation problem, that is,
where is the th scenario of . Without loss of generality, increase only to obtain a new design with scenarios . Let be the arg min of the approximation problem with design , that is,
If either (i) or (ii) is satisfied, the value of does not affect whether is increasing or decreasing in . Supposing that is increasing in , we have
which implies that is increasing in . Similarly, if is decreasing in , then
which implies is decreasing in . ∎

Consider the newsvendor problem from Freimer et al. (2012), which can be expressed as a two-stage stochastic program. In the first stage, we choose an order quantity. After demand has been realized, we decide how much of the available stock to sell. Assume that demand is uniformly distributed on an interval, and that there is a shortage cost and an overage cost. The second stage problem is
Since the first-stage cost is zero, the two-stage stochastic program is
The optimal value is with solution .
Based on a sample of demands , the approximated optimal value is given by
where . The optimal solution is the th order statistic of .
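For illustration, the following sketch (with hypothetical cost values $c_s = 0.6$ and $c_o = 0.4$; the paper's exact parameters are not reproduced here) verifies numerically that the SAA objective, being convex and piecewise linear with breakpoints at the sampled demands, is minimized at the $\lceil n\,c_s/(c_s + c_o) \rceil$-th order statistic of the sample:

```python
import math
import random

c_s, c_o = 0.6, 0.4   # hypothetical shortage and overage costs

def saa_cost(x, demands):
    """Sample-average second-stage cost for order quantity x."""
    return sum(c_s * max(d - x, 0.0) + c_o * max(x - d, 0.0)
               for d in demands) / len(demands)

def saa_solution(demands):
    """The SAA objective is piecewise linear and convex, so a minimizer
    is attained at one of the sampled demands."""
    return min(demands, key=lambda d: saa_cost(d, demands))

rng = random.Random(4)
demands = [rng.random() for _ in range(21)]
x_hat = saa_solution(demands)
k = math.ceil(len(demands) * c_s / (c_s + c_o))  # = 13 here
assert x_hat == sorted(demands)[k - 1]           # the k-th order statistic
```

Scanning only the breakpoints is enough because a piecewise linear convex function attains its minimum at a breakpoint; this is what makes the newsvendor SAA problem solvable in closed form.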
n     Scheme
2     ILH   0.1003 (1.83E-2)   0.1002 (1.31E-2)   0.1000 (9.23E-3)
      SLH   0.0999 (3.71E-3)   0.1000 (1.31E-3)   0.1000 (4.49E-4)
20    ILH   0.1201 (6.99E-4)   0.1200 (5.01E-4)   0.1200 (3.70E-4)
      SLH   0.1200 (1.43E-4)   0.1200 (4.87E-5)   0.1200 (1.72E-5)
200   ILH   0.1200 (2.18E-5)   0.1200 (1.60E-5)   0.1200 (1.14E-5)
      SLH   0.1200 (4.40E-6)   0.1200 (1.59E-6)   0.1200 (5.60E-7)

Table 1: Means and standard errors (in parentheses) with 1000 replicates

Table 1 gives means and standard errors for the lower bound estimators over a range of sample sizes and numbers of batches. The table shows that SLH reduces the variance of the estimator significantly when compared with ILH. Analytically, we have
where and is an arbitrary underlying Latin hypercube for . We notice that for any , is decreasing in , increasing in and monotonic in (the direction depends on ). Thus, is monotonic, which can alternatively be checked by applying Lemma 3.5. By Theorem 3.3, we should have .
We notice that the variance under SLH decreases significantly faster than under ILH, a fact that can be explained by Theorem 3.4 (iii), since the newsvendor problem has only one random variable and is additive.
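The variance reduction reported in Table 1 can be reproduced in outline with the following self-contained sketch (again with hypothetical costs $c_s = 0.6$, $c_o = 0.4$ and demand uniform on (0,1], for which $v^* = c_s c_o / (2(c_s + c_o)) = 0.12$); it compares the empirical variance of the lower-bound estimator under ILH and under a one-dimensional SLH scheme:

```python
import random
import statistics

c_s, c_o = 0.6, 0.4   # hypothetical costs; with demand ~ U(0,1], v* = 0.12

def saa_value(demands):
    """Optimal value of the newsvendor SAA problem: minimize the convex
    piecewise linear sample-average cost over its breakpoints."""
    def cost(x):
        return sum(c_s * max(d - x, 0.0) + c_o * max(x - d, 0.0)
                   for d in demands) / len(demands)
    return min(cost(d) for d in demands)

def ilh_batches(n, b, rng):
    """b independent one-dimensional Latin hypercube samples of size n."""
    out = []
    for _ in range(b):
        perm = list(range(1, n + 1))
        rng.shuffle(perm)
        out.append([(h - rng.random()) / n for h in perm])
    return out

def slh_batches(n, b, rng):
    """One-dimensional sliced Latin hypercube: each batch is an LH sample
    of size n, and the union of all b batches is an LH sample of size nb."""
    levels = [list(range(1, n + 1)) for _ in range(b)]
    for lv in levels:
        rng.shuffle(lv)
    batches = [[0.0] * n for _ in range(b)]
    for l in range(1, n + 1):
        fine = list(range((l - 1) * b + 1, l * b + 1))
        rng.shuffle(fine)
        for k in range(b):
            i = levels[k].index(l)
            batches[k][i] = (fine[k] - rng.random()) / (n * b)
    return batches

def lower_bound(batches):
    """Equation (3): average of the batch SAA optimal values."""
    return sum(saa_value(d) for d in batches) / len(batches)

rng = random.Random(5)
n, b, reps = 10, 5, 300
ilh = [lower_bound(ilh_batches(n, b, rng)) for _ in range(reps)]
slh = [lower_bound(slh_batches(n, b, rng)) for _ in range(reps)]
# Both schemes estimate the same lower bound, but SLH with far less spread.
print(statistics.mean(ilh), statistics.variance(ilh))
print(statistics.mean(slh), statistics.variance(slh))
```

The two estimators share the same mean (both batch types are valid LH samples), so the comparison isolates the between-batch coupling as the source of the variance reduction.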
3.4 Discrete Random Variables
Theorems 3.3 and 3.4 confirm the effectiveness of sliced Latin hypercube designs in reducing the variance of lower-bound estimates in SAA. However, the assumptions in those theorems limit their applicability to fairly specialized problems. Theorem 3.3 does not apply to the two-stage problem in (14) with random recourse. Theorem 3.4 does not apply when the random variables are discrete, which is a more practical assumption than continuously distributed scenarios. In this section, we consider the case in which the components of $\xi$ are discrete random variables, as discussed by Homem-de-Mello (2008). In fact, we assume $\xi_1, \dots, \xi_d$ to be independent discrete random variables. We show that estimating the lower bound of $v^*$ is almost equivalent to a numerical integration problem. Several existing results regarding numerical integration then provide tools for studying the effects of different sampling schemes on lower bound estimation more generally.
Assumption 3.1
For each , with probability one (denoted “w.p. 1”).
Assumption 3.2
The feasible set is compact and polyhedral; the function is convex piecewise linear for every value of ; and the distribution of has finite support.
Assumption 3.1 holds under Latin hypercube sampling if for every , by (Loh 1996, Theorem 3). Assumption 3.2 holds in practice for stochastic linear programs in which the random vector is discretized. Let denote the set of optimal solutions of (1). Based on both assumptions, Homem-de-Mello (2008) shows the consistency of the approximated optimal solution in (2).
Proposition 3.6
Now let denote the cumulative distribution function of for . Define the inverse of as , where is the support of