Stochastically constrained ranking and selection via SCORE.
Pasupathy, R.; Hunter, S. R.; Pujowidianto, N. A.; Lee, L. H.; and Chen, C.-H.
ACM Transactions on Modeling and Computer Simulation, 25(1): 1–26. January 2015.
Journal version, Finalist, Winter Simulation Conference Best Theoretical Paper Award.
@article{2015pashunetal,
Year = {2015},
Author = {R. Pasupathy and S. R. Hunter and N. A. Pujowidianto and L. H. Lee and {C.-H.} Chen},
Title = {Stochastically constrained ranking and selection via {SCORE}},
Journal = {ACM Transactions on Modeling and Computer Simulation},
shortjournal = {ACM Trans. Model. Comput. Simul.},
volume = {25},
number = {1},
month = {January},
articleno = {1},
pages = {1--26},
numpages = {26},
doi = {10.1145/2630066},
url_Link = {http://dl.acm.org/authorize?N41473},
url_Paper = {http://web.ics.purdue.edu/~hunter63/PAPERS/pre2014pashunetal.pdf},
abstract = {Consider the context of constrained simulation optimization (SO), that is, optimization problems where the objective and constraint functions are known through dependent Monte Carlo estimators. For solving such problems on large finite spaces, we provide an easily implemented sampling framework called SCORE (Sampling Criteria for Optimization using Rate Estimators) that approximates the optimal simulation budget allocation. We develop a general theory, but like much of the existing literature on ranking and selection, our focus is on SO problems where the distribution of the simulation observations is Gaussian. We first characterize the nature of the optimal simulation budget as a bilevel optimization problem. We then show that under a certain asymptotic limit, the solution to the bilevel optimization problem becomes surprisingly tractable and is expressed through a single intuitive measure, the score. We provide an iterative SO algorithm that repeatedly estimates the score and determines how the available simulation budget should be expended across contending systems. Numerical experience with the algorithm resulting from the proposed sampling approximation is very encouraging --- in numerous examples of constrained SO problems having 1,000 to 10,000 systems, the optimal allocation is identified to negligible error within a few seconds to one minute on a typical laptop computer. Corresponding times to solve the full bilevel optimization problem range from tens of minutes to several hours.},
keywords = {simulation optimization > single-objective > stochastically constrained > ranking and selection},
bibbase_note = {<span style="color: green">Journal version, Finalist, Winter Simulation Conference Best Theoretical Paper Award.</span>}}
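The iterative allocation scheme this abstract describes can be illustrated compactly. Below is a minimal Python sketch of a SCORE-style sampling loop under strong simplifying assumptions: independent Gaussian observations, a single constraint of the form "constraint mean ≤ 0", a made-up oracle simulate(), and an illustrative score formula with a fixed 10% share for the sample-best system. None of these stand-ins reproduce the paper's exact expressions.

import numpy as np

rng = np.random.default_rng(0)

def simulate(i):
    # Hypothetical oracle: one correlated (objective, constraint) observation
    # for system i; stands in for the user's Monte Carlo simulation.
    return rng.multivariate_normal([0.1 * i, 0.05 * i - 0.5],
                                   [[1.0, 0.3], [0.3, 1.0]])

def score_loop(k=20, n0=20, budget=2000):
    obs = [[simulate(i) for _ in range(n0)] for i in range(k)]
    n = np.full(k, n0)
    while n.sum() < budget:
        m = np.array([np.mean(s, axis=0) for s in obs])        # (k, 2) sample means
        v = np.array([np.var(s, axis=0, ddof=1) for s in obs])  # (k, 2) sample variances
        y, g = m[:, 0], m[:, 1]        # objective and constraint estimates
        vy, vg = v[:, 0], v[:, 1]
        feasible = g <= 0              # (if nothing looks feasible, system 0 is a placeholder)
        b = int(np.argmin(np.where(feasible, y, np.inf)))  # sample-best feasible system
        # Illustrative "score": standardized optimality gap plus standardized
        # constraint violation -- a stand-in, not the paper's exact expression.
        score = (y - y[b]) ** 2 / (2 * vy) + np.maximum(g, 0.0) ** 2 / (2 * vg)
        score[b] = np.inf              # the sample-best system is handled separately
        inv = 1.0 / score
        alpha = 0.9 * inv / inv.sum()  # allocation inversely proportional to the score
        alpha[b] = 0.1                 # fixed placeholder share for the sample-best system
        i = int(np.argmax(alpha - n / n.sum()))  # most under-sampled relative to target
        obs[i].append(simulate(i))
        n[i] += 1
    return n

The loop mirrors the structure the abstract describes: re-estimate each system's score from the data gathered so far, convert the scores into a target allocation, and spend the next observation wherever the realized allocation lags the target most.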
Multi-objective simulation optimization on finite sets: optimal allocation via scalarization.
Feldman, G.; Hunter, S. R.; and Pasupathy, R.
In Yilmaz, L.; Chan, W. K. V.; Moon, I.; Roeder, T. M. K.; Macal, C.; and Rossetti, M. D., editors,
Proceedings of the 2015 Winter Simulation Conference, pages 3610–3621, Piscataway, NJ, 2015. Institute of Electrical and Electronics Engineers, Inc.
2015 Winter Simulation Conference I-Sim Best Student Paper Award.
@inproceedings{2015felhunpasWSC,
Year = {2015},
Author = {G. Feldman and S. R. Hunter and R. Pasupathy},
Title = {Multi-objective simulation optimization on finite sets: optimal allocation via scalarization},
Booktitle = {Proceedings of the 2015 Winter Simulation Conference},
Editor = {L. Yilmaz and W. K. V. Chan and I. Moon and T. M. K. Roeder and C. Macal and M. D. Rossetti},
Publisher = {Institute of Electrical and Electronics Engineers, Inc.},
Address = {Piscataway, NJ},
Pages = {3610--3621},
doi = {10.1109/WSC.2015.7408520},
url_Paper = {http://www.informs-sim.org/wsc15papers/412.pdf},
abstract = {We consider the multi-objective simulation optimization problem on finite sets, where we seek the Pareto set corresponding to systems evaluated on multiple performance measures, using only Monte Carlo simulation observations from each system. We ask how a given simulation budget should be allocated across the systems, and a Pareto surface retrieved, so that the estimated Pareto set minimally deviates from the true Pareto set according to a rigorously defined metric. To answer this question, we suggest scalarization, where the performance measures associated with each system are projected using a carefully considered set of weights, and the Pareto set is estimated as the union of systems that dominate across the weight set. We show that the optimal simulation budget allocation under such scalarization is the solution to a bi-level optimization problem, for which the outer problem is concave, but some inner problems are non-convex. We comment on the development of tractable approximations for use when the number of systems is large.},
keywords = {simulation optimization > multi-objective > ranking and selection},
bibbase_note = {<span style="color: green">2015 Winter Simulation Conference I-Sim Best Student Paper Award.</span>}}
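The scalarization step this abstract describes is easy to state concretely. The sketch below is a minimal Python illustration under assumed names: project each system's estimated objective vector onto a finite set of weight vectors, and return the union of per-weight winners as the estimated Pareto set. Note that plain linear weights, as used here, recover only the Pareto systems supported on the convex hull of the front, which is one reason the paper emphasizes a carefully considered weight set.

import numpy as np

def pareto_by_scalarization(means, weights):
    # means:   (k, d) estimated objective values (minimization), one row per system.
    # weights: (w, d) nonnegative weight vectors, one row per scalarization.
    projected = weights @ means.T            # (w, k): score of every system per weight
    winners = np.argmin(projected, axis=1)   # index of the best system per weight
    return sorted(set(winners.tolist()))     # union over the weight set

# Toy usage: five systems on two objectives, eleven evenly spaced weight vectors.
means = np.array([[1.0, 5.0], [2.0, 2.5], [4.0, 1.0], [3.0, 3.5], [5.0, 5.0]])
lam = np.linspace(0.0, 1.0, 11)
weights = np.column_stack([lam, 1.0 - lam])
print(pareto_by_scalarization(means, weights))  # -> [0, 1, 2]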
Optimal sampling laws for bi-objective simulation optimization on finite sets.
Hunter, S. R.; and Feldman, G.
In Yilmaz, L.; Chan, W. K. V.; Moon, I.; Roeder, T. M. K.; Macal, C.; and Rossetti, M. D., editors,
Proceedings of the 2015 Winter Simulation Conference, pages 3749–3757, Piscataway, NJ, 2015. Institute of Electrical and Electronics Engineers, Inc.
@inproceedings{2015hunfelWSC,
Year = {2015},
Author = {S. R. Hunter and G. Feldman},
Title = {Optimal sampling laws for bi-objective simulation optimization on finite sets},
Booktitle = {Proceedings of the 2015 Winter Simulation Conference},
Editor = {L. Yilmaz and W. K. V. Chan and I. Moon and T. M. K. Roeder and C. Macal and M. D. Rossetti},
Publisher = {Institute of Electrical and Electronics Engineers, Inc.},
Address = {Piscataway, NJ},
Pages = {3749--3757},
doi = {10.1109/WSC.2015.7408532},
url_Paper = {http://www.informs-sim.org/wsc15papers/424.pdf},
abstract = {We consider the bi-objective simulation optimization (SO) problem on finite sets, that is, an optimization problem where for each ``system'' the two objective functions are estimated as output from a Monte Carlo simulation. The solution to this bi-objective SO problem is a set of non-dominated systems, also called the Pareto set. In this context, we derive the large deviations rate function for the rate of decay of the probability of a misclassification event as a function of the proportion of sample allocated to each competing system. Notably, we account for the presence of dependence between the estimates of each system's performance on the two objectives. The asymptotically optimal allocation maximizes the rate of decay of the probability of misclassification and is the solution to a concave maximization problem.},
keywords = {simulation optimization > multi-objective > ranking and selection}}
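In schematic form, the optimization problem this abstract describes can be written as follows. The notation is assumed here rather than copied from the paper, and the event-wise rate functions R_e, which encode the correlated bi-objective Gaussian estimates, are left abstract.

\[
  P(\text{misclassification}) = e^{-n\, z(\alpha) + o(n)}, \qquad
  z(\alpha) = \min_{e \in \mathcal{E}} R_e(\alpha),
\]
\[
  \alpha^{*} \in \arg\max_{\alpha} \Big\{\, z(\alpha) \;:\; \sum_{i=1}^{k} \alpha_i = 1,\ \alpha_i \ge 0 \,\Big\},
\]

where \(\alpha_i\) is the fraction of the total budget \(n\) allocated to system \(i\), and \(\mathcal{E}\) collects the misclassification events (a truly Pareto system estimated as dominated, or a dominated system estimated as Pareto). Since each \(R_e\) is concave in \(\alpha\), the pointwise minimum \(z\) is concave, which is why the optimal allocation solves a concave maximization over the simplex.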
Comparing Message Passing Interface and MapReduce for large-scale parallel ranking and selection.
Ni, E. C.; Ciocan, D. F.; Henderson, S. G.; and Hunter, S. R.
In Yilmaz, L.; Chan, W. K. V.; Moon, I.; Roeder, T. M. K.; Macal, C.; and Rossetti, M. D., editors,
Proceedings of the 2015 Winter Simulation Conference, pages 3858–3867, Piscataway, NJ, 2015. Institute of Electrical and Electronics Engineers, Inc.
@inproceedings{2015nicioetalWSC,
Year = {2015},
Author = {E. C. Ni and D. F. Ciocan and S. G. Henderson and S. R. Hunter},
Title = {Comparing {M}essage {P}assing {I}nterface and {M}ap{R}educe for large-scale parallel ranking and selection},
Booktitle = {Proceedings of the 2015 Winter Simulation Conference},
Editor = {L. Yilmaz and W. K. V. Chan and I. Moon and T. M. K. Roeder and C. Macal and M. D. Rossetti},
Publisher = {Institute of Electrical and Electronics Engineers, Inc.},
Address = {Piscataway, NJ},
Pages = {3858--3867},
doi = {10.1109/WSC.2015.7408542},
url_Paper = {http://www.informs-sim.org/wsc15papers/434.pdf},
abstract = {We compare two methods for implementing ranking and selection algorithms in large-scale parallel computing environments. The Message Passing Interface (MPI) provides the programmer with complete control over sending and receiving messages between cores, and is fragile with regard to core failures or messages going awry. In contrast, MapReduce handles all communication and is quite robust, but is more rigid in terms of how algorithms can be coded. As expected in a high-performance computing context, we find that MPI is the more efficient of the two environments, although MapReduce is a reasonable choice. Accordingly, MapReduce may be attractive in environments where cores can stall or fail, such as is possible in low-budget cloud computing.},
keywords = {simulation optimization > single-objective > ranking and selection > parallel}}
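For readers unfamiliar with the MPI programming model this abstract refers to, the sketch below shows a hypothetical master-worker ranking-and-selection skeleton in Python using mpi4py. It illustrates the programming model only and is not the paper's implementation; simulate() is a made-up oracle. Note how the programmer explicitly manages every send and receive, which is exactly the control, and the fragility, discussed above.

# Run with, e.g.: mpiexec -n 4 python rs_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
K = 1000  # number of systems (illustrative; assumes 1 < size <= K + 1)

def simulate(system, n=100):
    # Made-up simulation oracle: the mean of n Gaussian observations.
    rng = np.random.default_rng(system)
    return rng.normal(loc=-1e-3 * system, scale=1.0, size=n).mean()

if rank == 0:
    # Master: hand out one system per worker, collect each estimate,
    # reply with the next system or a stop signal (None).
    results, next_job = {}, 0
    status = MPI.Status()
    for w in range(1, size):              # prime each worker with one job
        comm.send(next_job, dest=w)
        next_job += 1
    while len(results) < K:
        sys_id, est = comm.recv(source=MPI.ANY_SOURCE, status=status)
        results[sys_id] = est
        reply = next_job if next_job < K else None
        comm.send(reply, dest=status.Get_source())
        next_job += 1
    print("sample best:", max(results, key=results.get))
else:
    # Worker: simulate assigned systems until the master sends None.
    while True:
        job = comm.recv(source=0)
        if job is None:
            break
        comm.send((job, simulate(job)), dest=0)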