- Simulation Techniques and Applications
- Advanced Multi-Objective Optimization Algorithms
- Risk and Portfolio Optimization
- Statistical Methods and Inference
- Advanced Statistical Process Monitoring
- Financial Risk and Volatility Modeling
- Stochastic Processes and Financial Applications
- Advanced Database Systems and Queries
- Advanced Queuing Theory Analysis
- Gaussian Processes and Bayesian Inference
- Multi-Criteria Decision Making
- Advanced Bandit Algorithms Research
- Optimal Experimental Design Methods
- Probabilistic and Robust Engineering Design
- Insurance, Mortality, Demography, Risk Management
- Spreadsheets and End-User Computing
- Probability and Risk Models
- Machine Learning and Algorithms
- Fuzzy Systems and Optimization
- Fault Detection and Control Systems
- Healthcare Operations and Scheduling Optimization
- Advanced Control Systems Optimization
- Credit Risk and Financial Regulations
- Software Reliability and Analysis Research
- Reservoir Engineering and Simulation Methods
Fudan University
2017-2024
Hong Kong University of Science and Technology
2007-2020
University of Hong Kong
2007-2020
City University of Hong Kong
2014-2018
Wright State University
2006
Northwestern University
2003-2005
We propose an optimization-via-simulation algorithm, called COMPASS, for use when the performance measure is estimated via a stochastic, discrete-event simulation and the decision variables are integer ordered. We prove that COMPASS converges to the set of local optimal solutions with probability 1 for both terminating and steady-state simulations, and for fully constrained, partially constrained, or unconstrained problems, under mild conditions.
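The sketch below illustrates the flavor of a COMPASS-style local random search on an integer-ordered feasible set; it is not the paper's algorithm, and the noisy objective `simulate(x)` and all parameter values are illustrative assumptions.

```python
# Minimal sketch of a COMPASS-style search: sample new solutions from the
# "most promising area" around the current sample-best solution.
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    # Hypothetical stochastic simulation: true objective plus noise (minimize).
    return np.sum((np.asarray(x) - 3.0) ** 2) + rng.normal(scale=2.0)

def compass_sketch(x0, lower, upper, iterations=100, reps=5, candidates=10):
    visited = {tuple(x0): []}
    best = tuple(x0)
    for _ in range(iterations):
        # Most promising area: feasible points at least as close to the current
        # best as to any other visited solution (checked here by rejection).
        others = [np.array(v) for v in visited if v != best]
        new_points = []
        while len(new_points) < candidates:
            cand = tuple(rng.integers(lower, upper + 1, size=len(x0)))
            d_best = np.linalg.norm(np.array(cand) - np.array(best))
            if all(d_best <= np.linalg.norm(np.array(cand) - o) for o in others):
                new_points.append(cand)
        # Allocate additional replications to the new points and the current best.
        for pt in new_points + [best]:
            visited.setdefault(pt, [])
            visited[pt].extend(simulate(pt) for _ in range(reps))
        # Update the sample best among all visited solutions.
        best = min(visited, key=lambda v: np.mean(visited[v]))
    return best, np.mean(visited[best])

print(compass_sketch(x0=(0, 0), lower=0, upper=10))
```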
When there is parameter uncertainty in the constraints of a convex optimization problem, it is natural to formulate the problem as a joint chance constrained program (JCCP), which requires that all constraints be satisfied simultaneously with a given large probability. In this paper, we propose to solve the JCCP by a sequence of approximations. We show that the solutions of the approximations converge to a Karush-Kuhn-Tucker (KKT) point under a certain asymptotic regime. Furthermore, we use a gradient-based Monte Carlo method...
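As a point of reference (not the paper's sequential-approximation method), a joint chance constraint can be checked at a fixed decision by plain Monte Carlo; the constraint functions `g`, the distribution of the uncertain parameters, and all values below are illustrative assumptions.

```python
# Minimal sketch: estimate P{ g_i(x, xi) <= 0 for all i } >= 1 - alpha by
# sampling the uncertain parameters xi and counting joint satisfaction.
import numpy as np

rng = np.random.default_rng(1)

def g(x, xi):
    # Two illustrative random constraints; each entry is one g_i(x, xi).
    return np.array([xi[0] * x[0] + xi[1] * x[1] - 10.0,
                     x[0] ** 2 + xi[2] * x[1] - 8.0])

def joint_satisfaction_prob(x, n=20_000):
    xi = rng.normal(loc=1.0, scale=0.2, size=(n, 3))
    sat = np.array([np.all(g(x, row) <= 0.0) for row in xi])
    return sat.mean()

x = np.array([1.5, 2.0])
alpha = 0.05
p_hat = joint_satisfaction_prob(x)
print(f"estimated joint satisfaction probability: {p_hat:.4f}",
      "feasible" if p_hat >= 1 - alpha else "infeasible")
```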
Industrial Strength COMPASS (ISC) is a particular implementation of a general framework for optimizing the expected value of a performance measure of a stochastic simulation with respect to integer-ordered decision variables in a finite (but typically large) feasible region defined by linear-integer constraints. The framework consists of a global-search phase, followed by a local-search phase, and ending with a "clean-up" (selection of the best) phase. Each phase provides a probability-1 convergence guarantee as the simulation effort increases without bound:...
Quantiles of a random performance serve as important alternatives to the usual expected value. They are used in the financial industry as measures of risk and of service quality. To manage quantile performance, we need to know how changes in the input parameters affect the output quantiles, which are called quantile sensitivities. In this paper, we show that quantile sensitivities can be written in the form of conditional expectations. Based on the conditional-expectation form, we first propose an infinitesimal-perturbation-analysis (IPA) estimator. The IPA...
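In the spirit of the conditional-expectation form dq_alpha/dtheta = E[dL/dtheta | L = q_alpha], a crude estimator averages pathwise derivatives of observations near the sample quantile. The sketch below is not the paper's IPA estimator; the loss model L(theta) = theta * Z with Z exponential and the bandwidth choice are illustrative assumptions.

```python
# Minimal sketch of a quantile-sensitivity estimate via the
# conditional-expectation form, on a toy loss with a known answer.
import numpy as np

rng = np.random.default_rng(2)

def quantile_sensitivity(theta, alpha=0.9, n=200_000, bandwidth=0.05):
    z = rng.exponential(size=n)
    loss = theta * z                  # L(theta) = theta * Z
    dloss = z                         # pathwise derivative dL/dtheta
    q_hat = np.quantile(loss, alpha)
    # Average pathwise derivatives of observations near the sample quantile.
    near = np.abs(loss - q_hat) <= bandwidth * loss.std()
    return q_hat, dloss[near].mean()

theta, alpha = 2.0, 0.9
q_hat, sens_hat = quantile_sensitivity(theta, alpha)
# For this model the true sensitivity is F_Z^{-1}(alpha) = -ln(1 - alpha).
print(f"q_hat={q_hat:.3f}, estimated dq/dtheta={sens_hat:.3f}, "
      f"true={-np.log(1 - alpha):.3f}")
```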
Conditional value-at-risk (CVaR) is both a coherent risk measure and a natural statistic. It is often used to measure the risk associated with large losses. In this paper, we study how to estimate the sensitivities of CVaR using Monte Carlo simulation. We first prove that the CVaR sensitivity can be written as a conditional expectation for general loss distributions. We then propose an estimator and analyze its asymptotic properties. The numerical results show that the estimator works well. Furthermore, we demonstrate its use in solving optimization problems with objective...
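A minimal sketch of this idea, assuming the conditional-expectation form dCVaR_alpha/dtheta = E[dL/dtheta | L >= VaR_alpha]: estimate VaR by a sample quantile and average the pathwise derivatives over the tail. The loss model L(theta) = theta * Z with Z standard normal is an illustrative assumption.

```python
# Minimal sketch of Monte Carlo estimation of a CVaR sensitivity.
import numpy as np

rng = np.random.default_rng(3)

def cvar_and_sensitivity(theta, alpha=0.95, n=500_000):
    z = rng.standard_normal(n)
    loss = theta * z                     # L(theta) = theta * Z
    dloss = z                            # pathwise derivative dL/dtheta
    var_hat = np.quantile(loss, alpha)   # VaR_alpha estimate
    tail = loss >= var_hat
    cvar_hat = loss[tail].mean()
    sens_hat = dloss[tail].mean()        # tail average of pathwise derivatives
    return var_hat, cvar_hat, sens_hat

theta = 2.0
var_hat, cvar_hat, sens_hat = cvar_and_sensitivity(theta)
# For L = theta * Z, CVaR is linear in theta, so sens_hat should be close
# to cvar_hat / theta.
print(var_hat, cvar_hat, sens_hat, cvar_hat / theta)
```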
Response surface methodology (RSM) is a widely used method for simulation optimization. Its strategy is to explore small subregions of the decision space in succession instead of attempting to explore the entire decision space in a single attempt. This is especially suitable for complex stochastic systems where little knowledge is available. Although RSM is popular in practice, its current applications in simulation optimization treat simulation experiments the same as real experiments. However, the unique properties of simulation experiments make traditional RSM inappropriate in two important aspects: (1) It...
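For context, a single first-order RSM step fits a linear model on a small design around the current center and moves against the estimated gradient. The sketch below is a generic illustration of that step, not the paper's procedure; the response `simulate(x)`, the design, and the step size are assumptions.

```python
# Minimal sketch of one first-order RSM step for a noisy response (minimize).
import numpy as np

rng = np.random.default_rng(4)

def simulate(x):
    # Hypothetical simulation response with noise.
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + rng.normal(scale=0.5)

def rsm_step(center, radius=0.5, step=0.4, reps=3):
    # 2^2 factorial design plus a center point around the current center.
    design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], float)
    X, y = [], []
    for d in design:
        point = center + radius * d
        for _ in range(reps):
            X.append([1.0, *d])
            y.append(simulate(point))
    beta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    gradient = beta[1:]                       # estimated first-order effects
    return center - step * gradient / np.linalg.norm(gradient)

center = np.array([0.0, 0.0])
for _ in range(20):
    center = rsm_step(center)
print(center)   # should end up near the optimum around (2, -1)
```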
Fully sequential ranking-and-selection (R&S) procedures to find the best from a finite set of simulated alternatives are often designed to be implemented on a single processor. However, parallel computing environments, such as multi-core personal computers and many-core servers, are becoming ubiquitous and easily accessible for ordinary users. In this paper, we propose two types of fully sequential procedures that can be used in parallel computing environments. We call them vector-filling procedures and asymptotic parallel selection procedures, respectively. Extensive...
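To show only the basic mechanics of parallelizing simulation for selection (the paper's vector-filling and asymptotic parallel selection procedures involve sequential elimination and coordination that are not reproduced here), the following sketch farms replications of each alternative out to worker processes and picks the largest sample mean; the sampling model and budget are illustrative assumptions.

```python
# Minimal sketch of distributing R&S replications across processor cores.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

TRUE_MEANS = [0.0, 0.2, 0.4, 0.6, 0.8]   # illustrative alternatives

def simulate(args):
    alternative, n_reps, seed = args
    rng = np.random.default_rng(seed)
    return alternative, rng.normal(TRUE_MEANS[alternative], 1.0, size=n_reps)

def parallel_select(n_reps=2_000, workers=4):
    tasks = [(i, n_reps, 100 + i) for i in range(len(TRUE_MEANS))]
    means = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for alt, obs in pool.map(simulate, tasks):
            means[alt] = obs.mean()
    return max(means, key=means.get), means

if __name__ == "__main__":
    best, means = parallel_select()
    print("selected alternative:", best, means)
```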
In this paper, we briefly review the development of ranking-and-selection (R&S) in the past 70 years, especially the theoretical achievements and practical applications of the last 20 years. Different from the frequentist and Bayesian classifications adopted by Kim and Nelson (2006b) and Chick (2006) in their articles, we categorize existing R&S procedures into fixed-precision and fixed-budget procedures, as in Hunter (2017). We show that these two categories essentially differ in their underlying methodological formulations, i.e., they are...
Estimating quantile sensitivities is important in many optimization applications, from hedging in financial engineering to service-level constraints in inventory control to more general chance-constrained stochastic programming. Recently, Hong (Hong, L. J. 2009. Estimating quantile sensitivities. Oper. Res. 57 118–130) derived a batched infinitesimal perturbation analysis estimator for quantile sensitivities, and Liu and Hong (Liu, G., L. J. Hong. 2009. Kernel estimation of quantile sensitivities. Naval Res. Logist. 56 511–525) derived a kernel estimator. Both of these estimators are consistent with...
Optimization via simulation (OvS) is an exciting and fast-developing area for both research and practice. In this article, we introduce three types of OvS problems: R&S problems, continuous OvS problems, and discrete OvS problems, and we discuss the issues and current development of these problems. We also give some suggestions on how to use commercial OvS software in practice.
Value-at-risk (VaR) and conditional value-at-risk (CVaR) are two widely used risk measures of large losses and are employed in the financial industry for risk management purposes. In practice, loss distributions typically do not have closed-form expressions, but they can often be simulated (i.e., random observations of the loss distribution may be obtained by running a computer program). Therefore, Monte Carlo methods that design simulation experiments and utilize the simulated observations are important for the estimation, sensitivity analysis, and optimization of VaRs and CVaRs....
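The basic Monte Carlo point estimates are simple: VaR is a sample quantile of the simulated losses and CVaR is the mean of the tail beyond it. The sketch below is a generic illustration; the lognormal loss model is an assumption.

```python
# Minimal sketch of plain Monte Carlo estimation of VaR and CVaR.
import numpy as np

rng = np.random.default_rng(5)

def var_cvar(loss_sample, alpha=0.99):
    var_hat = np.quantile(loss_sample, alpha)               # VaR: alpha-quantile
    cvar_hat = loss_sample[loss_sample >= var_hat].mean()   # mean of the tail
    return var_hat, cvar_hat

losses = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
print(var_cvar(losses))
```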
Many procedures have been proposed in the literature to select the simulated alternative with the best mean performance from a finite set of alternatives. Among these procedures, frequentist procedures are typically designed under either the subset-selection (SS) formulation or the indifference-zone (IZ) formulation. Both formulations may encounter problems when the goal is to select the unique best alternative for any configuration of the means. In particular, SS procedures may return a subset that contains more than one alternative, and IZ procedures hinge on the relationship between...
Random search algorithms are often used to solve discrete optimization-via-simulation (DOvS) problems. The most critical component of a random search algorithm is the sampling distribution that is used to guide the allocation of the search effort. A good sampling distribution can balance the trade-off between the effort spent searching around the current best solution (which is called exploitation) and the effort spent exploring largely unknown regions (which is called exploration). However, existing sampling distributions for DOvS problems have difficulties in balancing this trade-off in a seamless way. In this paper we propose a new sampling scheme that derives from a fast...
We propose an adaptive hyperbox algorithm (AHA), which is an instance of a locally convergent, random search algorithm for solving discrete optimization-via-simulation problems. Compared to the COMPASS algorithm, AHA is more efficient in high-dimensional problems. By analyzing models of the behavior of COMPASS and AHA, we show why COMPASS slows down significantly as the dimension increases, whereas AHA is less affected. Both algorithms can be used as the local-search phase within the Industrial Strength COMPASS framework, which consists of a global-search phase, a local-search phase, and a final cleanup phase. We compare the performance of the framework...
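The sketch below illustrates an AHA-style "most promising area" construction: around the current sample-best solution, each coordinate's box boundaries are tightened using the nearest visited solutions, and new candidates are drawn uniformly inside the box. It is a simplified illustration under assumed data, not the paper's exact definition.

```python
# Minimal sketch of a hyperbox-based most promising area on an integer lattice.
import numpy as np

rng = np.random.default_rng(6)

def hyperbox(best, visited, lower, upper):
    best = np.asarray(best)
    lo, hi = np.array(lower, float), np.array(upper, float)
    for v in visited:
        v = np.asarray(v)
        if np.array_equal(v, best):
            continue
        for k in range(len(best)):
            if v[k] < best[k]:
                lo[k] = max(lo[k], v[k] + 1)   # tightest lower bound so far
            elif v[k] > best[k]:
                hi[k] = min(hi[k], v[k] - 1)   # tightest upper bound so far
    return lo.astype(int), hi.astype(int)

def sample_in_box(lo, hi, m=5):
    return [tuple(rng.integers(lo[k], hi[k] + 1) for k in range(len(lo)))
            for _ in range(m)]

visited = [(5, 5), (2, 7), (8, 3), (5, 9)]     # illustrative visited solutions
lo, hi = hyperbox((5, 5), visited, lower=(0, 0), upper=(10, 10))
print(lo, hi, sample_in_box(lo, hi))
```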
Integrated assessment models that combine geophysics and economics features are often used to evaluate and compare global warming policies. Because there are typically profound uncertainties in these models, a simulation approach is often used. This approach requires the distribution of the uncertain parameters to be clearly specified. However, this is often impossible because there is a significant amount of ambiguity (e.g., estimation error) in specifying the distribution. In this paper, we adopt the widely used multivariate normal model for the uncertain parameters. We assume that the mean...
Nested estimation involves estimating an expectation of a function of a conditional expectation via simulation. This problem has of late received increasing attention amongst researchers due to its broad applicability, particularly in portfolio risk measurement and in pricing complex derivatives. In this paper, we study a kernel smoothing approach. We analyze its asymptotic properties and present efficient algorithms for practical implementation. While the results suggest that the approach is preferable over nested simulation...
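As a rough illustration of the idea (not the paper's estimator or rates), the sketch below targets rho = E[f(E[Y|X])] by drawing one inner observation per outer scenario and replacing the inner conditional mean with a Nadaraya-Watson kernel regression estimate; the model Y = X + noise, f(t) = max(t, 0), and the bandwidth are assumptions.

```python
# Minimal sketch of a kernel-smoothing estimator for a nested expectation.
import numpy as np

rng = np.random.default_rng(7)

def kernel_smoothing_estimate(n=2_000, bandwidth=None):
    x = rng.standard_normal(n)                  # outer scenarios
    y = x + rng.standard_normal(n)              # one inner observation each
    if bandwidth is None:
        bandwidth = n ** (-1 / 5)               # rule-of-thumb rate
    # Nadaraya-Watson estimate of E[Y | X = x_i] at every outer scenario.
    diffs = (x[:, None] - x[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2)
    cond_mean = weights @ y / weights.sum(axis=1)
    return np.maximum(cond_mean, 0.0).mean()    # plug into f(t) = max(t, 0)

# For this toy model E[Y|X] = X, so the target is E[max(X,0)] = 1/sqrt(2*pi).
print(kernel_smoothing_estimate(), 1 / np.sqrt(2 * np.pi))
```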
Ranking and selection (R&S) aims to select the best alternative with the largest mean performance from a finite set of alternatives. Recently, considerable attention has turned toward the large-scale R&S problem, which involves a large number of alternatives. Ideal large-scale R&S procedures should be sample optimal; that is, the total sample size required to deliver an asymptotically nonzero probability of correct selection (PCS) grows at the minimal order (linear order) in the number of alternatives, k. Surprisingly, we discover that the naïve greedy procedure, which keeps sampling...
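A minimal sketch of a greedy procedure of this flavor: after a few initial replications of every alternative, each new observation goes to the alternative whose current sample mean is largest. The normal sampling model, the configuration of means, and the budget are illustrative assumptions.

```python
# Minimal sketch of a greedy "sample the current leader" procedure.
import numpy as np

rng = np.random.default_rng(8)

def greedy_selection(true_means, n0=10, budget=20_000):
    k = len(true_means)
    counts = np.full(k, n0)
    sums = np.array([rng.normal(m, 1.0, size=n0).sum() for m in true_means])
    for _ in range(budget - n0 * k):
        leader = np.argmax(sums / counts)       # current sample-mean leader
        sums[leader] += rng.normal(true_means[leader], 1.0)
        counts[leader] += 1
    return np.argmax(sums / counts), counts

true_means = np.linspace(0.0, 1.0, 50)          # alternative 49 is the best
selected, counts = greedy_selection(true_means)
print("selected:", selected, "samples used on it:", counts[selected])
```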
Vision Large Language Models (VLLMs) integrate visual data processing, expanding their real-world applications but also increasing the risk of generating unsafe responses. In response, leading companies have implemented multi-layered safety defenses, including alignment training, system prompts, and content moderation. However, their effectiveness against sophisticated adversarial attacks remains largely unexplored. In this paper, we propose MultiFaceted Attack, a novel attack framework designed to...
Statistical Ranking and Selection (R&S) is a collection of experiment design and analysis techniques for selecting the "population" with the largest or smallest mean performance from among a finite set of alternatives. R&S procedures have received considerable research attention in the stochastic simulation community, and they have been incorporated into commercial simulation software. One of the ways that R&S procedures are evaluated and compared is via the expected number of samples (often replications) that must be generated to reach a decision. In this paper we argue...