Session VI
Speaker | Title | Materials |
---|---|---|
Jana de Wiljes (Universität Potsdam, DE) | Sequential learning for decision support under uncertainty | video, slides |
Björn SprungkTU Freiburg, DE | Noise-level robust Monte Carlo methods for Bayesian inference with infomative data Noise-level robust Monte Carlo methods for Bayesian inference with infomative data The Bayesian approach to inverse problems provides a rigorous framework for the incor-poration and quantification of uncertainties in measurements, parameters and models. However, sampling from or integrating w.r.t. the resultung posterior measure can become computationally challenging. In recent years, a lot of effort has been spent on deriving dimension-independent methods and to combine efficient sampling strategies with multilevel or surrogate methods in order to reduce the computational burden of Bayesian inverse problems. In this talk, we are interested in designing numerical methods which are robust w.r.t. the size of the observational noise, i.e., methods which behave well in case of concentrated posterior measures. The concentration of the posterior is a highly desirable situation in practice, since it relates to informative or large data. However, it can pose as well a significant computational challenge for numerical methods based on the prior or reference measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a-posteriori estimate (MAPE) and with covariance matrix depending on the Hessian of the log posterior density at the MAPE. We discuss convergence results of the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based importance sampling and quasi-Monte-Carlo as well as Laplace-based Metropolis-Hastings algorithms are robust w.r.t. the concentration of the posterior for large classes of posterior distributions and integrands whereas prior-based Monte Carlo sampling methods are not. |
video slides |
Video Recordings
Jana de Wiljes: Sequential learning for decision support under uncertainty
Abstract: In many application areas there is a need to determine a control variable that optimizes a pre-specified objective. This problem is particularly challenging when knowledge of the underlying dynamics is subject to various sources of uncertainty. Such a scenario arises, for instance, in the context of therapy individualization, where the goal is to improve the efficacy and safety of medical treatment. Mathematical models describing the pharmacokinetics and pharmacodynamics of a drug, together with data on associated biomarkers, can be leveraged to support decision-making by predicting therapy outcomes. We present a continuous learning strategy that follows a novel sequential Monte Carlo tree search approach and explore how the underlying uncertainties are reflected in the approximated control variable.
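As a concrete (purely illustrative) picture of simulation-based decision support of this kind, the sketch below selects a control by plain Monte Carlo rollouts over a particle ensemble of uncertain model parameters. This is a much-simplified stand-in for the sequential Monte Carlo tree search described in the talk, and all model and function names are hypothetical.

```python
# Illustrative sketch only (not the speaker's method): select a control
# (e.g. a drug dose) by averaging a toy pharmacodynamic objective over a
# particle ensemble representing parameter uncertainty.
import numpy as np

rng = np.random.default_rng(42)

def simulate_biomarker(dose, theta):
    """Toy response model: biomarker level after one dosing interval."""
    clearance, effect = theta
    return effect * dose * np.exp(-clearance)

def objective(biomarker, target=1.0):
    """Penalize deviation from a therapeutic target value."""
    return -(biomarker - target) ** 2

def choose_dose(doses, theta_particles):
    """Pick the dose maximizing the expected objective over the current
    particle approximation of the parameter posterior."""
    expected = [np.mean([objective(simulate_biomarker(d, th))
                         for th in theta_particles]) for d in doses]
    return doses[int(np.argmax(expected))]

# Particle ensemble for the uncertain parameters (clearance, effect);
# in a sequential scheme this ensemble would be updated after each new
# biomarker measurement before the next control is chosen.
particles = rng.normal(loc=[1.0, 0.8], scale=0.2, size=(500, 2))
best = choose_dose(np.linspace(0.5, 5.0, 10), particles)
print(f"selected dose: {best:.2f}")
```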
Björn Sprungk: Noise-level robust Monte Carlo methods for Bayesian inference with informative data
Abstract: The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters, and models. However, sampling from or integrating w.r.t. the resulting posterior measure can become computationally challenging. In recent years, much effort has been spent on deriving dimension-independent methods and on combining efficient sampling strategies with multilevel or surrogate methods in order to reduce the computational burden of Bayesian inverse problems.
In this talk, we are interested in designing numerical methods which are robust w.r.t. the size of the observational noise, i.e., methods which behave well in the case of concentrated posterior measures. Concentration of the posterior is a highly desirable situation in practice, since it corresponds to informative or large data. However, it can also pose a significant computational challenge for numerical methods based on the prior or reference measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a posteriori estimate (MAPE), with covariance matrix depending on the Hessian of the log-posterior density at the MAPE. We discuss convergence results for the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based importance sampling, Laplace-based quasi-Monte Carlo, and Laplace-based Metropolis-Hastings algorithms are robust w.r.t. the concentration of the posterior for large classes of posterior distributions and integrands, whereas prior-based Monte Carlo sampling methods are not.
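To make the Laplace-based construction concrete, here is a minimal sketch (an illustration under stated assumptions, not the speaker's implementation): it locates the MAPE numerically, builds the Gaussian Laplace approximation from a finite-difference Hessian of the negative log posterior, and uses it as a self-normalized importance-sampling proposal. The function names and the toy posterior are invented for the example.

```python
# Minimal sketch of Laplace-based importance sampling (illustrative only).
# neg_log_post: negative log posterior density (up to an additive constant).
import numpy as np
from scipy.optimize import minimize

def laplace_importance_sampling(neg_log_post, x0, integrand,
                                n_samples=10_000, seed=0):
    """Estimate E_posterior[integrand] using N(MAPE, H^{-1}) as proposal,
    where H is the Hessian of neg_log_post at the MAPE."""
    # 1) Locate the maximum a posteriori estimate (MAPE).
    map_est = minimize(neg_log_post, x0).x
    # 2) Finite-difference Hessian of the negative log posterior at the MAPE.
    d, eps = len(map_est), 1e-5
    H = np.zeros((d, d))
    E = np.eye(d) * eps  # scaled basis vectors for central differences
    for i in range(d):
        for j in range(d):
            H[i, j] = (neg_log_post(map_est + E[i] + E[j])
                       - neg_log_post(map_est + E[i] - E[j])
                       - neg_log_post(map_est - E[i] + E[j])
                       + neg_log_post(map_est - E[i] - E[j])) / (4 * eps**2)
    # 3) Sample from the Gaussian Laplace approximation N(MAPE, H^{-1}).
    rng = np.random.default_rng(seed)
    xs = rng.multivariate_normal(map_est, np.linalg.inv(H), size=n_samples)
    # 4) Self-normalized importance weights (normalizing constants cancel).
    log_post = -np.array([neg_log_post(x) for x in xs])
    diff = xs - map_est
    log_prop = -0.5 * np.einsum('ni,ij,nj->n', diff, H, diff)
    log_w = log_post - log_prop
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return float(np.sum(w * np.array([integrand(x) for x in xs])))

# Toy 1D posterior: tight Gaussian likelihood (small noise, i.e. a
# concentrated posterior) combined with a standard normal prior.
sigma = 0.05
nlp = lambda x: 0.5 * ((x[0] - 2.0) / sigma) ** 2 + 0.5 * x[0] ** 2
print(laplace_importance_sampling(nlp, np.zeros(1), lambda x: x[0]))
```

Because the proposal already matches the concentrated posterior around the MAPE, the importance weights stay well behaved as the noise level shrinks, whereas a prior-based proposal would place almost no samples in the region carrying the posterior mass.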