Stein Variational Reduced Basis Bayesian Inversion

Authors

Abstract

We propose and analyze a Stein variational reduced basis method (SVRB) to solve large-scale PDE-constrained Bayesian inverse problems. To address the computational challenge of drawing numerous samples from the posterior distribution, each requiring expensive PDE solves, we integrate an adaptive, goal-oriented model reduction technique with an optimization-based Stein variational gradient descent method. We present detailed analyses of the approximation errors of the potential and its gradient, of the induced posterior distribution measured by the Kullback--Leibler divergence, as well as of the samples. To demonstrate the accuracy and efficiency of SVRB, we report results of numerical experiments on a problem governed by a diffusion equation with random parameters, for both uniform and Gaussian prior distributions. Over 100X speedups can be achieved while accuracy is preserved.


Related Articles

Sparse-Grid, Reduced-Basis Bayesian Inversion

We analyze reduced basis acceleration of recently proposed deterministic Bayesian inversion algorithms for partial differential equations with an uncertain, distributed parameter, for observation data subject to additive Gaussian observation noise. Specifically, we consider Bayesian inversion of affine-parametric, linear operator families on possibly high-dimensional parameter spaces. We consider “high-fideli...


Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm

We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method...
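The particle transport described in this abstract has a compact closed form: each particle moves along a kernel-weighted average of the target score, plus a repulsive kernel-gradient term that keeps the particles spread out. Below is a minimal NumPy sketch of one SVGD update with an RBF kernel; the fixed bandwidth `h`, the step size, and the toy Gaussian target in the usage example are illustrative assumptions (practical implementations typically set `h` by a median heuristic).

```python
import numpy as np

def svgd_step(x, grad_logp, stepsize=0.1, h=1.0):
    """One SVGD update on particles x of shape (n, d).

    phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * grad log p(x_j)
                               + grad_{x_j} k(x_j, x_i) ]
    with the RBF kernel k(a, b) = exp(-||a - b||^2 / h).
    """
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]      # diff[j, i] = x_j - x_i, shape (n, n, d)
    K = np.exp(-np.sum(diff**2, axis=-1) / h) # kernel matrix, shape (n, n)
    gradK = -2.0 / h * diff * K[..., None]    # grad of k(x_j, x_i) w.r.t. x_j
    g = grad_logp(x)                          # score at each particle, shape (n, d)
    phi = (K.T @ g + gradK.sum(axis=0)) / n   # attraction + repulsion
    return x + stepsize * phi

# Illustrative usage: transport particles toward a 1D Gaussian N(2, 1).
rng = np.random.default_rng(0)
particles = rng.normal(size=(100, 1))
score = lambda x: -(x - 2.0)                  # grad log p for N(2, 1)
for _ in range(500):
    particles = svgd_step(particles, score)
```

The first term pulls particles toward high-density regions of the target; the kernel-gradient term acts as a deterministic repulsion, which is what lets a finite particle set approximate the whole distribution rather than collapsing to the mode.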


Stein Variational Policy Gradient

Policy gradient methods have been successfully applied to many complex reinforcement learning problems. However, policy gradient methods suffer from high variance, slow convergence, and inefficient exploration. In this work, we introduce a maximum entropy policy optimization framework which explicitly encourages parameter exploration, and show that this framework can be reduced to a Bayesian in...


Stein Variational Autoencoder

A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupe...


Variational Inference with Stein Mixtures

Obtaining uncertainty estimates is increasingly important in modern machine learning, especially as models are given an increasing amount of responsibility. Yet, as the tasks undertaken by automation become more complex, so do the models and accompanying inference strategies. In fact, exact inference is often impossible in practice for modern probabilistic models. Thus, performing variational i...



Journal

Journal title: SIAM Journal on Scientific Computing

Year: 2021

ISSN: 1095-7197, 1064-8275

DOI: https://doi.org/10.1137/20m1321589