MUQ  0.4.3
Markov chain Monte Carlo

Markov chain Monte Carlo (MCMC)

MCMC algorithms construct ergodic Markov chains \(\{x^{(1)}, \ldots, x^{(N)}\}\) whose states can be used as samples in Monte Carlo approximations, such as

\[ \int_\Omega h(x) p(x)\, dx \approx \frac{1}{N} \sum_{i=1}^{N} h\left(x^{(i)}\right) = \hat{h}. \]

The Markov chain is defined in terms of a transition kernel \(K_n(x^{(n+1)} | x^{(n)})\), which is a probability distribution over the next state in the chain \(x^{(n+1)}\) given the current state in the chain \(x^{(n)}\). There are many different ways of constructing the transition kernel; one of the most common approaches is to use the Metropolis-Hastings (MH) rule. The MH rule constructs the kernel from a proposal distribution \(q(x^\prime | x^{(n)})\) and evaluations of the target density \(p(x)\). In particular, the transition kernel is defined by the following process:

  1. Draw a random sample \(x^\prime \sim q(x^\prime | x^{(n)})\) from the proposal distribution.
  2. Compute the acceptance probability

    \[ \alpha = \min\left\{1,\quad \frac{p(x^\prime) q(x^{(n)} | x^\prime)}{p(x^{(n)}) q(x^\prime | x^{(n)})} \right\} \]

  3. Set \(x^{(n+1)} = x^\prime\) with probability \(\alpha\). Otherwise, set \(x^{(n+1)} = x^{(n)}\).

MUQ structures its sampling module to mimic the components needed by the Metropolis-Hastings rule. The SamplingProblem interfaces with the target distribution \(p(x)\) and (optionally) the quantity of interest \(h(x)\). Typically, \(p(x)\) and \(h(x)\) are defined as ModPieces. (See the Modeling chapter for more details on constructing ModPieces.) Once the SamplingProblem is defined, an MCMC algorithm is constructed by combining an instance of the TransitionKernel class with one or more instances of the MCMCProposal class.

Quickstart

Here's an example of constructing and running an MCMC sampler in MUQ. This example assumes that a ModPiece called tgtDens has been defined to compute \(\log p(x)\) and that a vector x0 has been defined for the initial state \(x^{(0)}\) of the chain.

C++
// Construct the sampling problem from the target density
auto problem = std::make_shared<SamplingProblem>(tgtDens);

// Define proposal
boost::property_tree::ptree propOpts;
propOpts.put("ProposalVariance", 1.0);
auto prop = std::make_shared<MHProposal>(propOpts, problem);

// Use the proposal to construct a Metropolis-Hastings kernel
boost::property_tree::ptree kernOpts;
auto kern = std::make_shared<MHKernel>(kernOpts, problem, prop);

// Construct the MCMC sampler using this transition kernel
boost::property_tree::ptree chainOpts;
chainOpts.put("NumSamples", 2000);
chainOpts.put("BurnIn", 10);
chainOpts.put("PrintLevel", 3);
auto sampler = std::make_shared<SingleChainMCMC>(chainOpts, kern);

// Run the MCMC algorithm to generate samples. x0 is the starting location
auto samps = sampler->Run(x0);

Details on defining MCMC algorithms can be found in the Getting Started: Defining an MCMC Algorithm module below.

Complete introductory examples are available in both Python and C++.

Modules

 Getting Started: Defining an MCMC Algorithm
 
 Dimension-Independent MCMC
 
 Markov Chain Diagnostics
 
 Multi-Index MCMC
 Tools for defining and running multilevel and multi-index MCMC algorithms.
 
 MCMC Proposal Distributions
 
 MCMC Kernels
 Transition kernels used in MCMC algorithms.
 

Classes

class  muq::SamplingAlgorithms::SampleEstimator
 Abstract base class for computing sample-based approximations of expectations. More...