A composable Markov Chain Monte Carlo (MCMC) framework for arbitrary state spaces in Rust.
🚧 Pre-release (0.x) — This crate is under active development. APIs may change between minor versions.
See CHANGELOG.md for release history.
This crate provides:

- A generic Metropolis–Hastings implementation
- Two proposal models:
  - `Proposal<S>` — clone-based, for simple/small state spaces
  - `ProposalMut<S>` — in-place mutation with rollback, for large combinatorial state spaces (triangulations, graphs) where cloning is expensive
- `Chain<S>` with `step` (clone-based) and `step_mut` (in-place) methods
- `Sampler<S, T, P, R>` — ergonomic wrapper that bundles a chain with its target, proposal, and RNG; supports `run(n)`/`run_mut(n)` for bulk sampling and implements `Iterator`
- NaN and +∞ detection with automatic state rollback on error
- Chain statistics: `acceptance_rate()`, `total_steps()`, `reset_counters()`
- Seeded RNG support for reproducible simulations
The design emphasizes:
- Zero-cost abstractions
- Log-space numerical stability
- Extensibility for research and experimentation
A minimal example with a clone-based random-walk proposal targeting a standard normal:

```rust
use markov_chain_monte_carlo::prelude::*;
use rand::{Rng, SeedableRng, rngs::StdRng};
#[derive(Clone)]
struct Scalar(f64);
struct Normal;
impl Target<Scalar> for Normal {
fn log_prob(&self, state: &Scalar) -> f64 {
-0.5 * state.0 * state.0
}
}
struct RandomWalk { width: f64 }
impl Proposal<Scalar> for RandomWalk {
fn propose<R: Rng + ?Sized>(&self, current: &Scalar, rng: &mut R) -> Scalar {
Scalar(current.0 + rng.random_range(-self.width..self.width))
}
}
fn main() -> Result<(), McmcError> {
let mut rng = StdRng::seed_from_u64(42);
let mut chain = Chain::new(Scalar(0.0), &Normal)?;
for _ in 0..1000 {
chain.step(&Normal, &RandomWalk { width: 1.0 }, &mut rng)?;
}
assert!(chain.acceptance_rate() > 0.2);
Ok(())
}
```

Bulk sampling is more ergonomic through `Sampler`, which bundles the chain with its target, proposal, and RNG (setup lines are hidden doc-test style with `#`):

```rust
use markov_chain_monte_carlo::prelude::*;
use rand::{Rng, SeedableRng, rngs::StdRng};
# #[derive(Clone)] struct Scalar(f64);
# struct Normal;
# impl Target<Scalar> for Normal {
# fn log_prob(&self, s: &Scalar) -> f64 { -0.5 * s.0 * s.0 }
# }
# struct RandomWalk;
# impl Proposal<Scalar> for RandomWalk {
# fn propose<R: Rng + ?Sized>(&self, c: &Scalar, r: &mut R) -> Scalar {
# Scalar(c.0 + r.random_range(-1.0..1.0))
# }
# }
fn main() -> Result<(), McmcError> {
let mut rng = StdRng::seed_from_u64(42);
let chain = Chain::new(Scalar(0.0), &Normal)?;
let mut sampler = Sampler::new(chain, &Normal, &RandomWalk, &mut rng);
// Burn-in
sampler.run(1000)?;
sampler.chain.reset_counters();
// Production
sampler.run(10_000)?;
assert!(sampler.chain.acceptance_rate() > 0.0);
Ok(())
}
```

For large states where cloning is expensive, implement `ProposalMut` and advance the chain with `step_mut`:

```rust
use markov_chain_monte_carlo::prelude::*;
use rand::{Rng, SeedableRng, rngs::StdRng};
struct SpinChain { spins: Vec<i8> } // not Clone
struct Ising;
impl Target<SpinChain> for Ising {
fn log_prob(&self, state: &SpinChain) -> f64 {
state.spins.windows(2)
.map(|w| f64::from(w[0]) * f64::from(w[1]))
.sum()
}
}
struct SpinFlip;
impl ProposalMut<SpinChain> for SpinFlip {
type Undo = usize; // which site was flipped
fn propose_mut<R: Rng + ?Sized>(&self, state: &mut SpinChain, rng: &mut R) -> Option<usize> {
if state.spins.is_empty() { return None; }
let idx = rng.random_range(0..state.spins.len());
state.spins[idx] *= -1;
Some(idx)
}
fn undo(&self, state: &mut SpinChain, idx: usize) {
state.spins[idx] *= -1;
}
}
fn main() -> Result<(), McmcError> {
let mut rng = StdRng::seed_from_u64(42);
let mut chain = Chain::new(SpinChain { spins: vec![1; 20] }, &Ising)?;
for _ in 0..1000 {
chain.step_mut(&Ising, &SpinFlip, &mut rng)?;
}
assert!(chain.acceptance_rate() > 0.0);
Ok(())
}
```

This crate is part of a broader ecosystem:

- `causal-triangulations` — CDT physics and simulation
- `delaunay` — geometric primitives
- `la-stack` — linear algebra
The long-term architecture separates:
- Geometry (triangulations)
- Sampling (this crate)
- Physics (CDT, actions, observables)
Planned features:

- Adaptive Metropolis–Hastings
- Simulated annealing / tempering
- Parallel chains
- Diagnostics (ESS, autocorrelation)
- Learned proposals (ML integration)
- `serde` feature for chain checkpointing