
MonteCarlo Module

A module containing Markov chain Monte Carlo (MCMC) methods for optimisation. An introduction to MCMC approaches is provided by [Reali, Priami, and Marchetti (2017)](https://doi.org/10.3389/fams.2017.00006).

Types and nested modules

Type/Module Description

Filzbach

An adaptation of the Filzbach method (originally by Drew Purves)

MetropolisWithinGibbs

RandomWalk

SimulatedAnnealing

A meta-heuristic that approximates a global optimum by simulating the slow cooling of a material: the probability of temporarily accepting worse solutions gradually decreases over time.
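The acceptance rule and cooling schedule can be sketched generically (this is an illustration of the algorithm, not Bristlecone's implementation; all names here are hypothetical):

```python
import math
import random

def anneal(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=1000, seed=1):
    """Minimise `cost` by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    x = best = x0
    t = t0
    for _ in range(steps):
        candidate = neighbour(x, rng)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta/t), which shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
        t *= cooling  # slow cooling
    return best

# Example: minimise (x - 3)^2 starting from x = 0.
result = anneal(lambda x: (x - 3.0) ** 2,
                lambda x, rng: x + rng.uniform(-0.5, 0.5),
                x0=0.0)
```

Tracking `best` separately from the current state `x` means the occasional accepted worse move (needed to escape local optima) never loses the best solution found so far.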

TuningMode

TuneMethod

TuneStep

jump-scale

Functions and values

Function or value Description

``Adaptive-Metropolis-withinGibbs``

Full Usage: ``Adaptive-Metropolis-withinGibbs``

Returns: Optimiser

An adaptive Metropolis-within-Gibbs sampler that tunes the variance of each parameter according to the per-parameter acceptance rate. Reference: Bai Y (2009). "An Adaptive Directional Metropolis-within-Gibbs Algorithm." Technical Report, Department of Statistics, University of Toronto.

Returns: Optimiser
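The per-parameter tuning idea can be illustrated outside the library (hypothetical function, not Bristlecone's code; the 0.44 target acceptance rate and shrinking adaptation step follow the adaptive Metropolis-within-Gibbs literature):

```python
import math

def tune_scale(log_sd, accepted, proposed, batch_index, target=0.44):
    """Adapt one parameter's proposal log-std-dev from its acceptance rate.
    The adaptation step shrinks over batches so the chain remains ergodic."""
    rate = accepted / proposed if proposed else 0.0
    step = min(0.01, 1.0 / math.sqrt(batch_index))
    # Too many acceptances -> jumps too small -> widen the proposal;
    # too few acceptances -> jumps too large -> narrow it.
    return log_sd + step if rate > target else log_sd - step

# A parameter accepting 60% of proposals gets a wider proposal distribution;
# one accepting 20% gets a narrower one.
wider = tune_scale(0.0, 30, 50, batch_index=4)
narrower = tune_scale(0.0, 10, 50, batch_index=4)
```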

``Automatic(AdaptiveDiagnostics)``

Full Usage: ``Automatic(AdaptiveDiagnostics)``

Returns: Optimiser

Implementation similar to that proposed by Yang and Rosenthal: "Automatically Tuned General-Purpose MCMC via New Adaptive Diagnostics" Reference: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.7198&rep=rep1&type=pdf

Returns: Optimiser

``Metropolis-withinGibbs``

Full Usage: ``Metropolis-withinGibbs``

Returns: Optimiser

A non-adaptive Metropolis-within-Gibbs sampler. Each parameter is updated individually, unlike in the random walk algorithm.

Returns: Optimiser

adaptiveMetropolis weighting period

Full Usage: adaptiveMetropolis weighting period

Parameters:
Returns: Optimiser

A Markov Chain Monte Carlo (MCMC) sampling algorithm that continually adjusts the covariance matrix based on the recently-sampled posterior distribution. Proposed jumps are therefore tuned to the recent history of accepted jumps.

weighting : float
period : int<MeasureProduct<iteration, MeasureOne>>
Returns: Optimiser
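The covariance adaptation idea can be sketched generically (an illustration of the standard adaptive Metropolis scheme, not the library's code; the `weighting` parameter here is assumed to blend the empirical covariance with the previous proposal covariance):

```python
import numpy as np

def adapt_covariance(history, weighting, previous_cov, epsilon=1e-6):
    """Blend the covariance of recently sampled points into the proposal
    covariance. `weighting` in [0, 1] controls how strongly the empirical
    covariance replaces the previous one."""
    empirical = np.cov(np.asarray(history), rowvar=False)
    d = empirical.shape[0]
    # Scale by 2.38^2/d (asymptotically optimal for Gaussian targets) and
    # regularise so the proposal covariance stays positive definite.
    scaled = (2.38 ** 2 / d) * empirical + epsilon * np.eye(d)
    return weighting * scaled + (1.0 - weighting) * previous_cov

rng = np.random.default_rng(0)
recent_samples = rng.normal(size=(500, 2))  # stand-in for the recent chain
cov = adapt_covariance(recent_samples, weighting=0.5, previous_cov=np.eye(2))
```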

constrainJump initial jump scaleFactor c

Full Usage: constrainJump initial jump scaleFactor c

Parameters:
Returns: float<MeasureProduct<optim-space, MeasureOne>>

Jump in parameter space while reflecting constraints.

initial : float<MeasureProduct<optim-space, MeasureOne>>
jump : float<MeasureProduct<optim-space, MeasureOne>>
scaleFactor : float
c : Constraint
Returns: float<MeasureProduct<optim-space, MeasureOne>>
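The reflection behaviour can be sketched generically (hypothetical code; Bristlecone's `Constraint` type and scaling may differ):

```python
def constrain_jump(initial, jump, scale_factor, lower=None, upper=None):
    """Propose initial + jump * scale_factor, reflecting off optional
    bounds so the proposal stays inside the constrained region."""
    proposed = initial + jump * scale_factor
    # Reflect repeatedly in case a large jump crosses a bound more than once.
    for _ in range(100):
        if lower is not None and proposed < lower:
            proposed = 2 * lower - proposed
        elif upper is not None and proposed > upper:
            proposed = 2 * upper - proposed
        else:
            break
    return proposed

# A jump of -3 from 1 with a positive-only constraint reflects off zero,
# landing at 2 rather than at the invalid value -2.
p = constrain_jump(1.0, -3.0, 1.0, lower=0.0)
```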

metropolisHastings' random writeOut endCondition propose tune f theta1 l1 d scale iteration

Full Usage: metropolisHastings' random writeOut endCondition propose tune f theta1 l1 d scale iteration

Parameters:
Returns: Solution list * 'a

A recursive Metropolis-Hastings algorithm that ends when `endCondition` returns true.

random : Random

`System.Random` to be used for drawing from a uniform distribution.

writeOut : LogEvent -> unit

side-effect function for handling `LogEvent` items.

endCondition : Solution list -> int<MeasureProduct<iteration, MeasureOne>> -> OptimStopReason

`EndCondition` that dictates when the MH algorithm ends.

propose : 'a -> Point -> float<MeasureProduct<optim-space, MeasureOne>>[]

proposal `'scale -> 'theta -> 'theta` that generates a jump based on the scale value.

tune : int<MeasureProduct<iteration, MeasureOne>> -> Solution list -> 'a -> 'a

parameter of type `int -> (float * 'a) list -> 'b -> 'b`, where `int` is the current iteration, `(float * 'a) list` is the chain history so far, and `'b` is the scale to be tuned.

f : TypedTensor<Vector, MeasureProduct<optim-space, MeasureOne>> -> TypedTensor<Scalar, MeasureProduct<-logL, MeasureOne>>

an objective function, `'a -> float`, to optimise.

theta1 : Point

initial position in parameter space of type `'a`.

l1 : TypedTensor<Scalar, MeasureProduct<-logL, MeasureOne>>

initial value of the -log likelihood at `theta1` in parameter space.

d : Solution list

history of the chain, of type `(float * 'a) list`. Passing a list here allows continuation of a previous analysis.

scale : 'a

a scale of type `'b`, which is compatible with the scale tuning function `tune`

iteration : int<MeasureProduct<iteration, MeasureOne>>

the current iteration number

Returns: Solution list * 'a

`(float * 'a) list * 'b` - A tuple containing a list of results, and the final scale used in the analysis. The `(float * 'a) list` represents a list of paired -log likelihood values with the proposed theta.
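The recursion can be sketched generically in Python (illustrative only; the real function also threads a tuning scale, typed tensors, and logging through the recursion):

```python
import math
import random

def metropolis_hastings(neg_log_l, propose, theta, l, history,
                        end_condition, rng=None, iteration=0):
    """Tail-recursive Metropolis-Hastings on -log likelihood: accept a
    proposal with probability exp(l_old - l_new), otherwise keep theta."""
    rng = rng or random.Random(0)
    if end_condition(history, iteration):
        return history
    candidate = propose(theta, rng)
    l2 = neg_log_l(candidate)
    # A lower -log likelihood is better; worse proposals are still
    # accepted stochastically, which lets the chain explore.
    if l2 < l or rng.random() < math.exp(l - l2):
        theta, l = candidate, l2
    # History pairs each -log likelihood with its theta, as in the docs.
    return metropolis_hastings(neg_log_l, propose, theta, l,
                               history + [(l, theta)], end_condition,
                               rng, iteration + 1)

# Sample around the minimum of (x - 2)^2 for 200 iterations.
chain = metropolis_hastings(
    lambda x: (x - 2.0) ** 2,
    lambda x, rng: x + rng.uniform(-0.5, 0.5),
    theta=0.0, l=4.0, history=[],
    end_condition=lambda h, i: i >= 200)
```

Passing an existing list as `history` continues a previous run, mirroring the `d` parameter above.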

randomWalk tuningSteps

Full Usage: randomWalk tuningSteps

Parameters:
Returns: Optimiser

A Markov Chain Monte Carlo (MCMC) sampling algorithm that randomly 'walks' through an n-dimensional posterior distribution of the parameter space. Specify `tuningSteps` to prime the jump size before the random walk begins.

tuningSteps : TuneStep seq
Returns: Optimiser
