Optimisation Methods

Bristlecone includes a suite of in-built local and global optimisation methods. You can also plug in a custom optimisation method defined in another library, or combine it with the in-built techniques.

Defining a custom optimisation method

A custom optimisation procedure can be plugged into Bristlecone as follows:

open Bristlecone

let myCustomOptimiser : EstimationEngine.Optimise<float> =
    fun random writeOut endCondition domain f ->
        invalidOp "Doesn't actually do anything!"

let engine =
    Bristlecone.mkContinuous
    |> Bristlecone.withCustomOptimisation myCustomOptimiser
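
As a more concrete sketch, a naive random-search optimiser could look like the following. This example assumes that EstimationEngine.Solution<float> is a pair of the function value and the parameter vector; check the Bristlecone API reference for the exact shape.

// A minimal sketch of a working custom optimiser: naive random search
// within the parameter bounds. Assumes Solution<float> is a
// (function value, parameter vector) pair.
let randomSearch : EstimationEngine.Optimise<float> =
    fun random writeOut endCondition domain f ->
        // Draw a candidate uniformly at random within each parameter's bounds
        let drawPoint () =
            domain |> Array.map (fun (low, high, _) ->
                low + random.NextDouble() * (high - low))
        // Keep sampling until the end condition is satisfied
        let rec search solutions =
            if endCondition solutions then solutions
            else
                let point = drawPoint ()
                search ((f point, point) :: solutions)
        search []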

Included optimisation methods

The library includes Monte Carlo-based methods and amoeba (Nelder-Mead simplex) methods. All methods are included in this namespace:

open Bristlecone.Optimisation

Simulated Annealing (SA)

Simulated annealing models a minimisation problem in terms of particle entropy, where the function value is analogous to energy. At high temperatures, particles are in a high-energy state and can move readily. As the temperature gradually decreases, particles become less able to move to high-energy states, eventually arranging into the ‘ground state’ of a solid material.

The SA implementation in Bristlecone includes classical SA, which draws jumps from a Gaussian distribution and accepts them according to a Boltzmann probability, and fast simulated annealing (FSA), which draws jumps from a Cauchy distribution:

let settings = MonteCarlo.SimulatedAnnealing.AnnealSettings<float>.Default

// The first argument is the jump scale; the second sets whether the
// proposal is temperature-dependent (see below).
let classicalSA : EstimationEngine.Optimise<float> = 
    MonteCarlo.SimulatedAnnealing.classicalSimulatedAnnealing 0.01 false settings

let fastSA : EstimationEngine.Optimise<float> = 
    MonteCarlo.SimulatedAnnealing.fastSimulatedAnnealing 0.01 false settings

The FSA approach explores distant regions of the parameter space more quickly and is therefore less prone to becoming trapped in local minima.

For the SA implementation in Bristlecone, you must specify whether you wish to use a temperature-dependent or temperature-independent proposal; this is the boolean argument in the calls above. When temperature-dependent, the jump size increases at higher temperatures, allowing coarse exploration of the parameter space initially, progressively becoming more refined as the system cools.
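
For example, passing true as the second argument switches the fast annealing optimiser above to a temperature-dependent proposal:

// Temperature-dependent proposal: jump sizes shrink as the system cools
let fastSAtempDependent : EstimationEngine.Optimise<float> = 
    MonteCarlo.SimulatedAnnealing.fastSimulatedAnnealing 0.01 true settings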

Nelder-Mead Methods

The Nelder-Mead simplex (or amoeba method) expands and contracts an n-dimensional simplex within an unconstrained parameter space, following a ‘downhill’ gradient. The implementation here draws a random starting point from a normal distribution, informed by the starting bounds for each parameter.

The Nelder-Mead simplex is most appropriate for simpler likelihood surfaces. It can be used as follows:

let settingsNM = Amoeba.Solver.Default

// Named to avoid shadowing the built-in F# 'single' type abbreviation
let singleAmoeba : EstimationEngine.Optimise<float> = 
    Amoeba.Solver.solve settingsNM

A single Nelder-Mead solver is highly susceptible to local minima. To reduce their influence, Bristlecone includes a swarm implementation of the Nelder-Mead simplex, which creates an ensemble of 20 amoebas, all of which crawl the likelihood surface. After n iterations, any amoeba whose value lies above the 80th percentile of the negative log-likelihood values is discarded. The procedure continues for five levels. The swarm mode can be used as follows:

let levels = 5
let individuals = 20

let swarmMode : EstimationEngine.Optimise<float> = 
    Amoeba.swarm levels individuals settingsNM

Monte Carlo methods

Markov Chain Monte Carlo (MCMC) methods are most often applied in Bayesian analyses, but they can also be used for optimisation by minimising the negative log-likelihood. Within Bristlecone, a number of variations of Monte Carlo methods are included.

Often, it is appropriate to use multiple methods in combination: initially, one or more tuning algorithms are used to enhance the performance of the search; once the tuning phase is complete, a search is conducted to identify the minimum; finally, a sampling algorithm may be run to explore the distribution around the minimum.

Random Walk Metropolis-Hastings

The random walk algorithm may optionally be passed a list of tuning steps, which are run before the final random walk.

// Random walk with no tuning steps
let randomWalk : EstimationEngine.Optimise<float> = 
    MonteCarlo.randomWalk []

// Random walk with 50,000 iterations of tuning, during
// which the individual parameter jump sizes are scaled
// every 500 iterations.
let randomWalkWithTuning : EstimationEngine.Optimise<float> =
    [ MonteCarlo.TuneMethod.Scale, 500, EndConditions.afterIteration 50000 ]
    |> MonteCarlo.randomWalk
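
Tuning steps may also be chained to implement the multi-phase approach described above. The particular tuning methods, frequencies, and end conditions below are illustrative choices only:

// Two tuning phases: scale-based tuning, then covariance-based tuning,
// before the final random walk begins.
let randomWalkTwoPhaseTuning : EstimationEngine.Optimise<float> =
    [ MonteCarlo.TuneMethod.Scale, 500, EndConditions.afterIteration 25000
      MonteCarlo.TuneMethod.CovarianceWithScale 0.250, 200, EndConditions.afterIteration 50000 ]
    |> MonteCarlo.randomWalk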

Adaptive Metropolis (AM)

The AM algorithm continuously tunes the covariance structure of the jump distribution based on the size of jumps in the recently-explored parameter space.

let weighting = 0.250 // Weight to give to recent history versus existing covariance structure
let frequency = 200 // Tune the covariance structure every 200 iterations

let am : EstimationEngine.Optimise<float> = 
    MonteCarlo.adaptiveMetropolis weighting frequency

Filzbach-style Monte Carlo

The Filzbach algorithm was introduced by Drew Purves of Microsoft Research Cambridge, and was designed for problems in ecological research. It contains a burn-in phase and a sampling phase. Four settings are required: the length of the burn-in phase (which can be any Bristlecone EndCondition); the minimum and maximum scale changes that can occur within the tuning phase, based on the ranges given for each parameter in the model definition; and the number of changes after which to conduct parameter tuning.

open Bristlecone.Optimisation.MonteCarlo.Filzbach

let settingsFB = { TuneAfterChanges = 20 // number of changes between parameter tuning
                   MaxScaleChange = 100. // maximum scale change during tuning
                   MinScaleChange = 0.01 // minimum scale change during tuning
                   BurnLength = EndConditions.afterIteration 100000 } // length of burn-in phase

let optim : EstimationEngine.Optimise<float> = 
    filzbach settingsFB

'Automatic' optimisation

Implementation similar to that proposed by Yang and Rosenthal: "Automatically Tuned General-Purpose MCMC via New Adaptive Diagnostics" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.7198&rep=rep1&type=pdf).

let auto = MonteCarlo.``Automatic (Adaptive Diagnostics)``

Adaptive-Metropolis-within Gibbs

An adaptive Metropolis-within-Gibbs sampler that tunes the variance of each parameter according to the per-parameter acceptance rate. Reference: Bai Y (2009). “An Adaptive Directional Metropolis-within-Gibbs Algorithm.” Technical report, Department of Statistics, University of Toronto.

let amwg = MonteCarlo.``Adaptive-Metropolis-within Gibbs``

Metropolis-within Gibbs

A non-adaptive Metropolis-within-Gibbs sampler. Each parameter is updated individually, unlike in the random walk algorithm.

let mwg = MonteCarlo.``Metropolis-within Gibbs``
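
Any of the in-built optimisers above can be attached to an estimation engine in the same way as a custom optimiser, for example:

// Use the Metropolis-within-Gibbs sampler for estimation
let gibbsEngine =
    Bristlecone.mkContinuous
    |> Bristlecone.withCustomOptimisation mwg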