Optimisation Methods

Bristlecone includes a suite of in-built local and global optimisation methods. You can also use a custom optimisation method defined in another library, or combine such methods with the in-built techniques.

Defining a custom optimisation method

A custom optimisation procedure can be plugged into Bristlecone as follows:

open Bristlecone

let myCustomOptimiser: EstimationEngine.Optimiser<float> =
    EstimationEngine.InDetachedSpace
    <| fun random writeOut endCondition domain startPoint f ->
        invalidOp "Doesn't actually do anything!"

let engine =
    Bristlecone.mkContinuous |> Bristlecone.withCustomOptimisation myCustomOptimiser
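
An optimiser may work either in 'detached' space, where the optimisation algorithm is responsible for respecting parameter constraints, or in 'transformed' space, where Bristlecone handles parameter constraints automatically. A sketch of the same do-nothing function wrapped in the transformed-space case:

let myTransformedOptimiser: EstimationEngine.Optimiser<float> =
    EstimationEngine.InTransformedSpace
    <| fun random writeOut endCondition domain startPoint f ->
        invalidOp "Doesn't actually do anything!"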

Included optimisation methods

The library itself includes Monte Carlo-based methods and Amoeba (Nelder-Mead simplex) methods. All methods are included in this namespace:

open Bristlecone.Optimisation

Simulated Annealing (SA)

Simulated annealing models a minimisation problem in terms of particle entropy, where the function value is analogous to energy. At high temperatures, particles are in a high-energy state and can move readily. As the temperature gradually decreases, particles are less able to move to high-energy states, eventually arranging into the 'ground state' of a solid material.

The SA implementation in Bristlecone includes classical SA that uses a Boltzmann machine for jumps, and fast simulated annealing (FSA) that uses a Cauchy distribution for jumps:

let settings = MonteCarlo.SimulatedAnnealing.AnnealSettings<float>.Default

// Classical simulated annealing, using a Gaussian candidate distribution,
// with a jump scale of 0.01 and a temperature-independent proposal (false):
MonteCarlo.SimulatedAnnealing.classicalSimulatedAnnealing 0.01 false settings

// Fast simulated annealing, using a Cauchy candidate distribution,
// with the same scale and proposal setting:
MonteCarlo.SimulatedAnnealing.fastSimulatedAnnealing 0.01 false settings
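
The default AnnealSettings may be customised using record update syntax. A minimal sketch, in which the field values are illustrative rather than recommendations:

// Customise the default anneal settings (illustrative values only):
let customSettings =
    { MonteCarlo.SimulatedAnnealing.AnnealSettings<float>.Default with
        BoilingAcceptanceRate = 0.85
        HeatStepLength = EndConditions.afterIteration 1000
        AnnealStepLength = EndConditions.afterIteration 10000 }

MonteCarlo.SimulatedAnnealing.fastSimulatedAnnealing 0.01 false customSettings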

The FSA approach explores distant regions of the parameter space more quickly and is therefore less prone to becoming trapped in local minima.

For the SA implementation in Bristlecone, you must specify whether to use a temperature-dependent or temperature-independent proposal; this is the boolean argument in the calls above. With a temperature-dependent proposal, the jump size increases at higher temperatures, allowing coarser exploration of the parameter space early on, with proposals becoming progressively more refined as the system cools.
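
For example, passing true as the second argument enables the temperature-dependent proposal:

// Fast simulated annealing with a temperature-dependent proposal:
MonteCarlo.SimulatedAnnealing.fastSimulatedAnnealing 0.01 true settings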

Nelder-Mead Methods

The Nelder-Mead simplex (or amoeba method) expands and contracts an n-dimensional simplex within an unconstrained parameter space, following a 'downhill' gradient. The implementation here draws a random starting point from a normal distribution, informed by the starting bounds for each parameter.

The Nelder-Mead simplex is most appropriate for simpler likelihood surfaces. It can be used as follows:

let settingsNM = Amoeba.Solver.Default
// The default settings are:
// { Alpha = 1.0; Sigma = 0.5; Gamma = 2.0; Rho = -0.5; Size = 3 }

let single: EstimationEngine.Optimise<float> = Amoeba.Solver.solve settingsNM
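
To plug this single solver into an estimation engine, wrap it in an Optimiser case. A sketch, assuming the detached-space case is appropriate here (the solver receives the constrained domain directly):

let engineNM =
    Bristlecone.mkContinuous
    |> Bristlecone.withCustomOptimisation (EstimationEngine.InDetachedSpace single)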

A single Nelder-Mead solver is highly subject to local minima. To reduce their prevalence, Bristlecone includes a swarm implementation of the Nelder-Mead simplex, which creates an ensemble of 20 amoeba, all of which crawl the likelihood surface. After n iterations, all amoeba whose values fall above the 80th percentile of the negative log-likelihood values are discarded. The procedure continues for five levels. The swarm mode can be used as follows:

let levels = 5
let individuals = 20

Amoeba.swarm levels individuals settingsNM
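
Because Amoeba.swarm returns an Optimiser, it can be attached directly to an estimation engine:

Bristlecone.mkContinuous
|> Bristlecone.withCustomOptimisation (Amoeba.swarm levels individuals settingsNM)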

Monte Carlo methods

Markov chain Monte Carlo (MCMC) methods are most often applied in Bayesian analyses, but can also be used for optimisation to identify the minimum of the negative log-likelihood. Within Bristlecone, a number of variations of Monte Carlo methods are included.

Often it is appropriate to use multiple methods in combination: initially, one or more tuning algorithms are used to enhance the performance of the search; once the tuning phase is complete, a search is conducted to identify the minimum; finally, a sampling algorithm may be run to explore the distribution around the minimum.

Random Walk Metropolis Hastings

The random walk algorithm may optionally be passed a list of tuning steps (TuneStep values), which will be run before the final random walk.

// Random walk with no tuning steps
MonteCarlo.randomWalk []

// Random walk with 50,000 iterations of tuning, during
// which the individual parameter jump sizes are scaled
// every 500 iterations.
[ MonteCarlo.TuneMethod.Scale, 500, EndConditions.afterIteration 50000 ]
|> MonteCarlo.randomWalk
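
Tuning methods may also be chained. A sketch, with illustrative step lengths, that first scales individual jump sizes and then tunes the covariance structure before the final walk:

[ MonteCarlo.TuneMethod.Scale, 500, EndConditions.afterIteration 25000
  MonteCarlo.TuneMethod.Covariance 0.250, 500, EndConditions.afterIteration 50000 ]
|> MonteCarlo.randomWalk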

Adaptive Metropolis (AM)

The AM algorithm continuously tunes the covariance structure of the jump distribution based on the size of jumps in the recently-explored parameter space.

let weighting = 0.250 // Weight to give to recent history versus existing covariance structure
let frequency = 200 // Tune the covariance structure every 200 iterations

MonteCarlo.adaptiveMetropolis weighting frequency

Filzbach-style Monte Carlo

The Filzbach algorithm was introduced by Drew Purves of Microsoft Research, Cambridge, and was designed for problems in ecological research. It contains a burn-in phase and a sampling phase. Four settings are required: the length of the burn-in phase (which can be any Bristlecone EndCondition); the minimum and maximum scale changes that can occur within the tuning phase, based on the ranges given for each parameter in the model definition; and the number of changes after which to conduct parameter tuning.

open Bristlecone.Optimisation.MonteCarlo.Filzbach

let settingsFB =
    { TuneAfterChanges = 20
      MaxScaleChange = 100.
      MinScaleChange = 0.01
      BurnLength = EndConditions.afterIteration 100000 }

filzbach settingsFB
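
A sketch of attaching the Filzbach-style sampler to an estimation engine:

Bristlecone.mkContinuous
|> Bristlecone.withCustomOptimisation (filzbach settingsFB)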

'Automatic' optimisation

An implementation similar to that proposed by Yang and Rosenthal: "Automatically Tuned General-Purpose MCMC via New Adaptive Diagnostics".

MonteCarlo.``Automatic (Adaptive Diagnostics)``

Adaptive-Metropolis-within Gibbs

An adaptive Metropolis-within-Gibbs sampler that tunes the variance of each parameter according to the per-parameter acceptance rate. Reference: Bai Y (2009). "An Adaptive Directional Metropolis-within-Gibbs Algorithm." Technical report, Department of Statistics, University of Toronto.

MonteCarlo.``Adaptive-Metropolis-within Gibbs``

Metropolis-within Gibbs

A non-adaptive Metropolis-within-Gibbs sampler. Each parameter is updated individually, unlike in the random walk algorithm.

MonteCarlo.``Metropolis-within Gibbs``