From 158f8deed9937c2a1d50f91a6e3500284e3a0ccd Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Fri, 2 Feb 2024 17:42:17 +0000
Subject: [PATCH] build based on 0454929

---
 previews/PR571/.documenter-siteinfo.json           |  2 +-
 previews/PR571/api/index.html                      | 86 +++++++++----------
 previews/PR571/index.html                          |  2 +-
 previews/PR571/tutorials/prob-interface/index.html |  2 +-
 4 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/previews/PR571/.documenter-siteinfo.json b/previews/PR571/.documenter-siteinfo.json
index 5063a905a..ecbcd0e14 100644
--- a/previews/PR571/.documenter-siteinfo.json
+++ b/previews/PR571/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-02-02T16:57:10","documenter_version":"1.2.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-02-02T17:42:12","documenter_version":"1.2.1"}}
\ No newline at end of file
diff --git a/previews/PR571/api/index.html b/previews/PR571/api/index.html
index a3d8b8e8b..5dc817f02 100644
--- a/previews/PR571/api/index.html
+++ b/previews/PR571/api/index.html
@@ -1,7 +1,7 @@

API

Part of the API of DynamicPPL is defined in the more lightweight interface package AbstractPPL.jl and reexported here.

Model

Macros

A core component of DynamicPPL is the @model macro. It can be used to define probabilistic models in an intuitive way by specifying random variables and their distributions with ~ statements. These statements are rewritten by @model as calls of internal functions for sampling the variables and computing their log densities.

DynamicPPL.@modelMacro
@model(expr[, warn = false])

Macro to specify a probabilistic model.

If warn is true, a warning is displayed if internal variable names are used in the model definition.

Examples

Model definition:

@model function model(x, y = 42)
     ...
end

To generate a Model, call model(xvalue) or model(xvalue, yvalue).

source
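As an illustrative sketch of this pattern (assuming DynamicPPL and Distributions are loaded; the model demo and the latent variable σ below are made up for the example, not part of the docstring above):

```julia
using DynamicPPL, Distributions

# A model with a required argument `x` and a default argument `y`.
@model function demo(x, y = 42)
    σ ~ Exponential(1)   # illustrative latent variable
    x ~ Normal(0, σ)     # `x` is observed, since it is passed as an argument
    return y
end

model = demo(1.5)        # uses the default y = 42
model2 = demo(1.5, 10)   # overrides the default
```

Calling `model()` then evaluates the model once, sampling any unobserved variables.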

One can nest models and call another model inside the model function with @submodel.

DynamicPPL.@submodelMacro
@submodel model
 @submodel ... = model

Run a Turing model nested inside of a Turing model.

Examples

julia> @model function demo1(x)
            x ~ Normal()
            return 1 + abs(x)
@@ -17,7 +17,7 @@
 false

We can check that the log joint probability of the model accumulated in vi is correct:

julia> x = vi[@varname(x)];
 
 julia> getlogp(vi) ≈ logpdf(Normal(), x) + logpdf(Uniform(0, 1 + abs(x)), 0.4)
true
source
@submodel prefix=... model
 @submodel prefix=... ... = model

Run a Turing model nested inside of a Turing model and add "prefix." as a prefix to all random variables inside of the model.

Valid expressions for prefix=... are:

  • prefix=false: no prefix is used.
  • prefix=true: attempt to automatically determine the prefix from the left-hand side ... = model by first converting into a VarName, and then calling Symbol on this.
  • prefix=expression: results in the prefix Symbol(expression).

The prefix makes it possible to run the same Turing model multiple times while keeping track of all random variables correctly.

Examples

Example models

julia> @model function demo1(x)
            x ~ Normal()
            return 1 + abs(x)
@@ -94,7 +94,7 @@
 julia> # (×) Automatic prefixing without a left-hand side expression does not work!
        @model submodel_prefix_error() = @submodel prefix=true inner()
 ERROR: LoadError: cannot automatically prefix with no left-hand side
[...]

Notes

  • The choice prefix=expression means that the prefixing will incur a runtime cost. This is also the case for prefix=true, depending on whether the expression on the right-hand side of ... = model requires runtime information or not, e.g. x = model will result in the static prefix x, while x[i] = model will be resolved at runtime.
source

Type

A Model can be created by calling the model function, as defined by @model.

DynamicPPL.ModelType
struct Model{F,argnames,defaultnames,missings,Targs,Tdefaults}
     f::F
     args::NamedTuple{argnames,Targs}
     defaults::NamedTuple{defaultnames,Tdefaults}
@@ -105,7 +105,7 @@
 Model{typeof(f),(:x, :y),(:x,),(),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
 
 julia> Model{(:y,)}(f, (x = 1.0, y = 2.0), (x = 42,)) # with special definition of missings
Model{typeof(f),(:x, :y),(:x,),(:y,),Tuple{Float64,Float64},Tuple{Int64}}(f, (x = 1.0, y = 2.0), (x = 42,))
source

Models are callable structs.

DynamicPPL.ModelMethod
(model::Model)([rng, varinfo, sampler, context])

Sample from the model using the sampler with random number generator rng and the context, and store the sample and log joint probability in varinfo.

The method resets the log joint probability of varinfo and increases the evaluation number of sampler.

source

Basic properties of a model can be accessed with getargnames, getmissings, and nameof.
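For example (a sketch; the model below is illustrative):

```julia
using DynamicPPL, Distributions

@model function demo(x)
    m ~ Normal()
    x ~ Normal(m, 1)
end

model = demo(1.0)

getargnames(model)  # (:x,)
getmissings(model)  # () — nothing is treated as missing here
nameof(model)       # :demo
```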

Evaluation

With rand one can draw samples from the prior distribution of a Model.

Base.randFunction
rand([rng=Random.default_rng()], [T=NamedTuple], model::Model)

Generate a sample of type T from the prior distribution of the model.

source
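For instance (a sketch; the coinflip model is illustrative):

```julia
using DynamicPPL, Distributions, Random

@model function coinflip()
    p ~ Beta(2, 2)
end

nt = rand(coinflip())                      # a NamedTuple draw from the prior
nt2 = rand(Random.Xoshiro(42), coinflip()) # reproducible draw with an explicit RNG
```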

One can also evaluate the log prior, log likelihood, and log joint probability.
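These satisfy logjoint = logprior + loglikelihood; a minimal sketch (the model is illustrative):

```julia
using DynamicPPL, Distributions

@model function demo(x)
    m ~ Normal()
    x ~ Normal(m, 1)
end

model = demo(0.5)
θ = (m = 0.2,)

lp = logprior(model, θ)       # log density of the prior at m = 0.2
ll = loglikelihood(model, θ)  # log density of x = 0.5 given m = 0.2
lj = logjoint(model, θ)
lj ≈ lp + ll                  # true
```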

DynamicPPL.logpriorFunction
logprior(model::Model, varinfo::AbstractVarInfo)

Return the log prior probability of variables varinfo for the probabilistic model.

See also logjoint and loglikelihood.

source
logprior(model::Model, chain::AbstractMCMC.AbstractChains)

Return an array of log prior probabilities evaluated at each sample in an MCMC chain.

Examples

julia> using MCMCChains, Distributions
 
 julia> @model function demo_model(x)
            s ~ InverseGamma(2, 3)
@@ -118,7 +118,7 @@
 julia> # construct a chain of samples using MCMCChains
        chain = Chains(rand(10, 2, 3), [:s, :m]);
 
julia> logprior(demo_model([1., 2.]), chain);
source
logprior(model::Model, θ)

Return the log prior probability of variables θ for the probabilistic model.

See also logjoint and loglikelihood.

Examples

julia> @model function demo(x)
            m ~ Normal()
            for i in eachindex(x)
                x[i] ~ Normal(m, 1.0)
@@ -136,7 +136,7 @@
 
 julia> # Truth.
        logpdf(Normal(), 100.0)
-5000.918938533205
source
StatsAPI.loglikelihoodFunction
loglikelihood(model::Model, varinfo::AbstractVarInfo)

Return the log likelihood of variables varinfo for the probabilistic model.

See also logjoint and logprior.

source
loglikelihood(model::Model, chain::AbstractMCMC.AbstractChains)

Return an array of log likelihoods evaluated at each sample in an MCMC chain.

Examples

julia> using MCMCChains, Distributions
 
 julia> @model function demo_model(x)
            s ~ InverseGamma(2, 3)
@@ -149,7 +149,7 @@
 julia> # construct a chain of samples using MCMCChains
        chain = Chains(rand(10, 2, 3), [:s, :m]);
 
julia> loglikelihood(demo_model([1., 2.]), chain);
source
loglikelihood(model::Model, θ)

Return the log likelihood of variables θ for the probabilistic model.

See also logjoint and logprior.

Examples

julia> @model function demo(x)
            m ~ Normal()
            for i in eachindex(x)
                x[i] ~ Normal(m, 1.0)
@@ -167,7 +167,7 @@
 
 julia> # Truth.
        logpdf(Normal(100.0, 1.0), 1.0)
-4901.418938533205
source
DynamicPPL.logjointFunction
logjoint(model::Model, varinfo::AbstractVarInfo)

Return the log joint probability of variables varinfo for the probabilistic model.

See also logprior and loglikelihood.

source
logjoint(model::Model, chain::AbstractMCMC.AbstractChains)

Return an array of log joint probabilities evaluated at each sample in an MCMC chain.

Examples

julia> using MCMCChains, Distributions
 
 julia> @model function demo_model(x)
            s ~ InverseGamma(2, 3)
@@ -180,7 +180,7 @@
 julia> # construct a chain of samples using MCMCChains
        chain = Chains(rand(10, 2, 3), [:s, :m]);
 
julia> logjoint(demo_model([1., 2.]), chain);
source
logjoint(model::Model, θ)

Return the log joint probability of variables θ for the probabilistic model.

See also logprior and loglikelihood.

Examples

julia> @model function demo(x)
            m ~ Normal()
            for i in eachindex(x)
                x[i] ~ Normal(m, 1.0)
@@ -198,7 +198,7 @@
 
 julia> # Truth.
        logpdf(Normal(100.0, 1.0), 1.0) + logpdf(Normal(), 100.0)
-9902.33787706641
source

LogDensityProblems.jl interface

The LogDensityProblems.jl interface is also supported by simply wrapping a Model in a DynamicPPL.LogDensityFunction:

DynamicPPL.LogDensityFunctionType
LogDensityFunction

A callable representing a log density function of a model.

Fields

  • varinfo: varinfo used for evaluation

  • model: model used for evaluation

  • context: context used for evaluation

Examples

julia> using Distributions
 
 julia> using DynamicPPL: LogDensityFunction, contextualize
 
@@ -231,7 +231,7 @@
        f_prior = LogDensityFunction(contextualize(model, DynamicPPL.PriorContext()), VarInfo(model));
 
 julia> LogDensityProblems.logdensity(f_prior, [0.0]) == logpdf(Normal(), 0.0)
true
source

Condition and decondition

A Model can be conditioned on a set of observations with AbstractPPL.condition or its alias |.
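A small sketch of both spellings (the model is illustrative):

```julia
using DynamicPPL, Distributions

@model function demo()
    m ~ Normal()
    x ~ Normal(m, 1)
end

cond_model  = demo() | (x = 1.0,)          # `x` is now treated as observed
cond_model2 = condition(demo(); x = 1.0)   # equivalent, via `condition`
```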

Base.:|Method
model | (x = 1.0, ...)

Return a Model which now treats variables on the right-hand side as observations.

See condition for more information and examples.

source

AbstractPPL.conditionFunction
condition(model::Model; values...)
 condition(model::Model, values::NamedTuple)

Return a Model which now treats the variables in values as observations.

See also: decondition, conditioned

Limitations

This currently does not work with variables that are provided to the model as arguments; e.g., @model function demo(x) ... end means that condition will not affect the variable x.

Therefore, if one wants to use condition and decondition, one should not specify any random variables as arguments.

This is done for the sake of backwards compatibility.

Examples

Simple univariate model

julia> using Distributions
 
 julia> @model function demo()
@@ -340,8 +340,8 @@
 
 julia> keys(VarInfo(demo_outer_prefix()))
 1-element Vector{VarName{Symbol("inner.m"), Setfield.IdentityLens}}:
 inner.m

From this we can tell what the correct way to condition m within demo_inner is in the two different models.

source
condition([context::AbstractContext,] values::NamedTuple)
condition([context::AbstractContext]; values...)

Return ConditionContext with values and context if values is non-empty, otherwise return context which is DefaultContext by default.

See also: decondition

source
DynamicPPL.conditionedFunction
conditioned(model::Model)

Return the conditioned values in model.

Examples

julia> using Distributions
 
 julia> using DynamicPPL: conditioned, contextualize
 
@@ -379,7 +379,7 @@
 1.0
 
 julia> keys(VarInfo(cm)) # <= no variables are sampled
VarName[]
source
conditioned(context::AbstractContext)

Return a NamedTuple of values that are conditioned on under context.

Note that this will recursively traverse the context stack and return a merged version of the condition values.

source

Similarly, one can specify with AbstractPPL.decondition that certain, or all, random variables are not observed.

AbstractPPL.deconditionFunction
decondition(model::Model)
 decondition(model::Model, variables...)

Return a Model for which variables... are not considered observations. If no variables are provided, then all variables currently considered observations will no longer be.

This is essentially the inverse of condition. This also means that it suffers from the same limitations.

Note that currently this only supports variables that were assigned explicit values via condition.

Examples

julia> using Distributions
 
 julia> @model function demo()
@@ -457,7 +457,7 @@
        deconditioned_model_2 = deconditioned_model | (@varname(m[1]) => missing);
 
 julia> m = deconditioned_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
true
source
decondition(context::AbstractContext, syms...)

Return context but with syms no longer conditioned on.

Note that this recursively traverses contexts, deconditioning all along the way.

See also: condition

source

Fixing and unfixing

We can also fix a collection of variables in a Model to certain values using fix.

This might seem quite similar to the aforementioned condition and its siblings, but they are indeed different operations:

  • conditioned variables are considered to be observations, and are thus included in the computation of logjoint and loglikelihood, but not in logprior.
  • fixed variables are considered to be constant, and are thus not included in any log-probability computations.

The differences are more clearly spelled out in the docstring of fix below.
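The contrast can be sketched directly (the model is illustrative; the final relation mirrors the docstring example):

```julia
using DynamicPPL, Distributions

@model function demo()
    m ~ Normal()
    x ~ Normal(m, 1)
end

model_conditioned = demo() | (m = 1.0,)     # `m` is an observation
model_fixed       = fix(demo(); m = 1.0)    # `m` is a constant

# Conditioning keeps the log-probability of `m`; fixing drops it:
logjoint(model_fixed, (x = 1.0,)) + logpdf(Normal(), 1.0) ≈
    logjoint(model_conditioned, (x = 1.0,))
```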

DynamicPPL.fixFunction
fix(model::Model; values...)
 fix(model::Model, values::NamedTuple)

Return a Model which now treats the variables in values as fixed.

See also: unfix, fixed

Examples

Simple univariate model

julia> using Distributions
 
 julia> @model function demo()
@@ -579,8 +579,8 @@
 
 julia> # And the difference is the missing log-probability of `m`:
        logjoint(model_fixed, (x=1.0,)) + logpdf(Normal(), 1.0) == logjoint(model_conditioned, (x=1.0,))
true
source
fix([context::AbstractContext,] values::NamedTuple)
fix([context::AbstractContext]; values...)

Return FixedContext with values and context if values is non-empty, otherwise return context which is DefaultContext by default.

See also: unfix

source
DynamicPPL.fixedFunction
fixed(model::Model)

Return the fixed values in model.

Examples

julia> using Distributions
 
 julia> using DynamicPPL: fixed, contextualize
 
@@ -618,7 +618,7 @@
 1.0
 
 julia> keys(VarInfo(cm)) # <= no variables are sampled
VarName[]
source
fixed(context::AbstractContext)

Return the values that are fixed under context.

Note that this will recursively traverse the context stack and return a merged version of the fix values.

source

The difference between fix and condition is described in the docstring of fix above.

Similarly, we can unfix variables, i.e. return them to their original meaning:

DynamicPPL.unfixFunction
unfix(model::Model)
 unfix(model::Model, variables...)

Return a Model for which variables... are not considered fixed. If no variables are provided, then all variables currently considered fixed will no longer be.

This is essentially the inverse of fix. This also means that it suffers from the same limitations.

Note that currently this only supports variables that were assigned explicit values via fix.

Examples

julia> using Distributions
 
 julia> @model function demo()
@@ -696,7 +696,7 @@
        unfixed_model_2 = fix(unfixed_model, @varname(m[1]) => missing);
 
 julia> m = unfixed_model_2(); (m[1] ≠ 1.0 && m[2] == 2.0)
true
source
unfix(context::AbstractContext, syms...)

Return context but with syms no longer fixed.

Note that this recursively traverses contexts, unfixing all along the way.

See also: fix

source

Utilities

It is possible to manually increase (or decrease) the accumulated log density from within a model function.
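A compact sketch of the idea (adapted from the docstring that follows; the model is illustrative):

```julia
using DynamicPPL, Distributions

@model function demo(x)
    μ ~ Normal()
    # Add a custom likelihood term for all observations at once,
    # instead of writing the observe statement `x .~ Normal(μ, 1)`:
    @addlogprob! loglikelihood(Normal(μ, 1), x)
end

model = demo([1.0, 2.0])
```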

DynamicPPL.@addlogprob!Macro
@addlogprob!(ex)

Add the result of the evaluation of ex to the joint log probability.

Examples

This macro allows you to include arbitrary terms in the likelihood:

julia> myloglikelihood(x, μ) = loglikelihood(Normal(μ, 1), x);
 
 julia> @model function demo(x)
            μ ~ Normal()
@@ -733,7 +733,7 @@
 true
 
 julia> loglikelihood(demo(x), (μ=0.2,)) ≈ myloglikelihood(x, 0.2)
true
source

Return values of the model function for a collection of samples can be obtained with generated_quantities.

DynamicPPL.generated_quantitiesFunction
generated_quantities(model::Model, chain::AbstractChains)

Execute model for each of the samples in chain and return an array of the values returned by the model for each sample.

Examples

General

Often you might have additional quantities computed inside the model that you want to inspect, e.g.

@model function demo(x)
     # sample and observe
     θ ~ Prior()
     x ~ Likelihood()
@@ -774,7 +774,7 @@
  (0.09270081916291417,)
  (0.043088571494005024,)
  (-0.16489786710222099,)
 (-0.16489786710222099,)
source
generated_quantities(model::Model, parameters::NamedTuple)
 generated_quantities(model::Model, values, keys)

Execute model with variables keys set to values and return the values returned by the model.

If a NamedTuple is given, keys=keys(parameters) and values=values(parameters).

Example

julia> using DynamicPPL, Distributions
 
@@ -797,7 +797,7 @@
 (0.0,)
 
 julia> generated_quantities(model, values(parameters), keys(parameters))
(0.0,)
source

For a chain of samples, one can compute the pointwise log-likelihoods of each observed random variable with pointwise_loglikelihoods.

DynamicPPL.pointwise_loglikelihoodsFunction
pointwise_loglikelihoods(model::Model, chain::Chains, keytype = String)

Runs model on each sample in chain, returning an OrderedDict{String, Matrix{Float64}} with keys corresponding to symbols of the observations, and values being matrices of shape (num_chains, num_samples).

keytype specifies the type of the keys used in the returned OrderedDict. Currently, only String and VarName are supported.

Notes

Say y is a Vector of n i.i.d. Normal(μ, σ) variables, with μ and σ both being <:Real. Then the observe (i.e. when the left-hand side is an observation) statements can be implemented in three ways:

  1. using a for loop:
for i in eachindex(y)
     y[i] ~ Normal(μ, σ)
 end
  2. using .~:
y .~ Normal(μ, σ)
  3. using MvNormal:
y ~ MvNormal(fill(μ, n), σ^2 * I)

In (1) and (2), y will be treated as a collection of n i.i.d. 1-dimensional variables, while in (3) y will be treated as a single n-dimensional observation.

This is important to keep in mind, in particular if the computation is used for downstream computations.

Examples

From chain

julia> using DynamicPPL, Turing
 
@@ -847,7 +847,7 @@
 julia> m = demo([1.0; 1.0]);
 
 julia> ℓ = pointwise_loglikelihoods(m, VarInfo(m)); first.((ℓ[@varname(x[1])], ℓ[@varname(x[2])]))
(-1.4189385332046727, -1.4189385332046727)
source

For converting a chain into a format that can more easily be fed into a Model again, for example using condition, you can use value_iterator_from_chain.

DynamicPPL.value_iterator_from_chainFunction
value_iterator_from_chain(model::Model, chain)
 value_iterator_from_chain(varinfo::AbstractVarInfo, chain)

Return an iterator over the values in chain for each variable in model/varinfo.

Example

julia> using MCMCChains, DynamicPPL, Distributions, StableRNGs
 
 julia> rng = StableRNG(42);
@@ -891,7 +891,7 @@
        conditioned_model = model | first(iter);
 
 julia> conditioned_model()  # <= results in same values as the `first(iter)` above
(0.5805148626851955, 0.7393275279160691)
source

Sometimes it can be useful to extract the priors of a model. This is possible using extract_priors.

DynamicPPL.extract_priorsFunction
extract_priors([rng::Random.AbstractRNG, ]model::Model)

Extract the priors from a model.

This is done by sampling from the model and recording the distributions that are used to generate the samples.

Warning

Because the extraction is done by execution of the model, there are several caveats:

  1. If one variable, say, y ~ Normal(0, x), where x ~ Normal() is also a random variable, then the extracted prior will have different parameters in every extraction!
  2. If the model does not have static support, say, n ~ Categorical(1:10); x ~ MvNormal(zeros(n), I), then the extracted priors themselves will be different between extractions, not just their parameters.

Both of these caveats are demonstrated below.

Examples

Changing parameters

julia> using Distributions, StableRNGs
 
 julia> rng = StableRNG(42);
 
@@ -921,7 +921,7 @@
 6
 
 julia> length(extract_priors(rng, model)[@varname(x)])
9
source
DynamicPPL.NamedDistType

A named distribution that carries the name of the random variable with it.

source

Testing Utilities

DynamicPPL provides several demo models and helpers for testing samplers in the DynamicPPL.TestUtils submodule.
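For instance, a custom sampler might be checked against the demo models like this (a sketch; MySampler is hypothetical, so the call is shown commented out):

```julia
using DynamicPPL
using DynamicPPL.TestUtils: DEMO_MODELS, test_sampler

# `MySampler` is a placeholder for your `AbstractMCMC.AbstractSampler`:
# test_sampler(DEMO_MODELS, MySampler(), 10_000; atol = 1e-1)
length(DEMO_MODELS)  # number of demo models available
```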

DynamicPPL.TestUtils.test_samplerFunction
test_sampler(models, sampler, args...; kwargs...)

Test that sampler produces correct marginal posterior means on each model in models.

In short, this method iterates through models, calls AbstractMCMC.sample on the model and sampler to produce a chain, and then checks marginal_mean_of_samples(chain, vn) for every (leaf) varname vn against the corresponding value returned by posterior_mean for each model.

To change how comparison is done for a particular chain type, one can overload marginal_mean_of_samples for the corresponding type.

Arguments

  • models: A collection of instances of DynamicPPL.Model to test on.
  • sampler: The AbstractMCMC.AbstractSampler to test.
  • args...: Arguments forwarded to sample.

Keyword arguments

  • varnames_filter: A filter to apply to varnames(model), allowing comparison for only a subset of the varnames.
  • atol=1e-1: Absolute tolerance used in @test.
  • rtol=1e-3: Relative tolerance used in @test.
  • kwargs...: Keyword arguments forwarded to sample.
source
DynamicPPL.TestUtils.test_sampler_on_demo_modelsFunction
test_sampler_on_demo_models(meanfunction, sampler, args...; kwargs...)

Test sampler on every model in DEMO_MODELS.

This is just a proxy for test_sampler(meanfunction, DEMO_MODELS, sampler, args...; kwargs...).

source
DynamicPPL.TestUtils.test_sampler_continuousFunction
test_sampler_continuous(sampler, args...; kwargs...)

Test that sampler produces the correct marginal posterior means on all models in DEMO_MODELS.

As of right now, this is just an alias for test_sampler_on_demo_models.

source
DynamicPPL.TestUtils.marginal_mean_of_samplesFunction
marginal_mean_of_samples(chain, varname)

Return the mean of variable represented by varname in chain.

source
DynamicPPL.TestUtils.DEMO_MODELSConstant

A collection of models corresponding to the posterior distribution defined by the generative process

s ~ InverseGamma(2, 3)
 m ~ Normal(0, √s)
 1.5 ~ Normal(m, √s)
 2.0 ~ Normal(m, √s)

or by

s[1] ~ InverseGamma(2, 3)
@@ -933,9 +933,9 @@
 mean(m) == 7 / 6

And for the multivariate one (the latter one):

mean(s[1]) == 19 / 8
 mean(m[1]) == 3 / 4
 mean(s[2]) == 8 / 3
-mean(m[2]) == 1
source
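The stated means can be checked by hand: the demo process is a conjugate Normal–InverseGamma model, so its posterior is available in closed form. A minimal sketch of the update for the univariate model (plain Julia; the variable names here are illustrative, not part of DynamicPPL):

```julia
# Conjugate Normal-InverseGamma update for the univariate demo model:
#   s ~ InverseGamma(α₀ = 2, β₀ = 3), m | s ~ Normal(μ₀ = 0, √s), xᵢ ~ Normal(m, √s)
x = [1.5, 2.0]
α₀, β₀, μ₀, κ₀ = 2.0, 3.0, 0.0, 1.0
n, x̄ = length(x), sum(x) / length(x)

κₙ = κ₀ + n
μₙ = (κ₀ * μ₀ + n * x̄) / κₙ               # posterior mean of m
αₙ = α₀ + n / 2
βₙ = β₀ + sum(abs2, x .- x̄) / 2 + κ₀ * n * (x̄ - μ₀)^2 / (2κₙ)
s_mean = βₙ / (αₙ - 1)                     # mean of InverseGamma(αₙ, βₙ)

# μₙ == 7 / 6 and s_mean == 49 / 24, matching posterior_mean above.
```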

For every demo model, one can define the true log prior, log likelihood, and log joint probabilities.

DynamicPPL.TestUtils.logprior_trueFunction
logprior_true(model, args...)

Return the logprior of model for args.

This should generally be implemented by hand for every specific model.

See also: logjoint_true, loglikelihood_true.

source
DynamicPPL.TestUtils.loglikelihood_trueFunction
loglikelihood_true(model, args...)

Return the loglikelihood of model for args.

This should generally be implemented by hand for every specific model.

See also: logjoint_true, logprior_true.

source
DynamicPPL.TestUtils.logjoint_trueFunction
logjoint_true(model, args...)

Return the logjoint of model for args.

Defaults to logprior_true(model, args...) + loglikelihood_true(model, args...).

This should generally be implemented by hand for every specific model so that the returned value can be used as a ground-truth for testing things like:

  1. Validity of evaluation of model using a particular implementation of AbstractVarInfo.
  2. Validity of a sampler when combined with DynamicPPL by running the sampler twice: once targeting ground-truth functions, e.g. logjoint_true, and once targeting model.

And more.

See also: logprior_true, loglikelihood_true.

source
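For instance, for the univariate demo process above, hand-written ground-truth densities might be sketched as follows (assumes Distributions.jl is available; the `*_demo` helper names are illustrative, not part of DynamicPPL):

```julia
using Distributions

# s ~ InverseGamma(2, 3); m ~ Normal(0, √s); observations 1.5 and 2.0
logprior_true_demo(s, m) = logpdf(InverseGamma(2, 3), s) + logpdf(Normal(0, sqrt(s)), m)
loglikelihood_true_demo(s, m) = sum(logpdf.(Normal(m, sqrt(s)), [1.5, 2.0]))
logjoint_true_demo(s, m) = logprior_true_demo(s, m) + loglikelihood_true_demo(s, m)
```

Such closed-form densities can then serve as the ground truth against which model evaluation is compared.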

And in the case where the model includes constrained variables, it can also be useful to define

DynamicPPL.TestUtils.logprior_true_with_logabsdet_jacobianFunction
logprior_true_with_logabsdet_jacobian(model::Model, args...)

Return a tuple (args_unconstrained, logprior_unconstrained) of model for args....

Unlike logprior_true, the returned logprior computation includes the log-absdet-jacobian adjustment, thus computing logprior for the unconstrained variables.

Note that args are assumed to be in the support of model, while args_unconstrained will be unconstrained.

See also: logprior_true.

source
DynamicPPL.TestUtils.logjoint_true_with_logabsdet_jacobianFunction
logjoint_true_with_logabsdet_jacobian(model::Model, args...)

Return a tuple (args_unconstrained, logjoint) of model for args.

Unlike logjoint_true, the returned logjoint computation includes the log-absdet-jacobian adjustment, thus computing logjoint for the unconstrained variables.

Note that args are assumed to be in the support of model, while args_unconstrained will be unconstrained.

This should generally not be implemented directly, instead one should implement logprior_true_with_logabsdet_jacobian for a given model.

See also: logjoint_true, logprior_true_with_logabsdet_jacobian.

source

Finally, the following methods can also be of use:

DynamicPPL.TestUtils.varnamesFunction
varnames(model::Model)

Return a collection of VarName as they are expected to appear in the model.

Even though it is recommended to implement this by hand for a particular Model, a default implementation using SimpleVarInfo{<:Dict} is provided.

source
DynamicPPL.TestUtils.posterior_meanFunction
posterior_mean(model::Model)

Return a NamedTuple compatible with varnames(model) where the values represent the posterior mean under model.

"Compatible" means that a varname from varnames(model) can be used to extract the corresponding value using get, e.g. get(posterior_mean(model), varname).

source
DynamicPPL.TestUtils.setup_varinfosFunction
setup_varinfos(model::Model, example_values::NamedTuple, varnames; include_threadsafe::Bool=false)

Return a tuple of instances for different implementations of AbstractVarInfo with each vi, supposedly, satisfying vi[vn] == get(example_values, vn) for vn in varnames.

If include_threadsafe is true, then the returned tuple will also include thread-safe versions of the varinfo instances.

source
DynamicPPL.TestUtils.update_values!!Function
update_values!!(vi::AbstractVarInfo, vals::NamedTuple, vns)

Return instance similar to vi but with vns set to values from vals.

source
DynamicPPL.TestUtils.test_valuesFunction
test_values(vi::AbstractVarInfo, vals::NamedTuple, vns)

Test that vi[vn] corresponds to the correct value in vals for every vn in vns.

source

Advanced

Variable names

Names and possibly nested indices of variables are described with AbstractPPL.VarName. They can be defined with AbstractPPL.@varname. Please see the documentation of AbstractPPL.jl for further information.

Data Structures of Variables

DynamicPPL provides different data structures for samples from the model and their log density. All of them are subtypes of AbstractVarInfo.

DynamicPPL.AbstractVarInfoType
AbstractVarInfo

Abstract supertype for data structures that capture random variables when executing a probabilistic model and accumulate log densities such as the log likelihood or the log joint probability of the model.

See also: VarInfo, SimpleVarInfo.

source

Common API

Accumulation of log-probabilities

DynamicPPL.getlogpFunction
getlogp(vi::AbstractVarInfo)

Return the log of the joint probability of the observed data and parameters sampled in vi.

source
DynamicPPL.setlogp!!Function
setlogp!!(vi::AbstractVarInfo, logp)

Set the log of the joint probability of the observed data and parameters sampled in vi to logp, mutating if it makes sense.

source
DynamicPPL.acclogp!!Function
acclogp!!([context::AbstractContext, ]vi::AbstractVarInfo, logp)

Add logp to the value of the log of the joint probability of the observed data and parameters sampled in vi, mutating if it makes sense.

source
DynamicPPL.resetlogp!!Function
resetlogp!!(vi::AbstractVarInfo)

Reset the value of the log of the joint probability of the observed data and parameters sampled in vi to 0, mutating if it makes sense.

source
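These accumulation functions are typically used together; a minimal sketch (assumes DynamicPPL is loaded; the `(m = 1.0,)` realization is illustrative):

```julia
vi = SimpleVarInfo((m = 1.0,))  # varinfo holding a single realization
vi = resetlogp!!(vi)            # log-probability back to zero
vi = acclogp!!(vi, -1.5)        # accumulate one log-density contribution
vi = acclogp!!(vi, -0.5)        # ...and another
getlogp(vi)                     # -2.0
```

Note that the result of each `!!` function is rebound, since these functions mutate only when it makes sense for the underlying varinfo.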

Variables and their realizations

Base.keysFunction
keys(vi::AbstractVarInfo)

Return an iterator over all vns in vi.

source
Base.getindexFunction
getindex(vi::AbstractVarInfo, vn::VarName[, dist::Distribution])
-getindex(vi::AbstractVarInfo, vns::Vector{<:VarName}[, dist::Distribution])

Return the current value(s) of vn (vns) in vi in the support of its (their) distribution(s).

If dist is specified, the value(s) will be reshaped accordingly.

See also: getindex_raw(vi::AbstractVarInfo, vn::VarName, dist::Distribution)

source
DynamicPPL.getindex_rawFunction
getindex_raw(vi::AbstractVarInfo, vn::VarName[, dist::Distribution])
-getindex_raw(vi::AbstractVarInfo, vns::Vector{<:VarName}[, dist::Distribution])

Return the current value(s) of vn (vns) in vi.

If dist is specified, the value(s) will be reshaped accordingly.

See also: getindex(vi::AbstractVarInfo, vn::VarName, dist::Distribution)

Note

The difference between getindex(vi, vn, dist) and getindex_raw is that getindex will also transform the value(s) to the support of the distribution(s). This is not the case for getindex_raw.

source
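As a hedged sketch of the difference (assumes DynamicPPL and Distributions are loaded and `model` is an instantiated model with `s ~ InverseGamma(2, 3)`; names are illustrative):

```julia
vi = link!!(VarInfo(model), model)  # store `s` in unconstrained (linked) space
raw = getindex_raw(vi, @varname(s)) # value as stored, i.e. unconstrained
val = vi[@varname(s)]               # value transformed back to the support
# For a positive-support variable the link is typically `log`,
# in which case `val ≈ exp(raw)`.
```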
BangBang.push!!Function
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution)

Push a new random variable vn with a sampled value r from a distribution dist to the VarInfo vi, mutating if it makes sense.

source
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution, spl::AbstractSampler)

Push a new random variable vn with a sampled value r sampled with a sampler spl from a distribution dist to VarInfo vi, if it makes sense.

The sampler is passed here to invalidate its cache where defined.

Warning

This method is considered legacy, and is likely to be deprecated in the future.

source
push!!(vi::AbstractVarInfo, vn::VarName, r, dist::Distribution, gid::Selector)

Push a new random variable vn with a sampled value r sampled with a sampler of selector gid from a distribution dist to VarInfo vi.

Warning

This method is considered legacy, and is likely to be deprecated in the future.

source
BangBang.empty!!Function
empty!!(vi::AbstractVarInfo)

Empty the fields of vi.metadata and reset vi.logp[] and vi.num_produce[] to zeros.

This is useful when using a sampling algorithm that assumes an empty vi, e.g. SMC.

source
Base.isemptyFunction
isempty(vi::AbstractVarInfo)

Return true if vi is empty and false otherwise.

source
DynamicPPL.values_asFunction
values_as(varinfo[, Type])

Return the values/realizations in varinfo as Type, if implemented.

If no Type is provided, return values as stored in varinfo.

Examples

SimpleVarInfo with NamedTuple:

julia> data = (x = 1.0, m = [2.0]);
 
 julia> values_as(SimpleVarInfo(data))
 (x = 1.0, m = [2.0])
@@ -1009,11 +1009,11 @@
 julia> values_as(vi, Vector)
 2-element Vector{Real}:
  1.0
- 2.0
source

Transformations

DynamicPPL.AbstractTransformationType
abstract type AbstractTransformation

Represents a transformation to be used in link!! and invlink!!, amongst others.

A concrete implementation of this should implement the following methods:

And potentially:

See also: link!!, invlink!!, maybe_invlink_before_eval!!.

source
DynamicPPL.NoTransformationType
struct NoTransformation <: DynamicPPL.AbstractTransformation

Transformation which applies the identity function.

source
DynamicPPL.DynamicTransformationType
struct DynamicTransformation <: DynamicPPL.AbstractTransformation

Transformation which transforms the variables on a per-need basis during the execution of a given Model.

This is in contrast to StaticTransformation which transforms all variables before the execution of a given Model.

See also: StaticTransformation.

source
DynamicPPL.StaticTransformationType
struct StaticTransformation{F} <: DynamicPPL.AbstractTransformation

Transformation which transforms all variables before the execution of a given Model.

This is done through the maybe_invlink_before_eval!! method.

See also: DynamicTransformation, maybe_invlink_before_eval!!.

Fields

  • bijector::Any: The function, assumed to implement the Bijectors interface, to be applied to the variables
source
DynamicPPL.istransFunction
istrans(vi::AbstractVarInfo[, vns::Union{VarName, AbstractVector{<:VarName}}])

Return true if vi is working in unconstrained space, and false if vi is assuming realizations to be in support of the corresponding distributions.

If vns is provided, then only check if this/these varname(s) are transformed.

Warning

Not all implementations of AbstractVarInfo support transforming only a subset of the variables.

source
DynamicPPL.settrans!!Function
settrans!!(vi::AbstractVarInfo, trans::Bool[, vn::VarName])

Return vi with istrans(vi, vn) evaluating to true.

If vn is not specified, then istrans(vi) evaluates to true for all variables.

source
DynamicPPL.transformationFunction
transformation(vi::AbstractVarInfo)

Return the AbstractTransformation related to vi.

source
Bijectors.linkFunction
link([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
-link([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their linked space without mutating vi, using the transformation t.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, invlink.

source
Bijectors.invlinkFunction
invlink([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
-invlink([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their constrained space without mutating vi, using the (inverse of) transformation t.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, link.

source
DynamicPPL.link!!Function
link!!([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
-link!!([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their linked space, using the transformation t, mutating vi if possible.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, invlink!!.

source
DynamicPPL.invlink!!Function
invlink!!([t::AbstractTransformation, ]vi::AbstractVarInfo, model::Model)
-invlink!!([t::AbstractTransformation, ]vi::AbstractVarInfo, spl::AbstractSampler, model::Model)

Transform the variables in vi to their constrained space, using the (inverse of) transformation t, mutating vi if possible.

If t is not provided, default_transformation(model, vi) will be used.

See also: default_transformation, link!!.

source
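A round-trip between the two spaces can be sketched as follows (assumes DynamicPPL is loaded and `model` is any instantiated Model):

```julia
vi = VarInfo(model)        # realizations in the support of their distributions
istrans(vi)                # false
vi = link!!(vi, model)     # move to unconstrained (linked) space
istrans(vi)                # true
vi = invlink!!(vi, model)  # back to the constrained space
istrans(vi)                # false
```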
DynamicPPL.default_transformationFunction
default_transformation(model::Model[, vi::AbstractVarInfo])

Return the AbstractTransformation currently related to model and, potentially, vi.

source
DynamicPPL.maybe_invlink_before_eval!!Function
maybe_invlink_before_eval!!([t::Transformation,] vi, context, model)

Return a possibly invlinked version of vi.

This will be called prior to model evaluation, allowing one to perform a single invlink!! before evaluation rather than lazily evaluating the transforms on an as-needed basis, as is done with DynamicTransformation.

See also: StaticTransformation, DynamicTransformation.

Examples

julia> using DynamicPPL, Distributions, Bijectors
 
 julia> @model demo() = x ~ Normal()
 demo (generic function with 2 methods)
@@ -1049,7 +1049,7 @@
 
 julia> # Now performs a single `invlink!!` before model evaluation.
        logjoint(model, vi_linked)
--1001.4189385332047
source
DynamicPPL.reconstructFunction
reconstruct([f, ]dist, val)

Reconstruct val so that it's compatible with dist.

If f is also provided, the reconstructed value will be such that f(reconstruct_val) is compatible with dist.

source

Utils

Base.mergeMethod
merge(varinfo, other_varinfos...)

Merge varinfos into one, giving precedence to the right-most varinfo when sensible.

This is particularly useful when combined with subset(varinfo, vns).

See docstring of subset(varinfo, vns) for examples.

source
DynamicPPL.subsetFunction
subset(varinfo::AbstractVarInfo, vns::AbstractVector{<:VarName})

Subset a varinfo to only contain the variables vns.

Warning

The ordering of the variables in the resulting varinfo is not guaranteed to follow the ordering of the variables in varinfo. Hence care must be taken, in particular when used in conjunction with other methods which use the vector representation of the varinfo, e.g. getindex(varinfo, sampler).

Examples

julia> @model function demo()
            s ~ InverseGamma(2, 3)
            m ~ Normal(0, sqrt(s))
            x = Vector{Float64}(undef, 2)
@@ -1132,7 +1132,7 @@
  1.0
  2.0
  3.0
- 4.0

Notes

Type-stability

Warning

This function is only type-stable when vns contains only varnames with the same symbol. For example, [@varname(m[1]), @varname(m[2])] will be type-stable, but [@varname(m[1]), @varname(x)] will not be.

source
DynamicPPL.unflattenFunction
unflatten(original, x::AbstractVector)

Return instance of original constructed from x.

source
unflatten(vi::AbstractVarInfo[, context::AbstractContext], x::AbstractVector)

Return a new instance of vi with the values of x assigned to the variables.

If context is provided, x is assumed to be realizations only for variables not filtered out by context.

source
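A common use of unflatten is round-tripping a varinfo through its vector representation, e.g. when interfacing with gradient-based samplers or optimizers. A minimal sketch (assumes DynamicPPL is loaded; `model` is any instantiated Model):

```julia
vi = VarInfo(model)
x = vi[:]             # flatten all realizations into a Vector
x = x .+ 0.1          # perturb the values, e.g. a gradient step
vi = unflatten(vi, x) # varinfo carrying the perturbed values
```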
DynamicPPL.varname_leavesFunction
varname_leaves(vn::VarName, val)

Return an iterator over all varnames that are represented by vn on val.

Examples

julia> using DynamicPPL: varname_leaves
 
 julia> foreach(println, varname_leaves(@varname(x), rand(2)))
 x[1]
@@ -1147,7 +1147,7 @@
 julia> foreach(println, varname_leaves(@varname(x), x))
 x.y
 x.z[1][1]
-x.z[2][1]
source
DynamicPPL.varname_and_value_leavesFunction
varname_and_value_leaves(vn::VarName, val)

Return an iterator over all varname-value pairs that are represented by vn on val.

Examples

julia> using DynamicPPL: varname_and_value_leaves
+x.z[2][1]
 
 julia> foreach(println, varname_and_value_leaves(@varname(x), 1:2))
 (x[1], 1)
@@ -1188,7 +1188,7 @@
        foreach(println, varname_and_value_leaves(@varname(x), Cholesky([1.0 0.0; 0.0 1.0], 'U', 0)))
 (x.U[1,1], 1.0)
 (x.U[1,2], 0.0)
-(x.U[2,2], 1.0)
source

SimpleVarInfo

DynamicPPL.SimpleVarInfoType
struct SimpleVarInfo{NT, T, C<:DynamicPPL.AbstractTransformation} <: AbstractVarInfo

A simple wrapper of the parameters with a logp field for accumulation of the logdensity.

Currently only implemented for NT<:NamedTuple and NT<:AbstractDict.

Fields

  • values: underlying representation of the realization represented

  • logp: holds the accumulated log-probability

  • transformation: represents whether it assumes variables to be transformed

Notes

The major differences between this and TypedVarInfo are:

  1. SimpleVarInfo does not require linearization.
  2. SimpleVarInfo can use more efficient bijectors.
  3. SimpleVarInfo is only type-stable if NT<:NamedTuple and either a) no indexing is used in tilde-statements, or b) the values have been specified with the correct shapes.

Examples

General usage

julia> using StableRNGs
+(x.U[2,2], 1.0)
 
 julia> @model function demo()
            m ~ Normal()
@@ -1324,19 +1324,19 @@
 
 julia> svi_dict[@varname(m.b)]
 ERROR: type NamedTuple has no field b
-[...]
source
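As a hedged usage sketch (model name and values here are illustrative), a NamedTuple-backed SimpleVarInfo can evaluate a model at fixed parameter values:

```julia
using DynamicPPL, Distributions

@model function demo_svi()
    m ~ Normal()
    x ~ Normal(m, 1)
end

model = demo_svi()

# Evaluate the log joint at fixed values of `m` and `x`.
svi = SimpleVarInfo((m = 0.0, x = 1.0))
_, svi = DynamicPPL.evaluate!!(model, svi, DefaultContext())
getlogp(svi)  # log density of m = 0 plus that of x = 1 given m = 0
```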

VarInfo

Another data structure is VarInfo.

DynamicPPL.VarInfoType
struct VarInfo{Tmeta, Tlogp} <: AbstractVarInfo
+[...]
     metadata::Tmeta
     logp::Base.RefValue{Tlogp}
     num_produce::Base.RefValue{Int}
-end

A light wrapper over one or more instances of Metadata. Let vi be an instance of VarInfo. If vi isa VarInfo{<:Metadata}, then only one Metadata instance is used for all the symbols. VarInfo{<:Metadata} is aliased to UntypedVarInfo. If vi isa VarInfo{<:NamedTuple}, then vi.metadata is a NamedTuple that maps each symbol used on the LHS of ~ in the model to its Metadata instance. The latter allows for the type specialization of vi after the first sampling iteration when all the symbols have been observed. VarInfo{<:NamedTuple} is aliased to TypedVarInfo.

Note: It is the user's responsibility to ensure that each "symbol" is visited at least once whenever the model is called, regardless of any stochastic branching. Each symbol refers to a Julia variable and can be a hierarchical array of many random variables, e.g. x[1] ~ ... and x[2] ~ ... both have the same symbol x.

source
DynamicPPL.TypedVarInfoType
TypedVarInfo(vi::UntypedVarInfo)

This function finds all the unique syms from the instances of VarName{sym} found in vi.metadata.vns. It then extracts the metadata associated with each symbol from the global vi.metadata field. Finally, a new VarInfo is created with a new metadata as a NamedTuple mapping from symbols to type-stable Metadata instances, one for each symbol.

source

One main characteristic of VarInfo is that samples are stored in a linearized form.

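A brief sketch of what "linearized" means in practice (the model here is illustrative):

```julia
using DynamicPPL, Distributions

@model function demo_lin()
    s ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s))
end

vi = VarInfo(demo_lin())

# All realizations, concatenated into one flat vector:
θ = vi[:]
length(θ)  # 2 — one entry per scalar realization
```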
DynamicPPL.link!Function
link!(vi::VarInfo, spl::Sampler)

Transform the values of the random variables sampled by spl in vi from the support of their distributions to the Euclidean space and set their corresponding "trans" flag values to true.

source
DynamicPPL.invlink!Function
invlink!(vi::VarInfo, spl::AbstractSampler)

Transform the values of the random variables sampled by spl in vi from the Euclidean space back to the support of their distributions and sets their corresponding "trans" flag values to false.

source
DynamicPPL.set_flag!Function
set_flag!(vi::VarInfo, vn::VarName, flag::String)

Set vn's value for flag to true in vi.

source
DynamicPPL.unset_flag!Function
unset_flag!(vi::VarInfo, vn::VarName, flag::String)

Set vn's value for flag to false in vi.

source
DynamicPPL.is_flaggedFunction
is_flagged(vi::VarInfo, vn::VarName, flag::String)

Check whether vn has a true value for flag in vi.

source
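An illustrative sketch of the flag API (the model is hypothetical; "del" is a flag used internally, e.g. by particle methods):

```julia
using DynamicPPL, Distributions

@model demo_flags() = s ~ InverseGamma(2, 3)
vi = VarInfo(demo_flags())
vn = @varname(s)

set_flag!(vi, vn, "del")
is_flagged(vi, vn, "del")   # now true
unset_flag!(vi, vn, "del")
is_flagged(vi, vn, "del")   # back to false
```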

For Gibbs sampling the following functions were added.

DynamicPPL.setgid!Function
setgid!(vi::VarInfo, gid::Selector, vn::VarName)

Add gid to the set of sampler selectors associated with vn in vi.

source
DynamicPPL.updategid!Function
updategid!(vi::VarInfo, vn::VarName, spl::Sampler)

Set vn's gid to Set([spl.selector]), if vn does not have a sampler selector linked and vn's symbol is in the space of spl.

source

The following functions were used for sequential Monte Carlo methods.

DynamicPPL.get_num_produceFunction
get_num_produce(vi::VarInfo)

Return the num_produce of vi.

source
DynamicPPL.set_num_produce!Function
set_num_produce!(vi::VarInfo, n::Int)

Set the num_produce field of vi to n.

source
DynamicPPL.increment_num_produce!Function
increment_num_produce!(vi::VarInfo)

Add 1 to num_produce in vi.

source
DynamicPPL.reset_num_produce!Function
reset_num_produce!(vi::VarInfo)

Reset the value of num_produce in vi to 0.

source
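A minimal sketch of the num_produce counter API (the model is illustrative):

```julia
using DynamicPPL, Distributions

@model demo_produce() = m ~ Normal()
vi = VarInfo(demo_produce())

reset_num_produce!(vi)
get_num_produce(vi)       # 0
increment_num_produce!(vi)
get_num_produce(vi)       # 1
set_num_produce!(vi, 10)
```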
DynamicPPL.setorder!Function
setorder!(vi::VarInfo, vn::VarName, index::Int)

Set the order of vn in vi to index, where order is the number of observe statements run before sampling vn.

source
DynamicPPL.set_retained_vns_del_by_spl!Function
set_retained_vns_del_by_spl!(vi::VarInfo, spl::Sampler)

Set the "del" flag of variables in vi with order > vi.num_produce[] to true.

source
Base.empty!Function
empty!(meta::Metadata)

Empty the fields of meta.

This is useful when using a sampling algorithm that assumes an empty meta, e.g. SMC.

source

Evaluation Contexts

Internally, both sampling and evaluation of log densities are performed with AbstractPPL.evaluate!!.

AbstractPPL.evaluate!!Function
evaluate!!(model::Model[, rng, varinfo, sampler, context])

Sample from the model using the sampler with random number generator rng and the context, and store the sample and log joint probability in varinfo.

Returns both the return-value of the original model, and the resulting varinfo.

The method resets the log joint probability of varinfo and increases the evaluation number of sampler.

source
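A hedged sketch of calling evaluate!! with the full positional signature from the docstring (model name is illustrative):

```julia
using DynamicPPL, Distributions, Random

@model demo_eval() = m ~ Normal()
model = demo_eval()

# Sample fresh values from the prior and accumulate the log joint.
retval, vi = DynamicPPL.evaluate!!(
    model, Random.default_rng(), VarInfo(), SampleFromPrior(), DefaultContext()
)
getlogp(vi)  # log density of the sampled `m`
```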

The behaviour of a model execution can be changed with evaluation contexts that are passed as additional argument to the model function. Contexts are subtypes of AbstractPPL.AbstractContext.

DynamicPPL.SamplingContextType
SamplingContext(
+end
         [rng::Random.AbstractRNG=Random.default_rng()],
         [sampler::AbstractSampler=SampleFromPrior()],
         [context::AbstractContext=DefaultContext()],
-)

Create a context that allows you to sample parameters with the sampler when running the model. The context determines how the returned log density is computed when running the model.

See also: DefaultContext, LikelihoodContext, PriorContext

source
DynamicPPL.DefaultContextType
struct DefaultContext <: AbstractContext end

The DefaultContext is used by default to compute the log joint probability of the data and parameters when running the model.

source
DynamicPPL.LikelihoodContextType
struct LikelihoodContext{Tvars} <: AbstractContext
+)
     vars::Tvars
-end

The LikelihoodContext enables the computation of the log likelihood of the parameters when running the model. vars can be used to evaluate the log likelihood for specific values of the model's parameters. If vars is nothing, the parameter values inside the VarInfo will be used by default.

source
DynamicPPL.PriorContextType
struct PriorContext{Tvars} <: AbstractContext
+end
     vars::Tvars
-end

The PriorContext enables the computation of the log prior of the parameters vars when running the model.

source
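As a hedged sketch of how the three contexts relate (the model is illustrative), evaluating the same varinfo under each context should decompose the log joint into prior plus likelihood:

```julia
using DynamicPPL, Distributions

@model function demo_ctx(x)
    m ~ Normal()
    x ~ Normal(m, 1)
end

model = demo_ctx(1.5)
vi = VarInfo(model)

lp(ctx) = getlogp(last(DynamicPPL.evaluate!!(model, vi, ctx)))

# With fixed values in `vi`, expect
# lp(DefaultContext()) ≈ lp(PriorContext()) + lp(LikelihoodContext())
```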
DynamicPPL.MiniBatchContextType
struct MiniBatchContext{Tctx, T} <: AbstractContext
+end
     context::Tctx
     loglike_scalar::T
-end

The MiniBatchContext enables the computation of log(prior) + s * log(likelihood of a batch) when running the model, where s is the loglike_scalar field, typically equal to the number of data points / batch size. This is useful in batch-based stochastic gradient descent algorithms for optimizing log(prior) + log(likelihood of all the data points) in expectation.

source
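A minimal sketch of constructing such a context (the sizes are illustrative):

```julia
using DynamicPPL

# Scale the batch log likelihood up to the full dataset.
n_data, n_batch = 10_000, 100
ctx = MiniBatchContext(DefaultContext(), n_data / n_batch)
```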
DynamicPPL.PrefixContextType
PrefixContext{Prefix}(context)

Create a context that allows you to use the wrapped context when running the model and adds the Prefix to all parameters.

This context is useful in nested models to ensure that the names of the parameters are unique.

See also: @submodel

source

Samplers

In DynamicPPL two samplers are defined that are used to initialize unobserved random variables: SampleFromPrior which samples from the prior distribution, and SampleFromUniform which samples from a uniform distribution.

DynamicPPL.SampleFromPriorType
SampleFromPrior

Sampling algorithm that samples unobserved random variables from their prior distribution.

source
DynamicPPL.SampleFromUniformType
SampleFromUniform

Sampling algorithm that samples unobserved random variables from a uniform distribution.

References

Stan reference manual

source

Additionally, a generic sampler for inference is implemented.

DynamicPPL.SamplerType
Sampler{T}

Generic sampler type for inference algorithms of type T in DynamicPPL.

Sampler should implement the AbstractMCMC interface, and in particular AbstractMCMC.step. A default implementation of the initial sampling step is provided that supports resuming sampling from a previous state and setting initial parameter values. It requires overloading loadstate and initialstep for loading previous states and actually performing the initial sampling step, respectively. Additionally, sometimes one might want to implement initialsampler, which specifies how the initial parameter values are sampled if they are not provided. By default, values are sampled from the prior.

source

The default implementation of Sampler uses the following unexported functions.

DynamicPPL.initialstepFunction
initialstep(rng, model, sampler, varinfo; kwargs...)

Perform the initial sampling step of the sampler for the model.

The varinfo contains the initial samples, which can be provided by the user or sampled randomly.

source
DynamicPPL.loadstateFunction
loadstate(data)

Load sampler state from data.

By default, data is returned.

source
DynamicPPL.initialsamplerFunction
initialsampler(sampler::Sampler)

Return the sampler that is used for generating the initial parameters when sampling with sampler.

By default, it returns an instance of SampleFromPrior.

source

Model-Internal Functions

DynamicPPL.tilde_assumeFunction
tilde_assume(context::SamplingContext, right, vn, vi)

Handle assumed variables, e.g., x ~ Normal() (where x does not occur in the model inputs), accumulate the log probability, and return the sampled value with a context associated with a sampler.

Falls back to

tilde_assume(context.rng, context.context, context.sampler, right, vn, vi)
source
DynamicPPL.dot_tilde_assumeFunction
dot_tilde_assume(context::SamplingContext, right, left, vn, vi)

Handle broadcasted assumed variables, e.g., x .~ MvNormal() (where x does not occur in the model inputs), accumulate the log probability, and return the sampled value for a context associated with a sampler.

Falls back to

dot_tilde_assume(context.rng, context.context, context.sampler, right, left, vn, vi)
source
DynamicPPL.tilde_observeFunction
tilde_observe(context::SamplingContext, right, left, vi)

Handle observed constants with a context associated with a sampler.

Falls back to tilde_observe(context.context, context.sampler, right, left, vi).

source
DynamicPPL.dot_tilde_observeFunction
dot_tilde_observe(context::SamplingContext, right, left, vi)

Handle broadcasted observed constants, e.g., [1.0] .~ MvNormal(), accumulate the log probability, and return the observed value for a context associated with a sampler.

Falls back to dot_tilde_observe(context.context, context.sampler, right, left, vi).

source
+end
diff --git a/previews/PR571/index.html b/previews/PR571/index.html index 0870d4618..91d7095ec 100644 --- a/previews/PR571/index.html +++ b/previews/PR571/index.html @@ -1,2 +1,2 @@ -Home · DynamicPPL

DynamicPPL.jl

A domain-specific language and backend for probabilistic programming languages, used by Turing.jl.

+Home · DynamicPPL
diff --git a/previews/PR571/tutorials/prob-interface/index.html b/previews/PR571/tutorials/prob-interface/index.html index 225b258de..b15e31f3d 100644 --- a/previews/PR571/tutorials/prob-interface/index.html +++ b/previews/PR571/tutorials/prob-interface/index.html @@ -49,4 +49,4 @@ return loss end -cross_val(dataset)
-212760.30282411768
+cross_val(dataset)