Update advanced.jmd #414 (Closed)
tutorials/docs-09-using-turing-advanced/advanced.jmd
prior of your model but only when computing the log likelihood and the log joint, you
should [check the type of the internal variable `__context__`](https://github.com/TuringLang/DynamicPPL.jl/issues/154)
such as

```julia; eval=false
if DynamicPPL.leafcontext(__context__) !== Turing.PriorContext()
Turing.@addlogprob! myloglikelihood(x, μ)
end
```julia; eval=false
gdemo(x) = Turing.Model(gdemo, (; x))
model = gdemo([1.5, 2.0])
```

### Reparametrization and generated_quantities

Often, the most natural parameterization for a model is not the most computationally feasible. Consider the following
(efficiently reparametrized) implementation of Neal's funnel [(Neal, 2003)](https://arxiv.org/abs/physics/0009028):
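
The reparametrized model below is equivalent to the funnel's natural (centered) form, which, written in Julia's mean/standard-deviation convention for `Normal`, reads:

$$
y \sim \mathcal{N}(0,\, 3), \qquad x_i \mid y \sim \mathcal{N}\!\bigl(0,\; e^{y/2}\bigr), \quad i = 1, \dots, 9.
$$

Sampling this centered form directly is difficult for gradient-based samplers because the scale of each `x_i` depends on `y`; the non-centered version instead draws standard normals and applies the scaling as a deterministic transform.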

```julia; eval=false
@model function Neal()
# Raw draws
y_raw ~ Normal(0, 1)
x_raw ~ arraydist([Normal(0, 1) for i in 1:9])

# Transform:
y = 3 * y_raw
x = exp.(y ./ 2) .* x_raw

# Return:
return [x; y]
end
```
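
As a quick sanity check of the transform (outside Turing, using only the standard library), scaling standard-normal draws reproduces the intended marginal scale of `y` — a sketch mirroring the variable names in the model above:

```julia
using Random, Statistics

Random.seed!(42)
n = 100_000

# Non-centered draws, mirroring the model above
y_raw = randn(n)
x_raw = randn(n)

# Deterministic transforms
y = 3 .* y_raw              # marginally Normal(0, 3)
x = exp.(y ./ 2) .* x_raw   # conditionally Normal(0, exp(y/2)) given y

std(y)  # close to 3
```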

In this case, the random variables exposed in the chain (`x_raw`, `y_raw`) are not in a helpful form; what we are after are the deterministically transformed variables `x` and `y`.

More generally, there are often quantities in our models that we might be interested in viewing, but which are not explicitly present in our chain.

We can generate draws of these variables (in this case, `x` and `y`) by returning them from the model and then calling `generated_quantities(model, chain)`. This function returns, for each posterior sample, the value of the model's return statement evaluated at that sample.

For example, in the above reparametrization, we sample from our model:

```julia; eval=false
chain = sample(Neal(), NUTS(), 1000)
```

and then call:

```julia; eval=false
generated_quantities(Neal(), chain)
```

to obtain, for each posterior sample, an array containing `x1, x2, ... x9, y`.

In this case, it might be useful to reorganize our output into a matrix for plotting:

```julia; eval=false
reparam_chain = reduce(hcat, generated_quantities(Neal(), chain))'
```

from which we can recover the samples of each variable as follows:

```julia; eval=false
x1_samples = reparam_chain[:, 1]
y_samples = reparam_chain[:, 10]
```
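
The reshaping and indexing above can be tried in isolation with mock data. Here `samples` is a hypothetical stand-in for the output of `generated_quantities` (a vector of per-sample `[x1, ..., x9, y]` vectors):

```julia
# Three mock "posterior samples", each a length-10 vector [x1, ..., x9, y]
samples = [fill(Float64(i), 10) for i in 1:3]

# Stack into a 10×3 matrix, then transpose: rows are samples, columns are variables
reparam_chain = reduce(hcat, samples)'

x1_samples = reparam_chain[:, 1]   # [1.0, 2.0, 3.0]
y_samples = reparam_chain[:, 10]   # [1.0, 2.0, 3.0]
```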

## Task Copying

Turing [copies](https://github.com/JuliaLang/julia/issues/4085) Julia tasks to