An overview of "sharp bits" of RxInfer #60

Open
svilupp opened this issue Jun 18, 2022 · 10 comments
Labels: documentation (Improvements or additions to documentation)

@svilupp

svilupp commented Jun 18, 2022

This is inspired by JAX Sharp Bits

It would be awesome to have a documentation page describing the most unexpected differences from what a normal Julia user (or a Turing.jl user) would expect, e.g.:

  • what the allowed operations are within a model (you cannot use functions // you cannot use sum on a random variable array // you cannot use + with more than 2 arguments); see the sketch after this list
  • what the initialization requirements are and the minimum required
  • troubleshooting frequently occurring problems
    • what should be in your model's return statement (because you cannot just execute the model anyway; my guess: it's for the quantities that you want to manually subscribe to)
    • if your fit is bad at the very beginning of the time series, try increasing the number of iterations
    • if you get BFE NaNs, check that there isn't any input multiplied by an MvNormal with only zeros; if there is, add a tiny value to the first column
    • if you get a MethodError with q_/m_, try XYZ first
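
To make the first bullet concrete, here is a rough, purely illustrative sketch (model name and distributions are made up, written in the ReactiveMP-style model language at the time of this issue) of how a sum over a random variable array has to be expressed through binary `+` relations:

```julia
using Rocket, GraphPPL, ReactiveMP

# Hedged sketch: `y ~ sum(x)` and `y ~ x[1] + x[2] + x[3]` are not supported,
# so the total is accumulated through explicit pairwise `+` relations.
@model function sum_of_latents(n)
    x = randomvar(n)          # latent variables
    s = randomvar(n - 1)      # running partial sums
    y = datavar(Float64)      # noisy observation of the total

    for i in 1:n
        x[i] ~ NormalMeanVariance(0.0, 1.0)
    end

    s[1] ~ x[1] + x[2]
    for i in 2:(n - 1)
        s[i] ~ s[i - 1] + x[i + 1]
    end

    y ~ NormalMeanVariance(s[n - 1], 0.1)
end
```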
@bvdmitri bvdmitri added the documentation Improvements or additions to documentation label Jul 1, 2022
@SebastianCallh

Hi! I am looking into ReactiveMP for some state space modelling. It looks like a very cool and useful package; however, not being able to call functions in models is unfortunately kind of a deal breaker for me. My use case is that I would like to call a function that returns transition/observation matrices and use those in the inference.

Is this something you are looking to make possible in the future, or is this a hard limitation of ReactiveMP?

@bvdmitri
Member

Hey @SebastianCallh ,

Depends on what exactly you are trying to achieve. If you just want to generate a state-transition matrix that depends on some "out-of-model" process, just make it a datavar(Matrix) and simply pass it together with your observations. There is a good example of online filtering in a hierarchical Gaussian filter model: https://biaslab.github.io/ReactiveMP.jl/stable/examples/hierarchical_gaussian_filter/. In this demo we set our priors to be datavar's such that we can continuously change/update them during the inference procedure. You can set up something similar for the state-transition matrix as well.
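
For illustration only, a rough sketch of what that could look like, loosely following the linear Gaussian state-space examples in the ReactiveMP docs (the model name, noise covariances, and dimensions are assumptions):

```julia
# Hedged sketch: the transition matrices A[i] are computed outside the model
# and supplied as data, alongside the observations.
@model function externally_driven_ssm(n, d)
    A = datavar(Matrix{Float64}, n)   # per-step transition matrices (known inputs)
    y = datavar(Vector{Float64}, n)   # observations
    x = randomvar(n)                  # latent states

    x_prior ~ MvNormalMeanCovariance(zeros(d), diageye(d))
    x_prev = x_prior

    for i in 1:n
        x[i] ~ MvNormalMeanCovariance(A[i] * x_prev, diageye(d))
        y[i] ~ MvNormalMeanCovariance(x[i], diageye(d))
        x_prev = x[i]
    end
end
```

The generated matrices would then be supplied next to the observations in the data argument of the inference routine, or streamed in step by step as in the linked filtering example.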

If you want to generate those matrices depending on some random variable in your model, then it is not supported yet, but we are working hard to make it work. The main difficulty here is that ReactiveMP.jl aims to support fast real-time inference in state-space models, and making inference fast for any arbitrary function is quite a difficult challenge.

@SebastianCallh

Thank you for that insight, @bvdmitri. It makes sense to keep the scope limited.

I'm actually working on a library for structural time series primitives, and it looks like ReactiveMP is a wonderful inference engine that I could pair it with. I have used Turing with a Kalman filter to fit models before, but it is quite slow.

There are some examples of how I currently construct and use models here: https://github.com/SebastianCallh/STSlib.jl#basics.
Do you think this would pair well with ReactiveMP? I was thinking I would call the STS model each time step for states and observations and simply pass them to ReactiveMP for inference, but I couldn't figure out how to call my functions (operating on mean vectors and covariance matrices) with RandomVariable objects.

@albertpod albertpod changed the title An overview of "sharp bits" of ReactiveMP An overview of "sharp bits" of RxInfer Feb 15, 2023
@albertpod albertpod transferred this issue from ReactiveBayes/ReactiveMP.jl Feb 15, 2023
@bartvanerp
Member

Hi @SebastianCallh!

Our sincere apologies for the (extremely) late reply to your question. It just escaped our attention. Although your question seems a bit unrelated to the issue, I am happy to answer it here. From your description I get the feeling that you are looking for the following example in our docs: https://biaslab.github.io/RxInfer.jl/stable/examples/Kalman%20filter%20with%20LSTM%20network%20driven%20dynamic/#Generate-data. It basically describes a Kalman filter whose transition matrices are modeled by a neural network (here powered by Flux.jl). The code is a bit rough there, because the neural network is trained simultaneously. If your network has already been trained, then you could make use of the more convenient rxinference function, which processes data sequentially in an online manner. In particular, our @autoupdates macro might be what you are looking for.
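
To make the @autoupdates pointer a bit more concrete, here is a minimal, hedged sketch in the style of the RxInfer streaming examples. The model, the priors, the `observations` variable, and the exact keyword choices are illustrative assumptions, not a verbatim copy of the docs:

```julia
using RxInfer

# One step of a simple random-walk model; the prior parameters are datavars
# so they can be refreshed automatically from the previous posterior.
@model function one_step_filter()
    x_prev_mean = datavar(Float64)
    x_prev_var  = datavar(Float64)

    x_prev ~ NormalMeanVariance(x_prev_mean, x_prev_var)
    x ~ NormalMeanVariance(x_prev, 1.0)      # random-walk transition

    y = datavar(Float64)
    y ~ NormalMeanVariance(x, 1.0)           # observation model
end

# After each observation, feed the posterior of `x` back in as the next prior.
autoupdates = @autoupdates begin
    x_prev_mean, x_prev_var = mean_var(q(x))
end

result = rxinference(
    model         = one_step_filter(),
    data          = (y = observations,),               # processed sequentially
    autoupdates   = autoupdates,
    initmarginals = (x = NormalMeanVariance(0.0, 1e3),),
    autostart     = true,
)
```

In a real setup the transition would come from the already trained network (computed outside the model and passed in as data each step); the random walk here just keeps the sketch self-contained.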

I hope this answers/solves your problem. If you would like to dive into a bit more detail regarding your implementation, feel free to open a separate issue where we can discuss it, as we think this line of research is very interesting!

@bartvanerp
Member

bartvanerp commented Feb 27, 2023

Further update on the sharp bits section:

  • Variable relations described by arbitrary functions can now be used inside the model specification language. As inference in these cases is not tractable, we need to resort to some approximation method. For this purpose we use CVI, which needs to be specified using the @meta macro. See this notebook for an example.
  • Multi-argument +, - and * operations are now available. They can also be joined, e.g. y ~ a + b * c (a small sketch follows after this list).
  • The sum operation is not yet available, because it depends on the dependency assumptions between the variables. However, an issue has been filed here. @wouterwln is working hard on improving GraphPPL.jl and catching issues like this.
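
For context, a tiny hedged sketch of how the joined arithmetic could appear in a model (an illustrative regression-style model, not taken from the docs; `c` is treated as a known input to keep inference tractable):

```julia
@model function combined_arithmetic()
    c = datavar(Float64)              # known input
    y = datavar(Float64)              # observation

    a ~ NormalMeanVariance(0.0, 1.0)
    b ~ NormalMeanVariance(0.0, 1.0)

    # multi-argument / joined arithmetic on the right-hand side
    y ~ NormalMeanVariance(a + b * c, 1.0)
end
```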


If someone encounters certain limitations of our tool, we highly encourage you to open an issue, so that we know what pitfalls people are experiencing and how we can improve our package :)

@SebastianCallh

@bartvanerp Thank you so much for your polite response. I was a bit too eager when posting my question here, and completely agree it belongs in a separate issue. I will study the example you linked!

@bvdmitri
Member

bvdmitri commented Oct 5, 2023

@mhidalgoaraya

@albertpod
Member

Hi @mhidalgoaraya! I see this one has "in progress" status. Are we working on it?

@albertpod
Member

ping @mhidalgoaraya

@mhidalgoaraya
Contributor

@albertpod, this is the first time I am seeing this. It seems it was assigned to me; I can take care of it. Can we discuss it next week so you can get me up to speed? Thanks
