Temporal Difference Variational Auto-Encoder (TD-VAE)

(Implemented using PyTorch)

This is a PyTorch implementation of the TD-VAE introduced in the ICLR 2019 paper "Temporal Difference Variational Auto-Encoder" (Gregor et al.). TD-VAE is designed to have the following three features:

  1. It learns a compressed state representation of the observations and makes predictions at the level of states rather than raw observations.
  2. From the observations, it learns a belief state that contains all the information required to make predictions about the future.
  3. It learns to make predictions multiple steps into the future directly (jumpy predictions), instead of predicting step by step (a minimal sketch of these pieces follows this list).
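
As a rough orientation, the sketch below wires up the main ingredients named above: a belief LSTM, a belief-to-state distribution, a jumpy state transition, and a decoder. All module names, layer sizes, and the random input are illustrative assumptions, not the exact architecture used in this repository; the full TD-VAE training objective also involves a smoothing posterior q(z_t1 | z_t2, b_t1, b_t2), which is omitted here.

```python
# Minimal sketch of the main TD-VAE components (illustrative sizes:
# 28x28 frames flattened to 784, belief size 50, latent size 8).
import torch
import torch.nn as nn

class DistributionNet(nn.Module):
    """Maps an input vector to the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, z_dim, hidden=50):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def sample(mu, logvar):
    # Reparameterised sample from N(mu, exp(logvar))
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

x_dim, b_dim, z_dim = 784, 50, 8

# (2) Belief state: an LSTM aggregates the observation history into b_t.
belief_rnn = nn.LSTM(x_dim, b_dim, batch_first=True)
# (1) Compressed state: p_B(z_t | b_t) turns the belief into a latent state.
belief_to_z = DistributionNet(b_dim, z_dim)
# (3) Jumpy transition: p(z_t2 | z_t1) predicts a state several steps ahead.
transition = DistributionNet(z_dim, z_dim)
# Decoder p(x | z) reconstructs a frame from the latent state.
decoder = nn.Sequential(nn.Linear(z_dim, 200), nn.Tanh(),
                        nn.Linear(200, x_dim), nn.Sigmoid())

# Forward pass over a batch of frame sequences of shape (batch, time, 784);
# random data stands in for real moving-digit sequences.
frames = torch.rand(4, 20, x_dim)
beliefs, _ = belief_rnn(frames)          # b_1 ... b_T
mu, logvar = belief_to_z(beliefs[:, 5])  # state distribution at t1 = 5
z_t1 = sample(mu, logvar)
mu2, logvar2 = transition(z_t1)          # jump directly to some later t2
z_t2 = sample(mu2, logvar2)
predicted_frame = decoder(z_t2)          # shape (4, 784)
```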

Here, based on the information disclosed in the paper, we reproduce the moving MNIST digit experiment. In this experiment, the model is shown a sequence of frames in which an MNIST digit moves either to the left or to the right, and it learns to predict how the digit moves in subsequent steps. After training, we can feed a sequence of frames into the model and see how well it predicts the future.
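
The moving-digit sequences can be constructed roughly as below: each frame shifts the digit sideways by one pixel with wrap-around. The frame size, shift per step, and preprocessing are assumptions for illustration and may differ from what this repository actually uses; the sketch assumes torchvision is available for downloading MNIST.

```python
# Rough sketch of building a "moving MNIST digit" sequence.
import numpy as np
from torchvision import datasets

mnist = datasets.MNIST(root="./data", train=True, download=True)

def make_sequence(index, seq_len=20, direction=+1):
    """Return a (seq_len, 28, 28) float array of one digit drifting sideways."""
    frame = np.asarray(mnist[index][0], dtype=np.float32) / 255.0
    frames = []
    for t in range(seq_len):
        # np.roll along the column axis shifts the digit left (-1) or right (+1)
        frames.append(np.roll(frame, shift=direction * t, axis=1))
    return np.stack(frames)

seq = make_sequence(0, seq_len=20, direction=-1)   # digit moving to the left
print(seq.shape)                                   # (20, 28, 28)
```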

Here is the result: [figure: model predictions for a moving MNIST digit after training]
