
SO3_fft_real and SO3_ifft_real do not seem to be inverses of each other? #43

Open
suhaslohit opened this issue Jul 31, 2020 · 12 comments

@suhaslohit

Hello

I did a simple experiment where I created a random signal x on SO(3) (I tried multiple bandwidths: 20, 30, 40). Then I computed the FFT using

x_fft = SO3_fft_real.apply(x)

Finally, I computed the IFFT using

x_hat = SO3_ifft_real.apply(x_fft)

I found that x and x_hat are very different from each other and appear to have no patterns that I can identify. For example, one is not a scaled version of the other.

Any help here is appreciated; it's possible I am just using the functions incorrectly.

Thanks
Suhas

@mariogeiger
Collaborator

A random signal contains a lot of high frequencies, but these high frequencies cannot be preserved by the transformation.
What I usually did to test this was to take the x_hat that you got and run the experiment again with it.
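For intuition, the same one-sided behaviour can be sketched on a regular 1D grid with plain numpy (a minimal analogue, not s2cnn code; `N` and the band limit `B` are made-up values): truncating to the lowest B frequencies is lossy the first time, but idempotent afterwards.

```python
import numpy as np

N, B = 64, 10  # grid size and a hypothetical band limit

def bandlimit(x, B):
    """Project onto the lowest B frequencies (a lossy projection P)."""
    X = np.fft.rfft(x)
    X[B:] = 0.0  # discard the high frequencies
    return np.fft.irfft(X, n=N)

rng = np.random.default_rng(0)
x = rng.standard_normal(N)

x_hat = bandlimit(x, B)       # very different from x: the high frequencies are lost
x_hat2 = bandlimit(x_hat, B)  # but P(P(x)) == P(x), so x_hat2 matches x_hat

print(np.allclose(x, x_hat))       # False in general
print(np.allclose(x_hat, x_hat2))  # True
```

This is exactly the behaviour described above: the round trip is not the identity on an arbitrary signal, but it is the identity on signals that are already band-limited.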

@suhaslohit
Author

suhaslohit commented Aug 3, 2020

Hi Mario

Thanks for the reply; you were exactly right. Redoing the experiment with x_hat does show that the transformations are invertible.

(I also found that if I start with a signal x with zeros everywhere except one location where it is 1.0, the same problems persist, again explained by the fact that there are too many high frequencies.)

However, I do not observe these issues when the signals are defined on the real line (rather than the sphere) and I use the usual 1D FFT and 1D IFFT. I was wondering whether you think it is this particular implementation of the FFT on SO(3) that leads to this behavior.

Another thing I noticed which I think may be related to the same problem and could be useful to others as well. (I can post it as a separate issue if you think it is a different topic.)

I took a spherical MNIST image x, generated using gen_data.py. I created x_hat by applying S2_fft_real() followed by S2_ifft_real(), so as to avoid any problems with high frequencies (x_hat is indeed quite close to x).

Then I computed the spherical auto-correlation using

x_hat_fft = S2_fft_real.apply(x_hat, 10)
auto_corr_fft = s2_mm(x_hat_fft, x_hat_fft)
auto_corr = SO3_ifft_real.apply(auto_corr_fft)

However, when I look at auto_corr, there seem to be some issues:

  1. As the two signals are the same, they are already aligned, so we expect the highest correlation to be exactly at alpha = beta = gamma = 0. This does indeed seem to be true, or very close to it.

  2. At alpha = beta = gamma = 0, the value of auto_corr should equal the squared L2 norm of x_hat (right?). However, the values are 4 orders of magnitude smaller, and it is not a constant scale factor across runs with different x_hat's.
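(For reference, on a regular 1D grid the identity in point 2 does hold exactly with numpy's unnormalized FFT conventions; a minimal check of the analogous 1D circular autocorrelation, not the s2cnn computation:)

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(128)

# circular autocorrelation via the FFT: c[k] = sum_n x[n] * x[(n+k) % N]
X = np.fft.fft(x)
c = np.fft.ifft(X * np.conj(X)).real

print(np.isclose(c[0], np.linalg.norm(x)**2))  # True: c[0] equals ||x||^2
print(c.argmax() == 0)                         # True: the peak is at zero shift
```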

I also tried to correlate x_hat with a Dirac function at 0,0 defined on the sphere. Again, the spherical correlation results are not interpretable as far as I can see.

This has stumped me and I am hoping for some resolution.

Thank you so much again for the great support, even now, such a long time after publication.
Suhas

@mariogeiger
Collaborator

It should work... Can you share the code please?

@suhaslohit
Author

suhaslohit commented Aug 3, 2020

This is the test code I am using. For the particular image I am testing I get torch.norm(imagehat)**2 = 241.2491 and auto_corr[0,0,0,0,0] = 0.0377. As an aside, [0,0,0] is indeed very close to where auto_corr attains its maximum.

```python
import numpy as np
import torch
import gzip
import pickle

from s2cnn.soft.s2_fft import S2_fft_real, S2_ifft_real
from s2cnn.soft.so3_fft import SO3_fft_real, SO3_ifft_real
from s2cnn.s2_mm import s2_mm

MNIST_PATH = "s2_mnist_train_nr_test_nr.gz"

with gzip.open(MNIST_PATH, 'rb') as f:
    dataset = pickle.load(f)

train_data = torch.from_numpy(dataset["train"]["images"][:, None, :, :].astype(np.float32))

### Get one image for testing and convert to batched version
image = train_data[0, 0, :, :].unsqueeze(0).unsqueeze(0)
image = image / 255.0

### Remove high frequencies, if any (following what Mario Geiger suggested above)
image_fft = S2_fft_real.apply(image)
imagehat = S2_ifft_real.apply(image_fft)

### Auto-correlation with the reconstructed image: auto_corr should equal
### torch.norm(imagehat)**2 at alpha = beta = gamma = 0, where it should also attain its maximum
imagehat_fft = S2_fft_real.apply(imagehat, 10)
auto_corr_fft = s2_mm(imagehat_fft, imagehat_fft)
auto_corr = SO3_ifft_real.apply(auto_corr_fft)

print(torch.norm(imagehat) ** 2)
print(auto_corr[0, 0, 0, 0, 0])
```

@mariogeiger
Collaborator

I don't have s2cnn installed anymore, so I cannot check myself, but everything sounds right, so I don't clearly understand why it does not work. Did you check that the shapes of the tensors provided to s2_mm are correct? Reading its docstring again made me wonder whether the two arguments are really symmetric (it's a pity that it was written in that non-symmetric way).

@mariogeiger
Collaborator

Also, try computing imagehat using b_out=10 as well; maybe you still have some high frequencies that cause the mismatch.

@mariogeiger
Collaborator

Another thing: you asked why Fourier transforms are exact on the plane but not on the sphere. I think it's because there are no regular tilings of the sphere.

@suhaslohit
Author

Hi Mario

Thanks for the suggestions.

  1. Yes, the input sizes for s2_mm seem fine as per the documentation (there are no syntax errors either, obviously).
  2. Could you explain what you mean by s2_mm not being symmetric?
  3. Do you have any test code that you used to verify the correctness of the SO(3) and S2 convolutions that you could share?

Nothing else I do has helped so far.

Thank you again
Suhas

@suhaslohit
Author

It would also be great if you could point me to a clear formula for what s2_mm is accomplishing. I can try to modify it if that is the one causing trouble.

The equivariance plots provided as an example in the repository suggest that everything should work as expected. Just not the code I am trying to run, which is really weird.

@mariogeiger
Collaborator

https://github.com/jonas-koehler/s2cnn/blob/master/s2cnn/s2_mm.py#L30-L58

in pseudo-code, it does the following:

x and y are lists of tensors indexed by L from 0 to Lmax

output = []
for x_L, y_L in zip(x, y):
    # x_L is a tensor of shape [2L+1, batch, i]
    # y_L is a tensor of shape [2L+1, i, j]
    output.append( make a (2L+1)x(2L+1) matrix and contract the i index )
return output
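In concrete terms, the per-L contraction above can be written with einsum (a self-contained numpy sketch with made-up small dimensions; shapes follow the pseudo-code, not the exact s2cnn memory layout):

```python
import numpy as np

rng = np.random.default_rng(2)
Lmax, batch, i_dim, j_dim = 3, 2, 4, 5  # made-up small dimensions

# one block per degree L, shaped as in the pseudo-code
x = [rng.standard_normal((2 * L + 1, batch, i_dim)) for L in range(Lmax + 1)]
y = [rng.standard_normal((2 * L + 1, i_dim, j_dim)) for L in range(Lmax + 1)]

output = []
for x_L, y_L in zip(x, y):
    # build a (2L+1) x (2L+1) matrix for each (batch, j), contracting the i index
    out_L = np.einsum('mbi,nij->mnbj', x_L, y_L)
    output.append(out_L)

print([o.shape for o in output])
# [(1, 1, 2, 5), (3, 3, 2, 5), (5, 5, 2, 5), (7, 7, 2, 5)]
```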

@tscohen
Collaborator

tscohen commented Aug 27, 2020

"However, I do not observe these issues when the signals are defined on the real line (rather than the sphere) and I use the usual 1D FFT and 1D IFFT. I was wondering whether you think it is this particular implementation of the FFT on SO(3) that leads to this behavior."
This is ultimately due to the fact that we don't have a regular grid on the sphere or SO(3). The S2 and SO3 FFT/IFFT maps are only one-sided inverses. However IIRC there should be the option to compute the FFT up to a higher order, in which case you will get an increasingly accurate reconstruction. This page has some good info: https://www-user.tu-chemnitz.de/~potts/nfft/. Have a look at the papers by that group as well.

"At alpha = beta = gamma = 0, the value of the auto_corr should be equal to the square of the L2 norm of x_hat (right?). However, the values are 4 orders of magnitude smaller and it is not a constant scale factor that applies to different runs using different x_hat's."
This depends on how you normalize the basis functions. You could make them unit norm (int_G f_ij(g)^2 dg = 1), or make the matrices f(g) orthogonal, or something else. I don't quite remember what we choose to do, but it's probably something that makes gradients flow nicely and avoids numerical instabilities. It might even be that this scaling depends on frequency l. Not entirely sure what else could explain the lack of a single scale factor. Could you check if s2_mm is bilinear and fft / ifft are linear?

"It would also be great if you could point me to a clear formula for what s2_mm is accomplishing."
It is doing a batched block-wise matrix multiplication, with the blocks of dim 2l+1 for l=0...N, being flattened and stacked into a big vector for the input and output. Not sure if a formula would be simpler than the code itself: https://github.com/jonas-koehler/s2cnn/blob/master/s2cnn/s2_mm.py. It is made for applying a bunch of filters to a minibatch of data, so only one of the inputs has a batch dim.
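The linearity and bilinearity checks suggested above can be done numerically on random inputs; a generic numpy sketch, using the 1D FFT and an outer product as stand-ins for the s2cnn functions (the helpers `is_linear` and `is_bilinear` are hypothetical names introduced here, not part of s2cnn):

```python
import numpy as np

rng = np.random.default_rng(3)

def is_linear(T, shape):
    """Check T(a*x + b*y) == a*T(x) + b*T(y) on random inputs."""
    x, y = rng.standard_normal(shape), rng.standard_normal(shape)
    a, b = rng.standard_normal(2)
    return np.allclose(T(a * x + b * y), a * T(x) + b * T(y))

def is_bilinear(M, shape):
    """Check that M is linear in each argument separately."""
    z = rng.standard_normal(shape)
    return (is_linear(lambda u: M(u, z), shape)
            and is_linear(lambda u: M(z, u), shape))

# stand-ins: the 1D FFT (linear) and an outer product (bilinear)
print(is_linear(np.fft.fft, (16,)))                    # True
print(is_bilinear(lambda u, v: np.outer(u, v), (16,)))  # True
```

The same two helpers could be pointed at `S2_fft_real.apply`, `SO3_ifft_real.apply`, and `s2_mm` to rule out a bug in those operators.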

@slohit

slohit commented Sep 8, 2020

Hi Taco

Thank you very much for these clarifications.

Yes, s2_mm is bilinear, and both S2_fft and SO3_ifft are linear.

I did try to play with the normalization factors a little, but could not get it to work. It appears that for the FFT functions, the S3.quadrature weights are multiplied into the Wigner matrices, and for the IFFT functions, the Wigner d-matrices are multiplied by 2L+1. I tried removing these weights, but that does not help.

I don't fully follow why these weights are present; I feel that is one possible reason why the results are not as expected.

Suhas
