
About demo #78

Open
Einkazusa opened this issue Jul 11, 2023 · 4 comments

Comments


Einkazusa commented Jul 11, 2023

Hello! First of all, thank you very much for your research on EPro-PnP.
I would like to incorporate the EPro-PnP layer into my own pose-estimation network, and I found some relevant instructions in the fit_identity.ipynb file in the demo folder — thank you for providing it! I have some uncertainties about this demo and hope you can answer them:

  1. In the example, you used "out_pose = EProPnP(MLP(in_pose))". Is the purpose of the MLP just to generate dummy inputs such as x3d, x2d, and w2d? Also, my network predicts the 2D coordinates of keypoints in RGB images, so should x3d come from the real data of the training set? Furthermore, different networks may define weights differently. Is there a way to convert them to the w2d format used in EPro-PnP?
  2. The variable "out_pose" is used in both model.forward_train and the loss calculation. Does "out_pose" represent the real pose data from the training set?

I would greatly appreciate your answers!
@Einkazusa changed the title from 关于demo ("About demo") to About demo on Jul 13, 2023
Lakonik (Collaborator) commented Jul 23, 2023

Hi! Thank you for your interest in our work.

  1. Yeah, these are dummy inputs (only meant to showcase the basic usage of the code). If you have predefined keypoints, then x3d should be the coordinates of those keypoints. For w2d, the only hard requirement is to apply softmax and global scaling after the initial predictions (the global scaling should be handled with care, as explained in the updated 2023 arXiv preprint).

  2. Yes, out_pose is the g.t. pose. During training you need to add the g.t. pose as a candidate proposal to stabilize the Monte Carlo sampling.
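For reference, the w2d recipe in (1) boils down to a spatial softmax plus a learned global scale, mirroring the demo's `log_weight_scale` pattern; here is a minimal sketch (the `WeightHead` module and tensor shapes are hypothetical, not part of the library's API):

```python
import torch
import torch.nn as nn

class WeightHead(nn.Module):
    """Hypothetical head producing EPro-PnP-style w2d from raw logits."""
    def __init__(self):
        super().__init__()
        # static global scale in log-space, one entry per x/y component
        self.log_weight_scale = nn.Parameter(torch.zeros(2))

    def forward(self, w2d_logits):
        # w2d_logits: (batch, num_points, 2), raw outputs of your own network.
        # log_softmax over the keypoint dim plus the global log-scale, then exp:
        # per-component weights sum to exp(log_weight_scale) for every sample.
        return (w2d_logits.log_softmax(dim=-2) + self.log_weight_scale).exp()

head = WeightHead()
w2d = head(torch.randn(4, 8, 2))
# with log_weight_scale == 0 the weights sum to 1 over the 8 keypoints
assert torch.allclose(w2d.sum(dim=-2), torch.ones(4, 2), atol=1e-5)
```

So whatever weight definition your own network uses, it can be adapted by feeding its raw per-keypoint scores through this softmax-plus-scale normalization.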

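As a rough illustration of the g.t.-proposal trick in (2) (the tensor shapes and the 7-dim translation+quaternion pose layout are assumptions for this sketch, not the library's actual interface):

```python
import torch

def add_gt_proposal(pose_proposals, gt_pose):
    # pose_proposals: (num_proposals, batch, pose_dim) candidate poses
    # gt_pose:        (batch, pose_dim) ground-truth pose from the training set
    # Prepending the g.t. pose guarantees the proposal set contains a sample
    # at the true mode, which stabilizes the Monte Carlo importance weights.
    return torch.cat([gt_pose.unsqueeze(0), pose_proposals], dim=0)

proposals = torch.randn(32, 4, 7)  # e.g. 32 proposals, batch of 4
gt = torch.randn(4, 7)
augmented = add_gt_proposal(proposals, gt)
assert augmented.shape == (33, 4, 7)
assert torch.equal(augmented[0], gt)
```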
Einkazusa (Author) commented Aug 12, 2023

Thank you for your response!

In addition, I have some further questions about a segment of code in the initialization of the demo:

# Here we use static weight_scale because the data noise is homoscedastic
self.log_weight_scale = nn.Parameter(torch.zeros(2))

Later in the code, there are parts related to self.log_weight_scale:

w2d = (w2d.log_softmax(dim=-2) + self.log_weight_scale).exp()
norm_factor = model.log_weight_scale.detach().exp().mean()

Here, self.log_weight_scale receives gradients through w2d but not through norm_factor. In practical usage, should the .detach() be removed so that the loss updates self.log_weight_scale through norm_factor instead of through w2d?

Furthermore, during training there are cases where loss_mc becomes smaller than 0. Is this normal?

Lakonik (Collaborator) commented Aug 18, 2023

The .detach() should be kept. norm_factor is only meant to normalize the loss weight, and has no gradients during training. loss_mc below 0 is totally normal.
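To make that gradient flow concrete, here is a self-contained toy (not the actual EPro-PnP training loop) showing that log_weight_scale is updated through the w2d path while the detached norm_factor carries no gradient:

```python
import torch
import torch.nn as nn

log_weight_scale = nn.Parameter(torch.zeros(2))
w2d_logits = torch.randn(8, 2)

# gradient DOES flow into log_weight_scale through w2d ...
w2d = (w2d_logits.log_softmax(dim=-2) + log_weight_scale).exp()
# ... but NOT through norm_factor, which is detached from the graph
norm_factor = log_weight_scale.detach().exp().mean()
assert not norm_factor.requires_grad

# norm_factor only rescales the loss magnitude
loss = w2d.sum() / norm_factor
loss.backward()
assert log_weight_scale.grad is not None  # updated via the w2d path only
```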

Einkazusa (Author) commented

Thank you for your response again!
One more thing: when I train the network using the EPro-PnP layer, a warning appears:

.conda\envs\EPro-PnP\lib\site-packages\torch\distributions\distribution.py:271: UserWarning: <class 'epropnp.distributions.AngularCentralGaussian'> does not define `support` to enable sample validation. Please initialize the distribution with `validate_args=False` to turn off validation.
  warnings.warn(f'{self.__class__} does not define `support` to enable ' +

Will this affect the performance of the network?
