Problems with training #1

Open · Luo-Ji-X opened this issue Nov 17, 2022 · 3 comments

@Luo-Ji-X

Dear authors,
I downloaded your code and trained it on 2 GTX 1080 Ti GPUs following your README, but my results are much worse than reported. Is there something wrong with my setup? The training results are attached below.

Accuracy per probe view angle (columns for 72°–126° elided):

| Condition | 0° | 18° | 36° | 54° | ... | 144° | 162° | 180° | mean |
|-----------|---------:|---------:|---------:|---------:|-----|------:|------:|------:|---------:|
| NM#5-6 | 0.937374 | 0.935354 | 0.948000 | 0.942424 | ... | 0.950 | 0.948 | 0.912 | 0.939596 |
| BG#1-2 | 0.809091 | 0.829293 | 0.826263 | 0.835714 | ... | 0.816 | 0.840 | 0.787 | 0.811113 |
| CL#1-2 | 0.780000 | 0.810000 | 0.819000 | 0.806122 | ... | 0.827 | 0.817 | 0.800 | 0.808069 |
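For anyone cross-checking the `mean` column: it is presumably the simple average of accuracy over the 11 CASIA-B probe views (0° to 180° in 18° steps). A minimal sketch of that reduction, with placeholder values standing in for the elided 72°–126° columns:

```python
# Minimal sketch: the "mean" column as a simple average of accuracy over
# the 11 CASIA-B probe views (0°-180° in 18° steps). The 0.94 entries are
# placeholders for the 72°-126° columns elided in the table above.
import numpy as np

views = np.arange(0, 181, 18)  # 11 probe view angles
nm_acc = np.array([0.937374, 0.935354, 0.948000, 0.942424,  # 0°-54°
                   0.94, 0.94, 0.94, 0.94,                   # placeholders (72°-126°)
                   0.950, 0.948, 0.912])                     # 144°-180°
assert len(nm_acc) == len(views)
print(f"NM#5-6 mean over {len(views)} views: {nm_acc.mean():.6f}")
```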

@exitudio
Owner

exitudio commented Dec 2, 2022

It's high variance due to the small-data regime and noise from the pose estimation; in particular, the "Coat" (CL) condition has the highest variance.
So we run 8 experiments for each architecture and report the best results.
We also provide the weights of the best run. You can find them here
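For readers trying to reproduce this protocol, a minimal sketch of a best-of-8 loop, assuming a hypothetical `train_and_eval` entry point (the repo's actual training and evaluation calls would go there):

```python
# Sketch of the "best of 8 runs" protocol described above: train the same
# architecture with 8 different seeds and keep the best mean accuracy.
# `train_and_eval` is a placeholder, not the repo's actual entry point.
import random

import numpy as np
import torch

def train_and_eval(seed: int) -> float:
    # Placeholder: run the repo's training + CASIA-B evaluation here and
    # return the mean accuracy. Simulated below for illustration only.
    rng = np.random.default_rng(seed)
    return float(0.80 + 0.05 * rng.random())

best_seed, best_acc = None, -1.0
for seed in range(8):              # 8 independent experiments
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)        # seed all RNGs used during training
    acc = train_and_eval(seed)
    if acc > best_acc:
        best_seed, best_acc = seed, acc

print(f"best of 8 runs: seed={best_seed}, mean accuracy={best_acc:.4f}")
```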

@Tamako888

> It's high variance due to the small-data regime and noise from the pose estimation; in particular, the "Coat" (CL) condition has the highest variance. So we run 8 experiments for each architecture and report the best results. We also provide the weights of the best run. You can find them here

Did you use the same hyperparameters as the defaults in common.py when running the experiments? I'm really confused by the results I got training on a V100. My mean accuracies are NM#5-6: 0.8499, BG#1-2: 0.7028, CL#1-2: 0.6968.

@exitudio
Owner

exitudio commented Jun 5, 2024

Yes, we use the same default parameters as common.py. For BG and CL, the evaluation has very high variance, but NM should not show much difference.
Can you run an evaluation with our pre-trained model? Does it give results similar to those reported?
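A minimal sketch of that sanity check in PyTorch; the architecture and checkpoint filename below are placeholders, not this repo's actual names:

```python
# Sketch: reload the released pre-trained weights and re-run evaluation only,
# which separates training variance from problems in the eval pipeline.
# The model and checkpoint path below are placeholders; substitute the repo's
# actual model (built with the common.py defaults) and its weight file.
import torch

model = torch.nn.Linear(10, 2)                           # placeholder architecture
state = torch.load("best_model.pt", map_location="cpu")  # placeholder path
model.load_state_dict(state)                             # keys must match the model
model.eval()                                             # freeze dropout/batch norm

with torch.no_grad():
    # Run the repo's CASIA-B evaluation here; the NM/BG/CL means should be
    # close to the reported numbers if the eval pipeline is set up correctly.
    pass
```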
