
Does fine-tuning MM Grounding DINO need three pre-trained models? #11932

Open
zxt6174 opened this issue Sep 2, 2024 · 1 comment
zxt6174 commented Sep 2, 2024

To train mm_grounding_dino, we need to load two pre-trained models: BERT and Swin.
To fine-tune mm_grounding_dino on my own dataset, I need to load a pre-trained MM_Grounding_DINO checkpoint together with its config file, and that config in turn loads BERT and Swin again. So in total I am loading three pre-trained models.
Is this the correct fine-tuning process? Or is my method wrong, and if so, how should I change it?



simranbajaj06 commented Sep 13, 2024

What are the end-to-end steps you follow to fine-tune MM_Grounding_DINO?
I fine-tuned MM_Grounding_DINO for 12 epochs and for 100 epochs as well, but it's not giving any results; even when I visualize the predictions, not a single bbox is predicted on any image. @zxt6174 @Czm369
