The Vicuna LLM is not frozen during pretraining #24

Open
ZJULiHongxin opened this issue May 9, 2024 · 0 comments


@ZJULiHongxin

Hello! Thank you for open-sourcing this great work. @yaoyuanTHU @guozonghao96 @xrorrim
I tried pretraining and fine-tuning LLaVA-UHD but found a small error.

I counted the trainable parameters of the LLM backbone with the following snippet:

    # Count total and trainable parameters of the LLM backbone (model.model).
    if model_args.freeze_backbone:
        model.model.requires_grad_(False)
    trainable_params_info["LLM_backbone"] = {
        "#params": sum(p.numel() for p in model.model.parameters()),
        "#trainable_params": sum(p.numel() for p in model.model.parameters() if p.requires_grad)
    }

When pretraining with pretrain.sh, the number of trainable parameters of the LLM backbone is not 0, even though the paper states: "Stage 1: Pretraining details. During this stage, only the perceiver resampler is tuned."
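For reference, here is a minimal sketch of the Stage-1 behaviour I expected: freeze the whole model, then re-enable only the resampler/projector. The name substrings "resampler" and "mm_projector" below are assumptions on my side; the actual module names in LLaVA-UHD may differ.

    # Minimal sketch (not the repo's code): freeze everything, then re-enable
    # only the resampler/projector parameters. The substrings below are
    # assumptions; substitute the actual module names used in LLaVA-UHD.
    model.requires_grad_(False)
    for name, param in model.named_parameters():
        if "resampler" in name or "mm_projector" in name:
            param.requires_grad = True

    # With this in place, the backbone count above should report 0 trainable params.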

Could you please clarify this discrepancy? Thanks in advance.
