
How to fine-tune LLaMA 3 in Google Colab (Pro)? #18

Open
yukiarimo opened this issue Apr 21, 2024 · 1 comment

Comments

@yukiarimo

I have a JSONL dataset like this:

{"text": "This is raw text in 2048 tokens I want to feed in"},
{"text": "This is next line, tokens are also 2048"}

It would be nice to fine-tune with 4-, 8-, or 16-bit LoRA and then just merge the adapter as before!
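(For context, a minimal sketch of that workflow, assuming the Hugging Face transformers/peft stack; the model ID, hyperparameters, and paths below are illustrative, not taken from this repo.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel, get_peft_model

base_id = "meta-llama/Meta-Llama-3-8B"  # gated repo; requires access approval

# Load the base model in half precision (on a T4, pass the 4-bit
# quantization_config sketched further down in this thread instead).
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach a LoRA adapter; rank and target modules are illustrative.
model = get_peft_model(model, LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
))

# ... train with your preferred Trainer, then save just the adapter ...
model.save_pretrained("lora-adapter")

# To merge: reload the base in half precision and fold the adapter in.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, "lora-adapter").merge_and_unload()
merged.save_pretrained("llama3-merged")
```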

@ishandhanani (Contributor)

Hi @yukiarimo - thanks for opening this issue! Creating a lighter version of the llama3 finetune is on our roadmap. It would be extremely helpful if you were able to edit the notebook and add a quantization configuration that works on Colab's T4 GPUs!

Let me know!
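(For anyone picking this up: a minimal sketch of such a configuration, assuming the bitsandbytes 4-bit path; the values are illustrative, not taken from this repo.)

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization keeps an 8B model's weights around 5-6 GB,
# which fits in a T4's 16 GB alongside LoRA training state.
# T4s (compute capability 7.5) lack bfloat16 support, so float16 is
# the compute dtype here; all values are illustrative.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```

Passing this as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` loads the base weights in 4-bit, the usual QLoRA setup for 16 GB GPUs.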
