Hi @yukiarimo - thanks for opening this issue! Creating a lighter version of the Llama 3 fine-tune is on our roadmap. It would be extremely helpful if you were able to edit the notebook and add a quantization configuration that works on Colab's T4 GPUs!
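For reference, a minimal sketch of what such a quantization configuration could look like, assuming the notebook loads the model through Hugging Face `transformers` with `bitsandbytes` installed (the model name below is a placeholder, not necessarily the one the notebook uses). Note the compute dtype is `float16` rather than `bfloat16`, since T4 GPUs do not support bfloat16:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization config; double quantization saves a bit more memory,
# and float16 compute is required on T4 (no bfloat16 support on that GPU).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder model id; substitute whichever checkpoint the notebook uses.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)
```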
I have a JSONL dataset like this:
It would be nice to fine-tune with 4-bit, 8-bit, or 16-bit LoRA and then just merge the adapter as before!
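If the fine-tune goes through `peft`, merging the trained LoRA adapter back into the base model is typically a call to `merge_and_unload()`. A rough sketch, assuming a saved adapter directory (the model id and paths here are placeholders):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model in full precision for merging (placeholder model id).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Attach the trained LoRA adapter from disk (placeholder path).
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the adapter weights into the base weights and drop the LoRA layers,
# leaving a plain model that can be saved and loaded without peft.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```

One caveat: when the base model was loaded in 4-bit or 8-bit, merging directly into the quantized weights is lossy, so it is common to reload the base model in 16-bit before merging.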