I am currently using CodeLlama-7B on an RTX 3090 24GB GPU, and I have a question regarding the relationship between context length and VRAM usage. According to the model documentation, the context length of CodeLlama-7B is 16,384 tokens.
I loaded the model using Hugging Face Transformers with 8-bit precision (see the code below).
I then tested the model with different input lengths. For a 3000-token input, the GPU VRAM usage was 16GB. However, when I provided a 6000-token input, the GPU VRAM spiked to 22GB. My primary concern is understanding the relationship between context length and VRAM usage.
Code for Reference:
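Roughly along these lines (a minimal sketch; the model id `codellama/CodeLlama-7b-hf`, the `long_prompt` placeholder, and the generation arguments stand in for my actual script):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "codellama/CodeLlama-7b-hf"  # assumed checkpoint; I use the base 7B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights via bitsandbytes
    device_map="auto",
)

# Placeholder prompt; in my tests this was a ~3000- or ~6000-token chunk of source code.
long_prompt = "def fibonacci(n):\n    ..."

inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.1f} GiB")
```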
Questions:

1. How does VRAM usage scale with input (context) length? Why does going from a 3000-token to a 6000-token input push usage from 16GB to 22GB?
2. Given this scaling, is it realistic to use the full 16,384-token context of CodeLlama-7B on a 24GB RTX 3090 with 8-bit loading?
Any clarification on these matters would be greatly appreciated. Thank you!