Hi,
I'm trying to run a sample command from the README file, but I'm getting an error. I tried running both example_chat_completion.py and example_text_completion.py, and I get an error either way.
For example, I tried to run the following command:

```
$NGPUS PYTHONPATH=$(git rev-parse --show-toplevel) torchrun \
  --nproc_per_node=$NGPUS \
  models/scripts/example_chat_completion.py $CHECKPOINT_DIR \
  --model_parallel_size $NGPUS
```
But I get the following error:
```
Traceback (most recent call last):
  File "models/scripts/example_chat_completion.py", line 16, in <module>
    from models.llama3.api.datatypes import (
  File "/home/RAF/local/llama-models/models/llama3/api/__init__.py", line 9, in <module>
    from .chat_format import *  # noqa
  File "/home/RAF/local/llama-models/models/llama3/api/chat_format.py", line 13, in <module>
    from .tokenizer import Tokenizer
  File "/home/RAF/local/llama-models/models/llama3/api/tokenizer.py", line 14, in <module>
    from typing import (
ImportError: cannot import name 'Literal' from 'typing' (/usr/lib64/python3.7/typing.py)
```
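For context: `typing.Literal` was only added to the standard library in Python 3.8, so this import fails on the Python 3.7 interpreter shown in the traceback. A minimal sketch of a version-guarded import (assuming the `typing_extensions` backport package is available on older interpreters; this shim is illustrative, not part of the llama-models codebase) would look like:

```python
import sys

# typing.Literal exists only on Python 3.8+; older interpreters need
# the backport from the typing_extensions package instead.
if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal

# Example use of the imported name, whichever source it came from.
Mode = Literal["chat", "text"]
```

The simpler fix is usually to run the scripts under Python 3.8 or newer rather than patching the imports.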
Thanks for your help.
Note: the documentation suffers from a lack of clear explanations. Are you planning an update with concrete, tested examples that actually work?
Model: meta-llama/Meta-Llama-3.1-8B
Use via huggingface? : no
Operating system : Fedora Linux
GPU VRAM : N/A (CPU used)
Number of GPUs : N/A (CPU used)