Error while running inference with the Mistral LLM #1309

Open
himanshushukla12 opened this issue Aug 13, 2024 · 0 comments

I am facing a problem while running inference with the Mistral LLM example:

```
Olive/examples/mistral$ python mistral.py --config mistral_fp16_optimize.json --inference --prompt "Language models are very useful"        
/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Traceback (most recent call last):
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/examples/mistral/mistral.py", line 130, in <module>
    main()
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/examples/mistral/mistral.py", line 122, in main
    output = inference(args.model_id, optimized_model_dir, ep, args.prompt, args.max_length)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/examples/mistral/mistral.py", line 74, in inference
    tokenizer = AutoTokenizer.from_pretrained(model_id)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 814, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
    return cls._from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2261, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 124, in __init__
    super().__init__(
  File "/home/z004x2xz/WorkAssignedByMatt/Olive/venv3.11/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 111, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

```
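In similar reports, this `PyPreTokenizerTypeWrapper` error often comes from an installed `tokenizers` wheel that is too old to parse the pretokenizer section of the model's `tokenizer.json`, in which case upgrading `transformers`/`tokenizers` is a common workaround. A minimal stdlib sketch for comparing an installed version against a minimum (the `0.14` threshold below is an illustrative assumption, not a confirmed requirement of the Olive example):

```python
def version_tuple(version: str) -> tuple:
    """Parse a dotted version string like '0.13.3' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(installed: str, required: str) -> bool:
    """Return True if the installed version is at least the required one."""
    return version_tuple(installed) >= version_tuple(required)

# Example: a hypothetical tokenizers 0.13.3 against an assumed 0.14 minimum.
print(meets_minimum("0.13.3", "0.14"))  # False
print(meets_minimum("0.15.2", "0.14"))  # True
```

The installed version can be read from `tokenizers.__version__` and checked the same way.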

**Other information**
 - OS: Ubuntu 22.04
 - Olive version: 0.7.0
 - onnxruntime-genai: 0.3.0
 - onnxruntime-gpu: 1.18.1
 - Python: 3.11.9