problem when using local mistral models #7214
Unanswered
Asma-droid asked this question in Questions
Replies: 1 comment
Related to this issue: #7277
I'm trying to use local Mistral models as follows:
I get this error:
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
rest-api_1 | return await get_asynclib().run_sync_in_worker_thread(
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
rest-api_1 | return await future
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
rest-api_1 | result = context.run(func, *args)
rest-api_1 | File "/code/src/main.py", line 219, in ask_rag_pipeline
rest-api_1 | result = rag_pipeline.run(
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/haystack/core/pipeline/pipeline.py", line 671, in run
rest-api_1 | self.warm_up()
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/haystack/core/pipeline/pipeline.py", line 573, in warm_up
rest-api_1 | self.graph.nodes[node]["instance"].warm_up()
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/haystack/components/generators/hugging_face_local.py", line 145, in warm_up
rest-api_1 | self.pipeline = pipeline(**self.huggingface_pipeline_kwargs)
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 905, in pipeline
rest-api_1 | framework, model = infer_framework_load_model(
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 279, in infer_framework_load_model
rest-api_1 | model = model_class.from_pretrained(model, **kwargs)
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
rest-api_1 | return model_class.from_pretrained(
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2960, in from_pretrained
rest-api_1 | quantization_config, kwargs = BitsAndBytesConfig.from_dict(
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/transformers/utils/quantization_config.py", line 90, in from_dict
rest-api_1 | config = cls(**config_dict)
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/transformers/utils/quantization_config.py", line 259, in __init__
rest-api_1 | self.bnb_4bit_compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
rest-api_1 | File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 1938, in __getattr__
rest-api_1 | raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
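The last frame shows where it breaks: transformers resolves the quantization config's `bnb_4bit_compute_dtype` string with `getattr(torch, bnb_4bit_compute_dtype)`, so any value that is not the exact name of a `torch` attribute (for example `"torch.bfloat16"` instead of `"bfloat16"`, or a dtype your installed torch build does not have) raises exactly this `AttributeError`. A minimal stand-alone sketch of the failing call pattern, using a stub namespace instead of real torch so no GPU or model is needed (the stub and function names here are illustrative, not from transformers):

```python
import types

# Stub standing in for the real torch module (attribute names are the real
# torch dtype names; the values are just placeholders for this sketch).
torch_stub = types.SimpleNamespace(float16="float16", bfloat16="bfloat16")

def resolve_compute_dtype(name):
    # Mirrors the failing line in transformers/utils/quantization_config.py:
    #   self.bnb_4bit_compute_dtype = getattr(torch, bnb_4bit_compute_dtype)
    return getattr(torch_stub, name)

print(resolve_compute_dtype("bfloat16"))       # a valid dtype name resolves
try:
    resolve_compute_dtype("torch.bfloat16")    # a malformed value crashes
except AttributeError as exc:
    print("AttributeError:", exc)
```

So if the local model folder's `config.json` carries a `quantization_config` with a bad `bnb_4bit_compute_dtype` value, or the installed torch predates the requested dtype, a quick `hasattr(torch, "<that value>")` check before building the pipeline should pinpoint it.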
In parallel I have tried
and it works fine.
Any help, please?