
failed to generate embedding #15

Open
Bobpick opened this issue May 10, 2024 · 4 comments

Comments

@Bobpick

Bobpick commented May 10, 2024

I installed it without any issues. However, the embedding step took quite some time, so I ended it. When I tried to restart it, it threw an error. I then replaced all the files except the vault, and now I get this:

Cloning into 'easy-local-rag'...
remote: Enumerating objects: 146, done.
remote: Counting objects: 100% (29/29), done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 146 (delta 12), reused 3 (delta 1), pack-reused 117
Receiving objects: 100% (146/146), 63.38 KiB | 1.06 MiB/s, done.
Resolving deltas: 100% (72/72), done.
PS C:\Users\Bob\easy-local-rag> cd ..
PS C:\Users\Bob> git clone  https://github.com/AllAboutAI-YT/easy-local-rag.git
fatal: destination path 'easy-local-rag' already exists and is not an empty directory.
PS C:\Users\Bob> cd .\easy-local-rag\
PS C:\Users\Bob\easy-local-rag> python .\localrag_no_rewrite.py
Traceback (most recent call last):
  File "C:\Users\Bob\easy-local-rag\localrag_no_rewrite.py", line 92, in <module>
    response = ollama.embeddings(model='mxbai-embed-large', prompt=content)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\ollama\_client.py", line 198, in embeddings
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\ollama\_client.py", line 73, in _request
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: failed to generate embedding
PS C:\Users\Bob\easy-local-rag>

Any thoughts?

@AllAboutAI-YT
Owner

did you pull the embeddings model?:)
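For anyone else landing here: this error is commonly raised when Ollama can't find the embedding model locally. Below is a minimal sketch (not code from this repo; `ensure_model` is a made-up helper name) that checks the local model list and pulls `mxbai-embed-large` if it's missing, assuming the 0.1.x `ollama` Python client's dict-shaped responses:

```python
# Sketch only: verify the embedding model is present before embedding.
# Assumes the ollama 0.1.x Python client, where client.list() returns a
# dict like {"models": [{"name": "mxbai-embed-large:latest", ...}, ...]}.
def ensure_model(client, name):
    """Pull `name` through an ollama-style client if it is not already
    local. Returns True if a pull was triggered, False otherwise."""
    local = {m["name"].split(":")[0] for m in client.list()["models"]}
    if name not in local:
        client.pull(name)  # equivalent to `ollama pull <name>` on the CLI
        return True
    return False
```

You could call `ensure_model(ollama, "mxbai-embed-large")` once at the top of the script, before the `ollama.embeddings(...)` loop, so a missing or interrupted pull fails loudly instead of surfacing as `failed to generate embedding`.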

@Bobpick
Author

Bobpick commented May 11, 2024

Yes, during the initial install. When I ran it, it went for two hours before I ended it. I made the vault smaller, and the embedding still just hangs.
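When embedding a large vault hangs or dies partway through, one way to diagnose it is to embed line by line with progress output, collecting failures instead of crashing on the first error. A rough sketch, not the repo's actual loop; `embed_vault` and the injectable `embed_fn` parameter are my own names:

```python
def embed_vault(lines, embed_fn):
    """Embed each non-empty line via `embed_fn`, printing progress and
    collecting (index, error) pairs instead of aborting on the first
    failure (e.g. an ollama ResponseError)."""
    embeddings, failed = [], []
    for i, line in enumerate(lines):
        text = line.strip()
        if not text:
            continue  # skip blank lines rather than embedding them
        try:
            embeddings.append(embed_fn(text))
        except Exception as e:
            failed.append((i, str(e)))  # remember which line broke
        if (i + 1) % 50 == 0:
            print(f"embedded {i + 1}/{len(lines)} lines")
    return embeddings, failed
```

With the real client, `embed_fn` would be something like `lambda t: ollama.embeddings(model='mxbai-embed-large', prompt=t)['embedding']`; the `failed` list then shows whether the hang is one bad chunk or every request timing out.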

@jwai99

jwai99 commented May 12, 2024

I also have the same issue, running on Linux with an AMD GPU, using ROCm in a CUDA env.

[Screenshot from 2024-05-12 03-20-20]

Ah, I've fixed it using the patch from #14.

@vitorcalvi

Traceback (most recent call last):
  File "localrag.py", line 134, in <module>
    response = ollama.embeddings(model='mxbai-embed-large', prompt=content)
  File "/opt/homebrew/anaconda3/envs/RAG2/lib/python3.8/site-packages/ollama/_client.py", line 198, in embeddings
    return self._request(
  File "/opt/homebrew/anaconda3/envs/RAG2/lib/python3.8/site-packages/ollama/_client.py", line 73, in _request
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: failed to generate embedding
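Since the same `ResponseError` shows up here on a macOS/Homebrew setup too, another thing worth ruling out is that the Ollama daemon isn't running at all. A stdlib-only check (my own sketch; `http://localhost:11434` is Ollama's default API address, whose root endpoint answers with a plain 200 when the daemon is up):

```python
import urllib.request
import urllib.error

def ollama_reachable(url="http://localhost:11434", timeout=2.0):
    """Return True if something answers HTTP 200 at `url` within
    `timeout` seconds; False on refusal, timeout, or any socket error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Running this before the embedding loop separates "server down" from "model missing": if it returns False, start the daemon (`ollama serve`) before retrying.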
