
Added documentation on using other models via litellm. #16

Merged: 8 commits into main on Jan 26, 2024

Conversation

@cmungall (Member):

See also #15

@justaddcoffee (Member):

Possibly out of scope for this PR, but it looks like GitHub Actions is using Python 3.8 here, while the project requires Python 3.9 or better.
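(As a hypothetical illustration, not something in this PR or the repo: a runtime guard like the sketch below would make that mismatch fail loudly instead of surfacing as an obscure syntax or import error.)

    # Hypothetical sketch: fail fast when the interpreter is older than the
    # project's declared minimum (Python >= 3.9).
    import sys

    if sys.version_info < (3, 9):
        raise RuntimeError(
            f"curate-gpt requires Python 3.9+, found Python {sys.version.split()[0]}"
        )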

@caufieldjh (Member):

With some of these models there may be no response, which causes errors.
I got the following with the example question and collection, but using the phi model through ollama:

DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 0.0.0.0:8000
DEBUG:urllib3.connectionpool:http://0.0.0.0:8000 "POST /chat/completions HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=http://0.0.0.0:8000/chat/completions processing_ms=None request_id=None response_code=200
Traceback (most recent call last):
  File "/home/harry/curate-gpt/.venv/bin/curategpt", line 6, in <module>
    sys.exit(main())
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/harry/curate-gpt/src/curate_gpt/cli.py", line 1250, in ask
    response = chatbot.chat(query, collection=collection, conversation=conversation)
  File "/home/harry/curate-gpt/src/curate_gpt/agents/chat_agent.py", line 143, in chat
    response_text = response.text()
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/llm/models.py", line 112, in text
    self._force()
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/llm/models.py", line 106, in _force
    list(self)
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/llm/models.py", line 91, in __iter__
    for chunk in self.model.execute(
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/llm/default_plugins/openai_models.py", line 292, in execute
    response.response_json = combine_chunks(chunks)
  File "/home/harry/curate-gpt/.venv/lib/python3.10/site-packages/llm/default_plugins/openai_models.py", line 413, in combine_chunks
    content += choice["delta"]["content"]
TypeError: can only concatenate str (not "NoneType") to str

Just need a fix to make that failure more obvious.
Any preferences on whether to put that in its own PR?
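(A minimal sketch of the kind of guard that would avoid the crash, assuming the chunk structure shown in the traceback; the helper name is hypothetical and this is not code from the PR.)

    # Sketch: tolerate None deltas when concatenating streamed chunks.
    # Mirrors the failing line in llm's combine_chunks, where a model may
    # stream a chunk whose delta has content=None.
    def combine_content(choices):
        content = ""
        for choice in choices:
            delta_content = choice.get("delta", {}).get("content")
            if delta_content is not None:
                content += delta_content
        return content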

@caufieldjh (Member):

I get this same issue with mistral-7B.

@caufieldjh (Member):

I think the ollama run command may not be necessary if the ollama server is already running, though.
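(A hypothetical sketch of how the docs could check for a running server first; 11434 is ollama's default port, and none of this is code from the PR.)

    # Sketch: probe whether an ollama server is already listening before
    # telling users to start one.
    import socket

    def ollama_is_running(host="localhost", port=11434, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if not ollama_is_running():
        print("No server detected; start one with: ollama serve")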

@cmungall merged commit e760b91 into main on Jan 26, 2024
0 of 3 checks passed
cmungall added a commit to INCATools/ontology-access-kit that referenced this pull request Mar 8, 2024
cmungall added a commit to INCATools/ontology-access-kit that referenced this pull request Mar 9, 2024
* Adding a validate_mappings implementation for LLMImplementation

* format

* LLM guide

* Adding docs on how to use other models. See also monarch-initiative/curate-gpt#16

* Implementing add/remove subset for all impls

* Update use-llms.rst