chattr hanging while running llamagpt #50

Open
EdwardJ1n opened this issue Oct 23, 2023 · 0 comments
Hi team,

I want to build an AI-based IDE, inspired by the post "GitHub Copilot in RStudio, it's finally here!".

In my test environment, I'm running Posit Workbench in Docker.

I tried R versions 4.3.1 and 4.2.3; on both, chattr hangs while running llamagpt.

library(chattr)
chattr_use("llamagpt")
chattr_defaults(path = "/llama/chat-ubuntu-latest-avx2", model = "/llama/ggml-gpt4all-j-v1.3-groovy.bin")
chattr_defaults(type = "chat", path = "/llama/chat-ubuntu-latest-avx2", model = "/llama/ggml-gpt4all-j-v1.3-groovy.bin")
chattr_defaults(type = "console", path = "/llama/chat-ubuntu-latest-avx2", model = "/llama/ggml-gpt4all-j-v1.3-groovy.bin")
chattr_defaults(type = "notebook", path = "/llama/chat-ubuntu-latest-avx2", model = "/llama/ggml-gpt4all-j-v1.3-groovy.bin")
chattr_defaults(type = "script", path = "/llama/chat-ubuntu-latest-avx2", model = "/llama/ggml-gpt4all-j-v1.3-groovy.bin")
chattr::chattr_app()
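
As a possible sanity check on this configuration (sketch only; assuming the chattr_test() helper is available in the chattr version installed below):

# Hedged check: if chattr_test() exists in this chattr version, it should try the
# configured llamagpt back end and report whether it responds.
library(chattr)
chattr_use("llamagpt")
chattr_test()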

Running the following command directly in the terminal works without any issue:

./chat-ubuntu-latest-avx2 -m "./ggml-gpt4all-j-v1.3-groovy.bin" -t 4
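
To help narrow down whether the hang is in the chat binary itself or in the R-side process handling, here is a rough sketch that drives the same binary from R with {processx} (the paths are the ones from my setup; how chattr actually manages the process may differ):

# Start the same chat binary chattr points at, with pipes for stdin/stdout.
library(processx)
p <- processx::process$new(
  "/llama/chat-ubuntu-latest-avx2",
  c("-m", "/llama/ggml-gpt4all-j-v1.3-groovy.bin", "-t", "4"),
  stdin = "|", stdout = "|", stderr = "2>&1"
)

# Send a prompt, wait a while, then read whatever the process produced so far.
p$write_input("what is llama2\n")
Sys.sleep(60)
cat(p$read_output())
p$kill()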

I installed the R package with:

install.packages("remotes")
remotes::install_github("mlverse/chattr")
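
For reference, the development version that ends up installed can be checked afterwards:

# Report which version of chattr was installed from GitHub.
packageVersion("chattr")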

Sometimes the following warning shows up:
Warning in readRDS(.) : invalid or incomplete compressed data

It looks like the model loads successfully:

chattr("what is llama2")

── chattr ──

── Initializing model
LlamaGPTJ-chat (v. 0.3.0)
Your computer supports AVX2
LlamaGPTJ-chat: loading /home/[email protected]/llama/ggml-gpt4all-j-v1.3-groovy.bin
gptj_model_load: loading model from '/home/[email protected]/llama/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
gptj_model_load: kv self size = 896.00 MB
gptj_model_load: ............................................ done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
LlamaGPTJ-chat: done loading!

However, the expected response never appears after that; the call just hangs.

Can you take a look at it?

Thanks,
Edward
