Replies: 3 comments
-
We think that using open-source models is the true form of AI democratization. The current problem is that the versions that don't require very expensive hardware ARE NOT COMPARABLE TO CHATGPT. For example, StableLM 7B gives nice but not comparable results. We tried OpenAssistant/oasst-sft-6-llama-30b, the model behind HuggingChat (performance similar to ChatGPT), but it requires 72 GB of VRAM, and just under 32 GB when loaded in 8-bit. Not everyone can afford PCs with these specs, and spending money on virtual machines on Google Cloud or AWS is still expensive, even just for the test phases of a project. As long as it was simple projects like suggesting 5 SEO-optimized titles, you could spend that $5 to $10 on virtual machines or API calls. But now that we work with autonomous agents, tens, hundreds, thousands of inferences are made against the LLM. So developing an autonomous agent to bring to market may require hundreds of dollars just for the development phase. We need new open-source solutions 👨💻
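To make those VRAM numbers concrete, here's a rough back-of-the-envelope sketch (an assumption on my part: it counts weights only, ignoring activations, KV cache, and framework overhead, so real usage is higher):

```python
def estimate_weight_vram_gb(num_params: float, bytes_per_param: float) -> float:
    """Rough VRAM needed just to hold the model weights in memory."""
    return num_params * bytes_per_param / 1024**3

# A 30B-parameter model like oasst-sft-6-llama-30b:
fp16_gb = estimate_weight_vram_gb(30e9, 2)  # ~55.9 GB in fp16
int8_gb = estimate_weight_vram_gb(30e9, 1)  # ~27.9 GB in 8-bit
print(f"fp16: {fp16_gb:.1f} GB, int8: {int8_gb:.1f} GB")
```

The int8 figure lines up with the "just under 32 GB" we observed once you add runtime overhead on top of the raw weights.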
-
What about the new StableVicuna or WizardLM?
-
I haven't thought about the VRAM requirements much, because I have spare GPUs (a 3060 and a 3080) and a motherboard that can run 5 GPUs at the same time.
-
like StableVicuna, GPT4All, etc.