---
layout: default
title: Home
notitle: true
---

# MLC LLM

MLC LLM is a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications, plus a productive framework for everyone to further optimize model performance for their own use cases. Everything runs locally with no server support, accelerated by the local GPU on your phone or laptop. Check out our GitHub repository to see how we did it, or read through the instructions below to try out the demos.

## Try it out

This section contains instructions for running large language models and chatbots natively in your environment.

### iPhone

Try out this TestFlight page (limited to the first 9000 users) to install and use our example iOS chat app built for iPhone. The app itself needs about 4GB of memory to run; accounting for iOS and other running applications, a recent iPhone with 6GB (or more) of memory is required. We have only tested the application on the iPhone 14 Pro Max and iPhone 12 Pro. You can also check out our GitHub repo to build the iOS app from source.

Note: The text generation speed of the iOS app can be unstable from time to time. It might run slowly at the beginning and then recover to normal speed.

### Windows, Linux, and Mac

We provide a CLI (command-line interface) app to chat with the bot in your terminal. Before installing the CLI app, install the following dependencies:

1. We use Conda to manage our app, so you will need a conda distribution; either Miniconda or Miniforge works.
2. On Windows and Linux, the chatbot application runs on the GPU via the Vulkan platform, so please install the latest Vulkan driver. This includes NVIDIA GPU users: make sure the Vulkan driver is installed, as the CUDA driver alone may not provide Vulkan support. A quick way to verify both dependencies is shown after this list.
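
Before moving on, you can optionally sanity-check both dependencies from a terminal. This is a minimal sketch of our own; it assumes `vulkaninfo` from the vulkan-tools package (or the Vulkan SDK) is installed, which is not part of MLC LLM itself.

```bash
# Confirm that conda is installed and on your PATH.
conda --version

# Confirm that a Vulkan loader and driver are visible. vulkaninfo ships with
# vulkan-tools / the Vulkan SDK; if your build lacks --summary, plain
# vulkaninfo works too (with much more verbose output).
vulkaninfo --summary
```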

After installing all the dependencies, follow the instructions below to install the CLI app:

```bash
# Create a new conda environment and activate it.
conda create -n mlc-chat
conda activate mlc-chat

# Install Git and Git-LFS, which are used for downloading the model weights
# from Hugging Face.
conda install git git-lfs

# Install the chat CLI app from Conda.
conda install -c mlc-ai -c conda-forge mlc-chat-nightly

# Create a directory, download the model weights from Hugging Face, and
# download the binary libraries from GitHub.
mkdir -p dist
git lfs install
git clone https://huggingface.co/mlc-ai/demo-vicuna-v1-7b-int3 dist/vicuna-v1-7b
git clone https://github.com/mlc-ai/binary-mlc-llm-libs.git dist/lib

# Enter this line and enjoy chatting with the bot running natively on your machine!
mlc_chat_cli
```
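
One common pitfall with the steps above is Git-LFS silently leaving small pointer files in place of the actual weights (for example, if `git lfs install` was skipped). The following check is our own addition, not part of the official instructions; it assumes the `dist/` layout created above and that `mlc_chat_cli` is launched from the directory containing `dist/`.

```bash
# List the LFS-managed files in the model repo. Entries marked "*" were
# downloaded; entries marked "-" are pointer files only.
git -C dist/vicuna-v1-7b lfs ls-files

# The quantized weights should total a few GB; a size in the KB range means
# only pointer files were cloned.
du -sh dist/vicuna-v1-7b
```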

### Web Browser

Please check out WebLLM, our companion project that deploys models natively to browsers. Everything runs inside the browser with no server support and is accelerated with WebGPU.

## Links

- Check out our GitHub repo to see how we build, optimize, and deploy large language models to various devices and backends.
- Check out our companion project WebLLM to run the chatbot purely in your browser.
- You might also be interested in Web Stable Diffusion, which runs the Stable Diffusion model purely in the browser.
- You might want to check out our online public Machine Learning Compilation course for a systematic walkthrough of our approaches.

## Disclaimer

The pre-packaged demos are for research purposes only, subject to the model license.