Merge branch 'main' into pre-schedule

Showing 61 changed files with 1,943 additions and 302 deletions.
# How to Support a New Model
To support a new model in SGLang, you only need to add a single file under the [SGLang Models Directory](https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/models). You can learn from existing model implementations and create new files for the new models. For most models, you should be able to find a similar model to start with (e.g., starting from Llama).
## Test the correctness
### Interactive debugging

For interactive debugging, you can compare the outputs of huggingface/transformers and SGLang. The following two commands should give the same text output and very similar prefill logits.

- Get the reference output: `python3 scripts/playground/reference_hf.py --model [new model]`
- Get the SGLang output: `python3 -m sglang.bench_latency --correct --model [new model]`
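What counts as "very similar prefill logits" can be checked numerically. Below is a minimal sketch of such a comparison in plain Python; the tolerance value and the sample numbers are illustrative assumptions, not SGLang defaults.

```python
# Illustrative check for "very similar prefill logits": compare the
# reference (huggingface/transformers) logits and the SGLang logits
# element-wise and look at the largest absolute difference.
def max_abs_diff(ref_logits, test_logits):
    assert len(ref_logits) == len(test_logits)
    return max(abs(a - b) for a, b in zip(ref_logits, test_logits))

ref = [1.02, -3.50, 0.75]  # e.g., prefill logits from reference_hf.py
out = [1.01, -3.52, 0.74]  # e.g., prefill logits from bench_latency
# 0.1 is an illustrative tolerance; pick one appropriate for your model.
print(max_abs_diff(ref, out) < 0.1)  # True
```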
### Add the model to the test suite

To make sure the new model is well maintained in the future, it is better to add it to the test suite. You can add it to the `ALL_OTHER_MODELS` list in [test_generation_models.py](https://github.com/sgl-project/sglang/blob/main/test/srt/models/test_generation_models.py) and run the following command to test it.

For example, if the model is Qwen/Qwen2-1.5B:
```
ONLY_RUN=Qwen/Qwen2-1.5B python3 -m unittest test_generation_models.TestGenerationModels.test_others
```
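The `ONLY_RUN` environment variable restricts which entries of the model list are exercised. The sketch below shows how such a filter typically works; the names are illustrative stand-ins, not the actual code in test_generation_models.py.

```python
import os

# Hypothetical stand-in for the ALL_OTHER_MODELS list in the test suite.
ALL_OTHER_MODELS = ["Qwen/Qwen2-1.5B", "google/gemma-2-2b"]

def models_to_run(all_models):
    # If ONLY_RUN is set, run only the matching model; otherwise run all.
    only_run = os.environ.get("ONLY_RUN")
    if only_run:
        return [m for m in all_models if m == only_run]
    return all_models

os.environ["ONLY_RUN"] = "Qwen/Qwen2-1.5B"
print(models_to_run(ALL_OTHER_MODELS))  # ['Qwen/Qwen2-1.5B']
```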
## Port a model from vLLM to SGLang

Another valuable resource is the [vLLM Models Directory](https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models). vLLM has extensive coverage of models, and SGLang reuses vLLM's interface and some layers to implement the models. This similarity makes it easy to port many models from vLLM to SGLang.

To port a model from vLLM to SGLang, you can compare these two files: the [SGLang Llama Implementation](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama.py) and the [vLLM Llama Implementation](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py). This comparison will help you understand how to convert a model implementation from vLLM to SGLang. The major difference is the replacement of `Attention` with `RadixAttention`. The other parts are almost identical. Specifically:
- Replace vLLM's `Attention` with `RadixAttention`. Note that you need to pass `layer_id` all the way down to `RadixAttention`.
- Replace vLLM's `LogitsProcessor` with SGLang's `LogitsProcessor`.
- Replace other vLLM layers with SGLang layers (e.g., `RMSNorm`, `SiluAndMul`).
- Remove `Sample`.
- Change the `forward()` functions, and add `input_metadata`.
- Add `EntryClass` at the end.
- Test correctness by comparing the final logits and outputs of the following two commands:
  - `python3 scripts/playground/reference_hf.py --model [new model]`
  - `python3 -m sglang.bench_latency --model [new model] --correct --output-len 16 --trust-remote-code`
- Update [Supported Models](https://github.com/sgl-project/sglang/tree/main?tab=readme-ov-file#supported-models) in the [README](../README.md).
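The `EntryClass` step above means each model file exports its model class through a module-level attribute so the loader can discover it. The sketch below shows how such a discovery mechanism can work; all names here are illustrative, not SGLang's actual loader code.

```python
# Illustrative sketch of module-level EntryClass discovery (hypothetical
# loader; SGLang's real mechanism may differ in details).
import types

class LlamaForCausalLM:
    """Stand-in model class for the sketch."""

# A model file exports its class via a module-level EntryClass attribute.
model_module = types.ModuleType("sglang_models_llama")
model_module.EntryClass = LlamaForCausalLM

def resolve_entry_class(module):
    # The loader looks up EntryClass to know which class to instantiate.
    entry = getattr(module, "EntryClass", None)
    if entry is None:
        raise ValueError(f"{module.__name__} does not define EntryClass")
    return entry

print(resolve_entry_class(model_module).__name__)  # LlamaForCausalLM
```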
""" | ||
Usage: | ||
python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --port 30000 | ||
python openai_chat.py | ||
""" | ||
|
||
import openai | ||
from openai import OpenAI | ||
|
||
client = openai.Client(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY") | ||
|
||
response = client.chat.completions.create( | ||
model="meta-llama/Meta-Llama-3.1-8B-Instruct", | ||
messages=[ | ||
{"role": "system", "content": "You are a helpful AI assistant"}, | ||
{ | ||
"role": "user", | ||
"content": """ | ||
Extract the name, size, price, and color from this product description as a JSON object: | ||
<description> | ||
The SmartHome Mini is a compact smart home assistant available in black or white for only $49.99. At just 5 inches wide, it lets you control lights, thermostats, and other connected devices via voice or app—no matter where you place it in your home. This affordable little hub brings convenient hands-free control to your smart devices. | ||
</description> | ||
""", | ||
}, | ||
{ | ||
"role": "assistant", | ||
"content": "{\n", | ||
}, | ||
], | ||
temperature=0, | ||
) | ||
|
||
print(response.choices[0].message.content) |
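Because the final assistant message is pre-filled with `"{\n"`, the model continues the JSON object from there, and the printed content does not include the prefill itself. A minimal sketch of reassembling the full object (the reply text below is illustrative, not an actual server response):

```python
import json

# The assistant turn was pre-filled with "{\n", so the server's reply
# continues the JSON from that point. Rebuild the full object by
# prepending the prefill before parsing. The reply is a made-up example.
prefill = "{\n"
reply = '  "name": "SmartHome Mini",\n  "size": "5 inches",\n  "price": "$49.99",\n  "color": ["black", "white"]\n}'
product = json.loads(prefill + reply)
print(product["name"])  # SmartHome Mini
```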