[perf]: support multistream overlap(dbo) for deepseek #941

Open · wants to merge 4 commits into main from dev_multistream_overlap
Conversation

@zxdukki commented May 23, 2025

What this PR does / why we need it?

Based on the dual-batch overlap (DBO) design proposed by the DeepSeek team and the fused MoE implementation in the vLLM project, we implement multi-stream (also known as dual-batch) overlap for DeepSeek + MLA on Ascend NPU. We split the model's input batch into two micro-batches and overlap the computation and communication ops in the attention and MoE layers across two streams to improve performance. The approach extends naturally once dispatch/combine communications are added to the MoE layer.
Compared with the previously proposed draft, we dedicate one stream to computation ops and the other to communication ops. In our opinion, this makes it easier to arrange the execution order of different ops and avoids contention between computation and communication resources.
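
For illustration, here is a minimal sketch of the two-stream idea, not the actual code in this PR. It assumes torch_npu's stream API (which mirrors torch.cuda); the layer methods `attn_compute`, `moe_compute`, and `comm_all_reduce` are hypothetical placeholders for the real attention/MoE/communication ops.

```python
# Minimal sketch of dual-batch overlap: one dedicated compute stream and one
# dedicated communication stream. Assumes torch_npu's stream API (mirrors
# torch.cuda); `attn_compute`, `moe_compute`, `comm_all_reduce` are
# hypothetical placeholders, not the operators wired up by this PR.
import torch
import torch_npu  # noqa: F401  (registers the torch.npu namespace)

comp_stream = torch.npu.Stream()  # computation ops
comm_stream = torch.npu.Stream()  # communication ops

def forward_with_dbo(layer, hidden_states: torch.Tensor) -> torch.Tensor:
    cur = torch.npu.current_stream()
    # Split the input batch into two micro-batches along the token dim.
    micro_batches = hidden_states.chunk(2, dim=0)
    comp_stream.wait_stream(cur)

    outputs = []
    for mb in micro_batches:
        with torch.npu.stream(comp_stream):
            h = layer.attn_compute(mb)
            h = layer.moe_compute(h)
        # The comm stream only waits for work enqueued on the compute stream
        # so far, so the *next* micro-batch's compute overlaps this comm.
        comm_stream.wait_stream(comp_stream)
        with torch.npu.stream(comm_stream):
            outputs.append(layer.comm_all_reduce(h))

    cur.wait_stream(comm_stream)
    return torch.cat(outputs, dim=0)
```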

Note that this PR is still in progress; benchmark results will be updated soon.

ref: overlap for llama
ref: dbo in sglang

Does this PR introduce any user-facing change?

Yes. This PR adds an environment variable, VLLM_ENABLE_DBO. Users can enable DBO by setting VLLM_ENABLE_DBO=1.
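
As a rough sketch of the gating (the exact module where vllm-ascend reads this flag is not shown here, so the names below are hypothetical):

```python
import os

# Hypothetical gate for the new env variable; the actual attribute and
# module names inside vllm-ascend may differ.
VLLM_ENABLE_DBO: bool = os.getenv("VLLM_ENABLE_DBO", "0") == "1"

def select_forward(dbo_forward, default_forward):
    """Use the dual-batch-overlap path only when VLLM_ENABLE_DBO=1."""
    return dbo_forward if VLLM_ENABLE_DBO else default_forward
```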

How was this patch tested?

This patch can be tested with vllm 0.8.5 by running its online serving endpoint with benchmark tests. We have decoupled the DBO functionality from vLLM, so it should run without any modification to vLLM's code (although some of the changes would be better implemented inside vLLM).

Any advice/discussion is welcome.

Performance Benchmark

We ran vLLM's benchmark_serving script to measure performance with dual-batch overlap enabled.

```
python -m vllm.entrypoints.openai.api_server \
  --model=DeepSeek-R1-W8A8 \
  --trust-remote-code \
  --distributed-executor-backend=mp \
  -tp=16 \
  --port 8006 \
  --max-num-seqs 390 \
  --max-model-len 32768 \
  --max-num-batched-tokens 65536 \
  --block-size 128 \
  --compilation_config 0 \
  --gpu-memory-utilization 0.90 \
  --disable-log-requests \
  --additional-config '{"expert_tensor_parallel_size":1,"enable_inter_dp_scheduling":true,"init_torchair_graph_batch_sizes":true,"trace_recompiles":true,"ascend_scheduler_config":{},"enable_graph_mode":false}'
```

and ran the benchmark with the following parameters:
--dataset-name random --random-input-len 4096 --random-output-len 1 --num-prompts 200 --max-concurrency 8 --request-rate 5 --metric-percentiles 90

  1. Test with the version not using all-to-all on Ascend A2 (tp16 ep16 + DeepSeek R1 W8A8):

Prefill QPS: 2.17 -> 2.60

  2. Test with the version using all-to-all (can be further optimized by, e.g., overlapping micro-batch 1's MoE computation with micro-batch 2's dispatch all-to-all communication; see the sketch after the results below):

Prefill QPS: 0.90 -> 1.01
Mean TTFT: 8226 ms -> 7432 ms
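
The further optimization mentioned in item 2 could be scheduled roughly as below: dispatch micro-batch 0 first, then enqueue micro-batch 1's dispatch on the communication stream while micro-batch 0's expert computation runs on the compute stream. This is only a hedged sketch; `dispatch_a2a` and `moe_compute` are hypothetical callables, and the combine all-to-all is omitted for brevity.

```python
# Sketch of overlapping micro-batch 1's dispatch all-to-all with
# micro-batch 0's MoE (expert) computation. `dispatch_a2a` and `moe_compute`
# are hypothetical stand-ins; the combine all-to-all is omitted.
import torch
import torch_npu  # noqa: F401  (registers the torch.npu namespace)

comp_stream = torch.npu.Stream()
comm_stream = torch.npu.Stream()

def moe_forward_overlapped(mb0, mb1, dispatch_a2a, moe_compute):
    cur = torch.npu.current_stream()
    comm_stream.wait_stream(cur)

    # (1) Dispatch micro-batch 0 on the communication stream.
    with torch.npu.stream(comm_stream):
        mb0_disp = dispatch_a2a(mb0)

    # The compute stream only needs micro-batch 0's dispatch to finish.
    comp_stream.wait_stream(comm_stream)

    # (2) Dispatch micro-batch 1 ...
    with torch.npu.stream(comm_stream):
        mb1_disp = dispatch_a2a(mb1)

    # (3) ... while micro-batch 0's experts run concurrently with it.
    with torch.npu.stream(comp_stream):
        mb0_out = moe_compute(mb0_disp)

    # (4) Micro-batch 1's experts start once its dispatch has completed.
    comp_stream.wait_stream(comm_stream)
    with torch.npu.stream(comp_stream):
        mb1_out = moe_compute(mb1_disp)

    cur.wait_stream(comp_stream)
    return mb0_out, mb1_out
```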

@zxdukki zxdukki force-pushed the dev_multistream_overlap branch from 943d296 to 68070f1 Compare May 23, 2025 16:05
@zxdukki zxdukki force-pushed the dev_multistream_overlap branch from b0eed8a to 9053dd1 Compare May 27, 2025 14:21
@zxdukki zxdukki marked this pull request as ready for review May 28, 2025 05:06
zxdukki added 2 commits May 28, 2025 14:05
Signed-off-by: zhuohuan <zxdu1997@gmail.com>
Signed-off-by: zhuohuan <zxdu1997@gmail.com>
@zxdukki zxdukki changed the title [feat][WIP]: support multistream overlap(dbo) for deepseek [perf]: support multistream overlap(dbo) for deepseek May 28, 2025
@zxdukki zxdukki force-pushed the dev_multistream_overlap branch from c31fa28 to 4e35808 Compare May 28, 2025 09:10
Signed-off-by: zhuohuan <zxdu1997@gmail.com>
@zxdukki zxdukki force-pushed the dev_multistream_overlap branch 3 times, most recently from 05ca45a to 58186c3 Compare May 28, 2025 13:36
Signed-off-by: zhuohuan <zxdu1997@gmail.com>
@zxdukki zxdukki force-pushed the dev_multistream_overlap branch from 58186c3 to cfe6a5a Compare May 28, 2025 13:57