jerryzh168 committed
Commit b75ca55 · verified · 1 parent: 16f4572

Update README.md

Files changed (1): README.md (+2 -5)
README.md CHANGED
@@ -314,6 +314,8 @@ We benchmarked the throughput in a serving environment.
 
 Download sharegpt dataset: `wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json`
 Other datasets can be found in: https://github.com/vllm-project/vllm/tree/main/benchmarks
+
+Note: you can change the number of prompts to be benchmarked with `--num-prompts` argument for `benchmark_serving` script.
 ### baseline
 Server:
 ```Shell
@@ -333,11 +335,6 @@ VLLM_DISABLE_COMPILE_CACHE=1 vllm serve pytorch/Phi-4-mini-instruct-float8dq --t
 
 Client:
 ```Shell
-python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model jerryzh168/phi4-mini-float8dq
-```
-
-Or
-```Shell
 python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model jerryzh168/phi4-mini-float8dq --num-prompts 1
 ```
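
As a usage sketch beyond the diff itself: the client command above accepts any prompt count via `--num-prompts`; the value 100 below is purely illustrative and not part of the commit.

```Shell
# Illustrative sketch: the same client benchmark command as in the README,
# but with an arbitrary prompt count (100 here) instead of 1.
python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt \
  --tokenizer microsoft/Phi-4-mini-instruct \
  --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json \
  --model jerryzh168/phi4-mini-float8dq \
  --num-prompts 100
```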