jerryzh168 committed (verified) · Commit 2bea06d · 1 Parent(s): 8355945

Update README.md

Files changed (1): README.md (+6 −1)
README.md CHANGED

@@ -312,7 +312,12 @@ VLLM_DISABLE_COMPILE_CACHE=1 python benchmarks/benchmark_latency.py --input-len
 
 We benchmarked the throughput in a serving environment.
 
-Download sharegpt dataset: `wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json`
+Download sharegpt dataset:
+
+```Shell
+wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
+```
+
 Other datasets can be found in: https://github.com/vllm-project/vllm/tree/main/benchmarks
 
 Note: you can change the number of prompts to be benchmarked with `--num-prompts` argument for `benchmark_serving` script.
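Once the dataset is downloaded, it is fed to the serving-benchmark script the diff refers to. The sketch below shows one plausible invocation; the flag names (`--dataset-path`, `--num-prompts`) are assumptions based on common vLLM benchmark usage and may differ between vLLM versions, so verify them with `--help`. The command is printed rather than executed, since a running vLLM server is required for a real benchmark.

```shell
# Hypothetical invocation sketch for vLLM's serving benchmark.
# Flag names are assumptions; confirm with:
#   python benchmarks/benchmark_serving.py --help
DATASET=ShareGPT_V3_unfiltered_cleaned_split.json

# Build the command; --num-prompts controls how many prompts are benchmarked,
# as noted in the README change above.
CMD="python benchmarks/benchmark_serving.py --dataset-path $DATASET --num-prompts 200"

# Print instead of running: a live vLLM server must already be serving the model.
echo "$CMD"
```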