Update README.md
README.md CHANGED
@@ -66,7 +66,7 @@ print(tokenizer.decode(model.generate(**tokenizer("There is a girl who likes adv
 Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source; we used git id f3b7917091afba325af3980a35d8a6dcba03dc3f

 ```bash
-lm_eval --model hf --model_args pretrained="Intel/neural-chat-7b-v3-3-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge --batch_size 128
+lm_eval --model hf --model_args pretrained="Intel/neural-chat-7b-v3-3-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge,mmlu --batch_size 128
 ```

 | Metric | FP16 | INT4 |
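For context, the "install from source at a pinned commit" step referenced in the hunk above could look roughly like the sketch below. Only the repository URL and the git id come from the README; the clone-and-editable-install workflow is an assumption about the usual lm-eval-harness setup, not something stated in this diff.

```bash
# Sketch (assumed workflow): fetch lm-eval-harness and pin it to the commit id from the README
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout f3b7917091afba325af3980a35d8a6dcba03dc3f
# Editable install so the pinned source tree is what `lm_eval` runs against
pip install -e .
```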