akumaburn committed on
Commit 7967f59 · verified · 1 Parent(s): 7d6a87d

Update README.md

Files changed (1):
README.md +16 -12
README.md CHANGED
@@ -35,24 +35,28 @@ datasets:
  Some GGUF quantizations are included as well.
  
  mistral-7b-openorca.Q8_0.gguf:
- - **MMLU-Test:** Final result: **41.5836 +/- 0.4174**
- - **Arc-Easy:** Final result: 72.6316 +/- 1.8691
- - **Truthful QA:** Final result: **32.0685 +/- 1.6339**
+ - **MMLU-Test:** Final result: **41.5836 +/- 0.4174**
+ - **Arc-Easy:** Final result: 72.6316 +/- 1.8691
+ - **Truthful QA:** Final result: **32.0685 +/- 1.6339**
+ - **Arc-Challenge:** Final result: **48.8294 +/- 2.8956**
  
  llama-3-8b-bnb-4bit.Q8_0.gguf:
- - **MMLU-Test:** Final result: 40.4074 +/- 0.4156
- - **Arc-Easy:** Final result: 73.8596 +/- 1.8421
- - **Truthful QA:** Final result: 26.6830 +/- 1.5484
+ - **MMLU-Test:** Final result: 40.4074 +/- 0.4156
+ - **Arc-Easy:** Final result: 73.8596 +/- 1.8421
+ - **Truthful QA:** Final result: 26.6830 +/- 1.5484
+ - **Arc-Challenge:** Final result: 46.8227 +/- 2.8906
  
  **Open_Orca_Llama-3-8B-unsloth.Q8_0.gguf**:
- - **MMLU-Test:** Final result: 39.3818 +/- 0.4138
- - **Arc-Easy:** Final result: 67.3684 +/- 1.9656
- - **Truthful QA:** Final result: 29.0086 +/- 1.5886
+ - **MMLU-Test:** Final result: 39.3818 +/- 0.4138
+ - **Arc-Easy:** Final result: 67.3684 +/- 1.9656
+ - **Truthful QA:** Final result: 29.0086 +/- 1.5886
+ - **Arc-Challenge:** Final result: 42.1405 +/- 2.8604
  
  Meta-Llama-3-8B.Q8_0.gguf:
- - **MMLU-Test:** Final result: 40.8664 +/- 0.4163
- - **Arc-Easy:** Final result: **74.3860 +/- 1.8299**
- - **Truthful QA:** Final result: 28.6414 +/- 1.5826
+ - **MMLU-Test:** Final result: 40.8664 +/- 0.4163
+ - **Arc-Easy:** Final result: **74.3860 +/- 1.8299**
+ - **Truthful QA:** Final result: 28.6414 +/- 1.5826
+ - **Arc-Challenge:** Final result: 47.1572 +/- 2.8917
  
  Llama.cpp Options For Testing:
  --samplers "tfs;typical;temp" --draft 32 --ctx-size 8192 --temp 0.82 --tfs 0.8 --typical 1.1 --repeat-last-n 512 --batch-size 8192 --repeat-penalty 1.0 --n-gpu-layers 100 --threads 12
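For reference, the "Llama.cpp Options For Testing" line is a set of llama.cpp command-line flags. Below is a minimal sketch of how those flags could be combined into a single invocation; the binary name (`llama-cli`, called `main` in older llama.cpp builds), the choice of model file, and the prompt are illustrative assumptions, and only the flags themselves come from this README.

```bash
# Sketch only, not the author's exact command: one of the listed GGUF files
# loaded into llama.cpp with the sampling/runtime options quoted above.
# Binary name, model path, and prompt are placeholders.
./llama-cli \
  -m mistral-7b-openorca.Q8_0.gguf \
  --samplers "tfs;typical;temp" \
  --draft 32 \
  --ctx-size 8192 \
  --temp 0.82 \
  --tfs 0.8 \
  --typical 1.1 \
  --repeat-last-n 512 \
  --batch-size 8192 \
  --repeat-penalty 1.0 \
  --n-gpu-layers 100 \
  --threads 12 \
  -p "Example prompt"
```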