xzuyn committed

Commit 82afbb7 (parent: 4c348a8)

Update README.md

Files changed (1): README.md (+110 −1)
README.md CHANGED
@@ -4,7 +4,116 @@ language:
 base_model:
 - unsloth/Llama-3.2-3B-Instruct
 license: llama3.2
+model-index:
+- name: LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 62.92
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 23.34
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 11.33
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 3.02
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 4.87
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 23.5
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B
+      name: Open LLM Leaderboard
 ---
 A much more extensively trained version, this time done with full finetuning instead of DoRA. It uses a similar ~50/50 mix of completion and instruct data.
 
-Note: This likely has refusals like [PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B) since no focus was put on removing refusals. I'm working on a KTO DoRA to solve this, and possibly improve roleplay performance.
+Note: This likely has refusals like [PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B), since no effort was put into removing refusals. I'm working on a KTO DoRA to address this and possibly improve roleplay performance.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B-details).
+
+| Metric              | Value |
+|---------------------|------:|
+| Avg.                | 21.50 |
+| IFEval (0-Shot)     | 62.92 |
+| BBH (3-Shot)        | 23.34 |
+| MATH Lvl 5 (4-Shot) | 11.33 |
+| GPQA (0-shot)       |  3.02 |
+| MuSR (0-shot)       |  4.87 |
+| MMLU-PRO (5-shot)   | 23.50 |
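
For reference, a minimal sketch of loading the checkpoint described above with the standard `transformers` chat-template API. The repo id comes from the card; the dtype, device placement, and sampling settings are illustrative assumptions, not settings the author specifies.

```python
# Minimal sketch (assumptions noted inline): load the model and run one chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # assumption: let transformers pick the checkpoint dtype
    device_map="auto",   # assumption: requires the accelerate package
)

messages = [{"role": "user", "content": "Write a short scene set in a tavern."}]
# Build the prompt with the model's chat template and move it to the model device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,      # assumption: illustrative sampling settings
    temperature=0.8,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```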