Adding the Open Portuguese LLM Leaderboard Evaluation Results

#15
Files changed (1)
  1. README.md +25 -6
README.md CHANGED
@@ -1,5 +1,7 @@
  ---
- base_model: 01-ai/Yi-34B
+ language:
+ - en
+ license: apache-2.0
  tags:
  - yi
  - instruct
@@ -8,14 +10,12 @@ tags:
  - gpt4
  - synthetic data
  - distillation
+ base_model: 01-ai/Yi-34B
+ datasets:
+ - teknium/OpenHermes-2.5
  model-index:
  - name: Nous-Hermes-2-Yi-34B
    results: []
- license: apache-2.0
- language:
- - en
- datasets:
- - teknium/OpenHermes-2.5
  ---

  # Nous Hermes 2 - Yi-34B
@@ -212,3 +212,22 @@ In LM-Studio, simply select the ChatML Prefix on the settings side pane:
  GGUF: https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B-GGUF

  [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+
+ # Open Portuguese LLM Leaderboard Evaluation Results
+
+ Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/NousResearch/Nous-Hermes-2-Yi-34B) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+ | Metric                   |  Value  |
+ |--------------------------|---------|
+ |Average                   |**72.42**|
+ |ENEM Challenge (No Images)|    73.13|
+ |BLUEX (No Images)         |    65.79|
+ |OAB Exams                 |    55.99|
+ |Assin2 RTE                |    92.15|
+ |Assin2 STS                |    79.85|
+ |FaQuAD NLI                |    76.05|
+ |HateBR Binary             |    77.04|
+ |PT Hate Speech Binary     |    66.08|
+ |tweetSentBR               |    65.69|
+
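For reference, the **Average** row in the added table is the plain arithmetic mean of the nine per-task scores; a quick check of the numbers in this PR:

```python
# Arithmetic mean of the nine per-task scores from the table above.
scores = {
    "ENEM Challenge (No Images)": 73.13,
    "BLUEX (No Images)": 65.79,
    "OAB Exams": 55.99,
    "Assin2 RTE": 92.15,
    "Assin2 STS": 79.85,
    "FaQuAD NLI": 76.05,
    "HateBR Binary": 77.04,
    "PT Hate Speech Binary": 66.08,
    "tweetSentBR": 65.69,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 72.42, matching the Average row in the table
```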
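To browse the raw per-task result files linked above, one minimal sketch (assuming the `huggingface_hub` Python package; the exact file layout inside the results dataset is not shown in this PR) is to download only this model's subfolder and list what comes down:

```python
# Sketch: download the raw-results subfolder for this model from the
# leaderboard's results dataset and list the retrieved files.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub).
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="eduagarcia-temp/llm_pt_leaderboard_raw_results",
    repo_type="dataset",
    allow_patterns=["NousResearch/Nous-Hermes-2-Yi-34B/*"],
)

# Print every downloaded file, relative to the local snapshot directory.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```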