leaderboard-pt-pr-bot committed · verified · Commit 1bd98b8 · Parent: 8111881

Adding the Open Portuguese LLM Leaderboard Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard

The purpose of this PR is to add evaluation results from the Open Portuguese LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard/discussions
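As a rough illustration (not part of this PR), the updated card can be inspected programmatically once the change is merged. The sketch below assumes the `huggingface_hub` library is available and that the model id is `Weni/WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT`, as implied by the raw-results path referenced in the diff.

```python
# Rough sketch (assumptions: huggingface_hub is installed and the model id
# is Weni/WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT) for inspecting the
# model card that this PR updates.
from huggingface_hub import ModelCard

card = ModelCard.load("Weni/WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT")
print(card.data)   # YAML front matter: tags, base_model, datasets, model-index, ...
print(card.text)   # Markdown body, including the leaderboard section added by this PR
```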

Files changed (1): README.md (+20 -2)
README.md CHANGED
@@ -5,9 +5,9 @@ tags:
 - trl
 - sft
 - generated_from_trainer
+base_model: HuggingFaceH4/zephyr-7b-beta
 datasets:
 - generator
-base_model: HuggingFaceH4/zephyr-7b-beta
 model-index:
 - name: WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT
   results: []
@@ -69,4 +69,22 @@ The following hyperparameters were used during training:
 - Transformers 4.37.0.dev0
 - Pytorch 2.1.0+cu118
 - Datasets 2.16.1
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
+
+# Open Portuguese LLM Leaderboard Evaluation Results
+
+Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/Weni/WeniGPT-2.2.3-Zephyr-7B-LLM_Base_2.0.3_SFT) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+| Metric                   |  Value  |
+|--------------------------|---------|
+|Average                   |**45.86**|
+|ENEM Challenge (No Images)|    23.58|
+|BLUEX (No Images)         |    28.79|
+|OAB Exams                 |    26.33|
+|Assin2 RTE                |    87.01|
+|Assin2 STS                |    28.33|
+|FaQuAD NLI                |    44.66|
+|HateBR Binary             |    66.91|
+|PT Hate Speech Binary     |    56.25|
+|tweetSentBR               |    50.88|
+
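For reference, the reported Average appears to be the unweighted mean of the nine task scores in the table above; a minimal Python sketch (not part of the diff) that reproduces the value:

```python
# Minimal sketch: recompute the leaderboard "Average" as the unweighted mean
# of the nine per-task scores shown in the table added by this PR.
scores = {
    "ENEM Challenge (No Images)": 23.58,
    "BLUEX (No Images)": 28.79,
    "OAB Exams": 26.33,
    "Assin2 RTE": 87.01,
    "Assin2 STS": 28.33,
    "FaQuAD NLI": 44.66,
    "HateBR Binary": 66.91,
    "PT Hate Speech Binary": 56.25,
    "tweetSentBR": 50.88,
}
average = sum(scores.values()) / len(scores)
print(f"Average: {average:.2f}")  # -> Average: 45.86, matching the reported value
```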