willnguyen committed
Commit cde5a89 · 1 parent: 8c88aad

Update README.md

Files changed (1):
  1. README.md +15 -14
README.md CHANGED
@@ -23,6 +23,21 @@ base_model: meta-llama/llama-2-7b-hf
 
 **Fine-tuned from:** LLama2
 
+ [GitHub](https://github.com/MavosAI/LacDa)
+
+ ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 43.91 |
+ | ARC (25-shot)       | 53.07 |
+ | HellaSwag (10-shot) | 77.57 |
+ | MMLU (5-shot)       | 46.03 |
+ | TruthfulQA (0-shot) | 44.57 |
+ | Winogrande (5-shot) | 74.19 |
+ | GSM8K (5-shot)      | 6.29  |
+ | DROP (3-shot)       | 5.65  |
+
 ## Instruction format
 
 ```python
@@ -84,17 +99,3 @@ with torch.inference_mode():
         stopping_criteria=stopping_criteria
     )
 ```
-
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_willnguyen__lacda-2-7B-chat-v0.1)
-
- | Metric              | Value |
- |---------------------|-------|
- | Avg.                | 43.91 |
- | ARC (25-shot)       | 53.07 |
- | HellaSwag (10-shot) | 77.57 |
- | MMLU (5-shot)       | 46.03 |
- | TruthfulQA (0-shot) | 44.57 |
- | Winogrande (5-shot) | 74.19 |
- | GSM8K (5-shot)      | 6.29  |
- | DROP (3-shot)       | 5.65  |
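
For reference, the Avg. row in the table matches the unweighted mean of the seven benchmark scores: (53.07 + 77.57 + 46.03 + 44.57 + 74.19 + 6.29 + 5.65) / 7 ≈ 43.91.

The second hunk shows only the tail of the README's generation example (a `stopping_criteria=stopping_criteria` argument closed under `with torch.inference_mode():`). As a minimal sketch of how such a call is typically wired up with `transformers`, assuming the repo id `willnguyen/lacda-2-7B-chat-v0.1` (inferred from the leaderboard details link) and a hypothetical `StopOnTokens` criterion; the prompt string is illustrative only, and the README's own "Instruction format" section defines the real prompt template:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)


class StopOnTokens(StoppingCriteria):
    """Hypothetical criterion: stop once the last generated token is one of the stop ids."""

    def __init__(self, stop_token_ids):
        self.stop_token_ids = set(stop_token_ids)

    def __call__(self, input_ids, scores, **kwargs):
        return input_ids[0, -1].item() in self.stop_token_ids


model_id = "willnguyen/lacda-2-7B-chat-v0.1"  # assumed from the leaderboard details link
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

stopping_criteria = StoppingCriteriaList([StopOnTokens([tokenizer.eos_token_id])])

# Illustrative prompt only; see the README's "Instruction format" section for the real template.
inputs = tokenizer("What is the capital of Vietnam?", return_tensors="pt").to(model.device)

with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        stopping_criteria=stopping_criteria,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```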