wonhosong committed
Commit d950e2a
1 Parent(s): a25c836

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -76,7 +76,7 @@ output_text = tokenizer.decode(output[0], skip_special_tokens=True)
 - We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
 We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
 We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
-- We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions to evaluate models.
+- We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models.
 
 ### Main Results
 | Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
@@ -90,7 +90,7 @@ We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-
 | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | |
 | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |
 
-### Scripts
+### Scripts for H4 Score Reproduction
 - Prepare evaluation environments:
 ```
 # clone the repository
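The README's "Prepare evaluation environments" block is truncated in this diff view right after the "# clone the repository" comment. For context, a minimal sketch of what such a setup typically looks like is shown below, assuming the lm-evaluation-harness commit pinned above and a standard editable install; the exact commands in the full README may differ.

```
# clone the evaluation harness used for the H4 benchmarks
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness

# pin to the commit referenced in the README for reproducibility
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463

# install the harness into the current Python environment (assumed editable install)
pip install -e .
```

Pinning the exact commit matters here because task definitions and scoring in lm-evaluation-harness change over time, and the reported H4 scores are only comparable against that specific revision.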