jaspercatapang committed · Commit 7aeeabc · Parent(s): c736208 · Update README.md

README.md CHANGED
```diff
@@ -33,7 +33,7 @@ According to the leaderboard description, here are the benchmarks used for the e
 
 ## Leaderboard Highlights (as of July 22, 2023)
 - GodziLLa-30B is on par with [Falcon-40B-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) (June 2023's Rank #1).
-- GodziLLa-30B outperforms Meta AI's LLaMA [30B
+- GodziLLa-30B outperforms Meta AI's LLaMA [30B](https://ai.meta.com/blog/large-language-model-llama-meta-ai/) model.
 - GodziLLa-30B ranks 4th worldwide, for open-source LLMs, on the [TruthfulQA](https://arxiv.org/abs/2109.07958) benchmark.
 - GodziLLa-30B beats [GPT-3.5 175B](https://platform.openai.com/docs/models/gpt-3-5) (text-davinci-003) on the [TruthfulQA](https://arxiv.org/abs/2109.07958) benchmark and performs closely (< 4%) on the [HellaSwag](https://arxiv.org/abs/1905.07830) benchmark.*
 
```