Update README.md
It is based on the technique described in the blog post "Refusal in LLMs is mediated by a single direction".
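The core of the technique is to estimate a "refusal direction" in the model's activation space and then orthogonalize the weights against it, so the model can no longer write to that direction. A minimal NumPy sketch of that projection step, under illustrative assumptions (the mean-difference estimator and all names here are a simplification, not the exact implementation used for this model):

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate the refusal direction as the normalized difference between
    mean activations on harmful vs. harmless prompts.
    Both inputs have shape (n_samples, d_model)."""
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def orthogonalize(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of the layer's output along `direction`:
    W_abl = W - v (v^T W) = (I - v v^T) W, with v a unit vector."""
    v = direction / np.linalg.norm(direction)
    return weight - np.outer(v, v @ weight)

# Toy demonstration with random data standing in for real activations.
rng = np.random.default_rng(0)
d_model = 16
W = rng.normal(size=(d_model, d_model))
v = refusal_direction(rng.normal(loc=1.0, size=(8, d_model)),
                      rng.normal(loc=-1.0, size=(8, d_model)))
W_abl = orthogonalize(W, v)
# After ablation, W_abl's outputs have no component along v (up to float error).
print(np.abs(v @ W_abl).max())
```

Because `v^T W_abl = v^T W - (v^T v) v^T W = 0`, no input can produce an output with a component along the ablated direction through this layer.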
Thanks to Andy Arditi, Oscar Balcells Obeso, Aaquib111, Wes Gurnee, Neel Nanda, and failspy.

## 🔎 Applications
This is an uncensored model. You can use it for any application that doesn't require alignment, like role-playing.

## ⚡ Quantization
* **GGUF**: https://huggingface.co/mlabonne/Daredevil-8B-abliterated-GGUF

## 🏆 Evaluation
### Open LLM Leaderboard

Daredevil-8B-abliterated is the second best-performing 8B model on the Open LLM Leaderboard in terms of MMLU score (27 May 24).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/xFKhGdSaIxL9_tcJPhM5w.png)

### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.5 | 52.44 | 42.69 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
45 |
+
|
46 |
+
## π³ Model family tree
|
47 |
+
|
48 |
+
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/LplqNg6iXHm_JXfX02Aj1.png)
|