mlabonne committed 24d9c29 (parent: 6b8b3ae): Update README.md
Files changed (1): README.md (+17 -1)

Thanks to Andy Arditi, Oscar Balcells Obeso, Aaquib111, Wes Gurnee, Neel Nanda, and failspy.

+ ## πŸ”Ž Applications
17
+
18
+ This is an uncensored model. You can use it for any application that doesn't require alignment, like role-playing.
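
As a sketch of such an application, the snippet below hand-builds a role-playing turn in the Llama 3 chat format, which this model inherits from Meta-Llama-3-8B-Instruct. The helper function and the persona are illustrative, not from the model card:

```python
# Illustrative only: a role-playing prompt in the Llama 3 chat format.
# The special tokens come from the Llama 3 template; the persona is made up.
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are a grizzled starship engineer. Stay in character.",
    "The warp core is rattling again. What do we do?",
)
print(prompt)
```

In practice you would normally let the tokenizer build this string for you with `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which applies the same template shipped with the model.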

## ⚡ Quantization

* **GGUF**: https://huggingface.co/mlabonne/Daredevil-8B-abliterated-GGUF

## 🏆 Evaluation

### Open LLM Leaderboard

Daredevil-8B-abliterated is the second best-performing 8B model on the Open LLM Leaderboard in terms of MMLU score (27 May 2024).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/xFKhGdSaIxL9_tcJPhM5w.png)

### Nous

Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.5 | 52.44 | 42.69 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
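
A quick consistency check (mine, not from the card): each value in the Average column appears to be the unweighted mean of the four benchmark scores, as verified here for the first and last rows of the table:

```python
# Recompute the Average column as the unweighted mean of the four
# benchmark scores (AGIEval, GPT4All, TruthfulQA, Bigbench).
rows = {
    "mlabonne/Daredevil-8B": ([44.13, 73.52, 59.05, 46.77], 55.87),
    "meta-llama/Meta-Llama-3-8B": ([31.1, 69.95, 43.91, 36.7], 45.42),
}
for name, (scores, reported) in rows.items():
    mean = sum(scores) / len(scores)
    # Agreement within two-decimal rounding of the reported value.
    assert abs(mean - reported) < 0.006, (name, mean)
    print(f"{name}: OK")
```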

## 🌳 Model family tree

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/LplqNg6iXHm_JXfX02Aj1.png)