Update README.md

The recipe is based on @grimjim's [grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter) (special thanks):

1. **Extraction**: We extract a LoRA adapter by comparing two models: a censored Llama 3 ([meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)) and an abliterated Llama 3 ([failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5](https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5)). A conceptual sketch of this step is shown after the list.
2. **Merge**: We merge this new LoRA adapter into the censored [NousResearch/Hermes-3-Llama-3.1-70B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B) using [task arithmetic](https://arxiv.org/abs/2212.04089) to abliterate it.
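
Conceptually, the extraction step treats the weight difference between the abliterated and censored checkpoints as a task vector and factorizes it into low-rank LoRA matrices. The snippet below is a minimal sketch of that idea on a single weight matrix: it illustrates the technique, not mergekit's actual `mergekit-extract-lora` implementation, and all names and shapes are assumptions.

```python
import torch

def extract_lora(w_censored: torch.Tensor, w_abliterated: torch.Tensor, rank: int = 64):
    """Compress the weight difference between two checkpoints into LoRA factors."""
    delta = w_abliterated - w_censored            # per-layer "task vector"
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    lora_b = u[:, :rank] * s[:rank]               # (out_features, rank)
    lora_a = vh[:rank, :]                         # (rank, in_features)
    return lora_a, lora_b                         # delta ≈ lora_b @ lora_a

# Toy demonstration on random matrices standing in for one projection layer.
w_base = torch.randn(512, 512)
w_abl = w_base + 0.01 * torch.randn(512, 512)
a, b = extract_lora(w_base, w_abl, rank=64)
rel_err = (b @ a - (w_abl - w_base)).norm() / (w_abl - w_base).norm()
print(f"relative reconstruction error: {rel_err:.3f}")
```
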
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/JdYyK-HLHbyBiHvg-Nvsn.png)

See [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more.

## ⚡ Quantization

* **GGUF**: https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated-GGUF

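For local inference on the GGUF weights, any llama.cpp-based runtime works. Here is a minimal sketch using the `llama-cpp-python` bindings; the quant filename and context size are assumptions, so point it at whichever file you actually downloaded:

```python
from llama_cpp import Llama

# Load a local GGUF quant of the model (path and filename assumed for illustration).
llm = Llama(model_path="./Hermes-3-Llama-3.1-70B-lorablated.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
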
## 🧩 Configuration

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [NousResearch/Hermes-3-Llama-3.1-70B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B) + Llama-3.1-70B-Instruct-abliterated-LORA as the base.

The following YAML configuration was used to produce this model:
```yaml
base_model: NousResearch/Hermes-3-Llama-3.1-70B+mlabonne/Llama-3.1-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
slices:
- sources:
  - layer_range: [0, 80]
    model: NousResearch/Hermes-3-Llama-3.1-70B+mlabonne/Llama-3.1-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```
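
In tensor terms, both steps of the recipe reduce to simple weight arithmetic: applying the LoRA adds the low-rank delta back onto each base weight, and task arithmetic with `weight: 1.0` adds the donor's task vector at full strength. The sketch below is illustrative only; the names are assumptions, not mergekit internals.

```python
import torch

def apply_lora(w: torch.Tensor, lora_a: torch.Tensor, lora_b: torch.Tensor,
               scale: float = 1.0) -> torch.Tensor:
    """LoRA application on one weight: W' = W + scale * (B @ A)."""
    return w + scale * (lora_b @ lora_a)

def task_arithmetic(base: torch.Tensor, donor: torch.Tensor,
                    weight: float = 1.0) -> torch.Tensor:
    """Task arithmetic on one tensor: keep the base, add the donor's task vector."""
    return base + weight * (donor - base)
```

The following commands install mergekit and run this merge with the configuration above: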

```bash
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pip install -e .
pip install bitsandbytes

# Merge using previous config
mergekit-yaml config.yaml Hermes-3-Llama-3.1-70B-lorablated --allow-crimes --lora-merge-cache=./cache
```
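
Once the merge completes, the output folder is a standard Transformers checkpoint. A minimal loading sketch follows; the local path and generation settings are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Folder produced by the mergekit-yaml command above (assumed local path).
model_path = "./Hermes-3-Llama-3.1-70B-lorablated"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",           # a 70B model needs multiple GPUs or offloading
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```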