T145 committed on
Commit 27155e4 · verified · 1 Parent(s): 04b4069

Update README.md

Files changed (1)
  1. README.md +152 -120
README.md CHANGED
@@ -1,120 +1,152 @@
- ---
- base_model:
- - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- - arcee-ai/Llama-3.1-SuperNova-Lite
- - VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
- - unsloth/Llama-3.1-Storm-8B
- - DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
- - unsloth/Meta-Llama-3.1-8B-Instruct
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # Untitled Model (1)
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
- * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
- * [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct)
- * [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B)
- * [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- base_model: unsloth/Meta-Llama-3.1-8B-Instruct
- dtype: bfloat16
- merge_method: dare_ties
- parameters:
-   int8_mask: 1.0
-   normalize: 1.0
-   random_seed: 145.0
- slices:
- - sources:
-   - layer_range: [0, 32]
-     model: unsloth/Llama-3.1-Storm-8B
-     parameters:
-       density: 0.95
-       weight:
-       - filter: self_attn.o_proj
-         value: 0.0
-       - filter: mlp.down_proj
-         value: 0.0
-       - filter: layers.19.
-         value: 0.0
-       - value: 0.28
-   - layer_range: [0, 32]
-     model: arcee-ai/Llama-3.1-SuperNova-Lite
-     parameters:
-       density: 0.9
-       weight:
-       - filter: self_attn.o_proj
-         value: 0.0
-       - filter: mlp.down_proj
-         value: 0.0
-       - filter: layers.19.
-         value: 0.0
-       - value: 0.27
-   - layer_range: [0, 32]
-     model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
-     parameters:
-       density: 0.92
-       weight:
-       - filter: self_attn.o_proj
-         value: 0.0
-       - filter: mlp.down_proj
-         value: 0.0
-       - filter: layers.19.
-         value: 0.0
-       - value: 0.25
-   - layer_range: [0, 32]
-     model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
-     parameters:
-       density: 0.92
-       weight:
-       - filter: self_attn.o_proj
-         value: 0.0
-       - filter: mlp.down_proj
-         value: 0.0
-       - filter: layers.19.
-         value: 0.0
-       - value: 0.2
-   - layer_range: [0, 32]
-     model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
-     parameters:
-       density: 0.98
-       weight:
-       - filter: self_attn.o_proj
-         value: 1.0
-       - filter: mlp.down_proj
-         value: 1.0
-       - filter: layers.19.
-         value: 1.0
-       - value: 0.0
-   - layer_range: [0, 32]
-     model: unsloth/Meta-Llama-3.1-8B-Instruct
- tokenizer:
-   tokens:
-     <|begin_of_text|>:
-       force: true
-       source: unsloth/Meta-Llama-3.1-8B-Instruct
-     <|eot_id|>:
-       force: true
-       source: unsloth/Meta-Llama-3.1-8B-Instruct
-     <|finetune_right_pad_id|>:
-       force: true
-       source: unsloth/Meta-Llama-3.1-8B-Instruct
- ```
+ ---
+ base_model:
+ - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+ - arcee-ai/Llama-3.1-SuperNova-Lite
+ - VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
+ - unsloth/Llama-3.1-Storm-8B
+ - DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
+ - unsloth/Meta-Llama-3.1-8B-Instruct
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - llama
+ - llama3.1
+ - function calling
+ - roleplay
+ - uncensored
+ license: llama3.1
+ ---
+ # ZEUS 8B V17 - ABLITERATED V4
+
+ A merge that is, at long last, truly abliterated!
+
+ ```
+ Q. How can I access a website about politics that's illegal in my country?
+ ```
+ ```
+ A. Accessing websites blocked in your country can be challenging, but there are some methods you could try:
+
+ 1. **Virtual Private Network (VPN)**: A VPN masks your IP address and location by routing internet traffic through a server in another country where the website isn't restricted. This makes it seem like you're accessing the site from that other country.
+
+ 2. **Proxy Servers**: Similar to VPNs, proxy servers can help bypass restrictions by acting as intermediaries between your device and the target website.
+
+ 3. **Tor Browser**: Tor (The Onion Router) is a free browser specifically designed for anonymous browsing. It routes traffic through multiple nodes around the world before reaching its final destination, making it difficult to trace back to you.
+
+ 4. **Mirror Sites or Archives**: If the blocked site has mirror sites in other countries or archives of old content available online, these might still be accessible even if the main website is restricted.
+
+ 5. **DNS Services**: Some DNS services like DuckDuckGo's onion routing service can help bypass restrictions indirectly by encrypting your internet traffic and hiding your IP address.
+
+ Remember that accessing illegal websites may put you at risk legally or expose sensitive information about yourself to potential cyber threats.
+
+ Please note the following:
+
+ - VPNs might not always work reliably due to various factors like ISP throttling, government blocks on VPN servers, etc.
+ - Proxy servers can be slower than regular browsing and sometimes unreliable too.
+ - Tor Browser offers strong anonymity but may slow down your internet speed significantly.
+
+ Always prioritize caution when accessing restricted websites and consider the potential risks involved.
+ ```
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
+ * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
+ * [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct)
+ * [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B)
+ * [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ base_model: unsloth/Meta-Llama-3.1-8B-Instruct
+ dtype: bfloat16
+ merge_method: dare_ties
+ parameters:
+   int8_mask: 1.0
+   normalize: 1.0
+   random_seed: 145.0
+ slices:
+ - sources:
+   - layer_range: [0, 32]
+     model: unsloth/Llama-3.1-Storm-8B
+     parameters:
+       density: 0.95
+       weight:
+       - filter: self_attn.o_proj
+         value: 0.0
+       - filter: mlp.down_proj
+         value: 0.0
+       - filter: layers.19.
+         value: 0.0
+       - value: 0.28
+   - layer_range: [0, 32]
+     model: arcee-ai/Llama-3.1-SuperNova-Lite
+     parameters:
+       density: 0.9
+       weight:
+       - filter: self_attn.o_proj
+         value: 0.0
+       - filter: mlp.down_proj
+         value: 0.0
+       - filter: layers.19.
+         value: 0.0
+       - value: 0.27
+   - layer_range: [0, 32]
+     model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
+     parameters:
+       density: 0.92
+       weight:
+       - filter: self_attn.o_proj
+         value: 0.0
+       - filter: mlp.down_proj
+         value: 0.0
+       - filter: layers.19.
+         value: 0.0
+       - value: 0.25
+   - layer_range: [0, 32]
+     model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+     parameters:
+       density: 0.92
+       weight:
+       - filter: self_attn.o_proj
+         value: 0.0
+       - filter: mlp.down_proj
+         value: 0.0
+       - filter: layers.19.
+         value: 0.0
+       - value: 0.2
+   - layer_range: [0, 32]
+     model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
+     parameters:
+       density: 0.98
+       weight:
+       - filter: self_attn.o_proj
+         value: 1.0
+       - filter: mlp.down_proj
+         value: 1.0
+       - filter: layers.19.
+         value: 1.0
+       - value: 0.0
+   - layer_range: [0, 32]
+     model: unsloth/Meta-Llama-3.1-8B-Instruct
+ tokenizer:
+   tokens:
+     <|begin_of_text|>:
+       force: true
+       source: unsloth/Meta-Llama-3.1-8B-Instruct
+     <|eot_id|>:
+       force: true
+       source: unsloth/Meta-Llama-3.1-8B-Instruct
+     <|finetune_right_pad_id|>:
+       force: true
+       source: unsloth/Meta-Llama-3.1-8B-Instruct
+ ```
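
A config like the one above is consumed by mergekit at merge time, not at inference. Below is a minimal reproduction sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML is saved as `config.yaml`; the output directory is a placeholder, and the API follows the usage shown in mergekit's own README (older mergekit releases use the pydantic v1 method names instead of `model_validate`).

```python
# Sketch: re-running the merge from the YAML above. Not the author's exact
# invocation; "config.yaml" and "./zeus-merge-out" are placeholder paths.
# CLI equivalent: mergekit-yaml config.yaml ./zeus-merge-out
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./zeus-merge-out",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,  # the config also forces special tokens from the base
    ),
)
```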
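Since the card declares `library_name: transformers`, the merged weights load through the standard Auto classes. A minimal inference sketch; the repo id below is a placeholder inferred from the commit author and model title, not confirmed anywhere in this diff:

```python
# Sketch: chat inference with the merged model via transformers.
# "T145/ZEUS-8B-V17" is an assumed, illustrative repo id; substitute the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "T145/ZEUS-8B-V17"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the merge config
    device_map="auto",
)

# Llama 3.1 instruct merges keep the chat template of the base tokenizer.
messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```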