---
base_model:
- akjindal53244/Llama-3.1-Storm-8B
- Sao10K/L3.1-8B-Niitama-v1.1
library_name: transformers
tags:
- merge
- llama
---

# Llama-3.1-Niitorm-8B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/dq5jKo1eCV8qapmfF4h3V.png)

An RP model: Niitama 1.1 nearswapped with one of the smartest Llama 3.1 models (Storm), half abliterated.

# Thanks to mradermacher for the quants:
* [GGUF](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-GGUF)
* [GGUF imatrix](https://huggingface.co/mradermacher/L3.1-Sthenorm-8B-i1-GGUF)

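If you run one of the GGUF quants, llama-cpp-python can load it directly. A minimal sketch, assuming `llama-cpp-python` is installed; the quant filename is a hypothetical example, so use the actual file downloaded from one of the repos above:

```python
# Minimal sketch: loading a GGUF quant with llama-cpp-python.
# The filename below is a hypothetical example.
from llama_cpp import Llama

llm = Llama(model_path="L3.1-Sthenorm-8B.Q4_K_M.gguf", n_ctx=8192)

# The prompt string should follow the Llama 3.1 template shown
# at the bottom of this card.
out = llm(
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "Hi!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```
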
## Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the **NEARSWAP** merge method with `t = 0.0001`.

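Nearswap retains most of the base model's weights, but where a parameter is already similar between the two models it is interpolated toward the secondary model. A minimal sketch of the idea, assuming the commonly described formulation rather than mergekit's exact code:

```python
# Sketch of the nearswap idea (assumed formulation, not mergekit's exact code):
# parameters that differ by <= t are taken from the secondary model;
# parameters that differ more stay close to the base.
import torch

def nearswap(base: torch.Tensor, secondary: torch.Tensor, t: float) -> torch.Tensor:
    delta = (base - secondary).abs()
    # Interpolation weight: t / |base - secondary|, capped at 1.
    # Where delta == 0 the division gives inf, which the cap maps to a
    # full swap (the values are identical anyway).
    weight = (t / delta).clamp(max=1.0)
    return torch.lerp(base, secondary, weight)
```

With `t = 0.0001` as used here, only near-identical parameters are swapped toward Storm, so the result stays overwhelmingly Niitama.
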
### Models Merged

The following models were included in the merge:
* [Sao10K/L3.1-8B-Niitama-v1.1](https://huggingface.co/Sao10K/L3.1-8B-Niitama-v1.1) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
        layer_range: [0, 32]
      - model: akjindal53244/Llama-3.1-Storm-8B
        layer_range: [0, 32]
merge_method: nearswap
base_model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
parameters:
  t:
    - value: 0.0001
dtype: bfloat16
```

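To reproduce a merge like this, the config above can be fed to mergekit. A minimal sketch using mergekit's Python entry point, assuming the `run_merge` API from mergekit's README; the file and output paths are hypothetical:

```python
# Sketch: running the YAML config above through mergekit
# (assumes mergekit is installed; paths are hypothetical).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("niitorm.yaml") as f:  # the configuration shown above
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./Llama-3.1-Niitorm-8B",  # output directory
    options=MergeOptions(copy_tokenizer=True),
)
```
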
# Prompt Template:
```bash
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>

```
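
For reference, a small helper that fills the template above in plain Python (the function name is illustrative):

```python
# Fill the Llama 3.1 prompt template shown above. Generation should stop
# at <|eot_id|>; the text before it corresponds to {output}.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a roleplay assistant.", "Hello!")
```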