aashish1904 committed
Commit e8d745e
1 Parent(s): d842a8d

Upload README.md with huggingface_hub

Files changed (1): README.md ADDED (+86 -0)
---
base_model:
- cgato/L3-TheSpice-8b-v0.8.3
- kloodia/lora-8b-medic
- NousResearch/Hermes-3-Llama-3.1-8B
- kloodia/lora-8b-physic
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- Blackroot/Llama-3-8B-Abomination-LORA
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- kloodia/lora-8b-bio
- NousResearch/Meta-Llama-3-8B
- DreadPoor/Nothing_to_see_here_-_Move_along
- hikikomoriHaven/llama3-8b-hikikomori-v0.4
- arcee-ai/Llama-3.1-SuperNova-Lite
- Blackroot/Llama3-RP-Lora
library_name: transformers
tags:
- mergekit
- merge
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Aspire1.2-8B-TIES-GGUF

This is a quantized version of [DreadPoor/Aspire1.2-8B-TIES](https://huggingface.co/DreadPoor/Aspire1.2-8B-TIES), created using llama.cpp.
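
GGUF files from this repo can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings, assuming the package is installed and a quantized file has already been downloaded; the filename is a placeholder, not a confirmed file in this repo.

```python
from llama_cpp import Llama

# Path to a downloaded GGUF file from this repo; the exact filename
# depends on which quantization you fetch (placeholder shown here).
llm = Llama(
    model_path="./Aspire1.2-8B-TIES.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,        # context window; adjust to taste
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Explain the TIES merge method in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```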

# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.
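
As described in the paper, TIES operates on task vectors (the difference between each fine-tuned model and the base): it trims small-magnitude changes, elects a per-parameter sign by total magnitude, and averages only the changes that agree with the elected sign. A minimal single-tensor sketch of that idea follows; it is an illustration of the paper's procedure, not mergekit's actual implementation.

```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               density: float = 1.0) -> torch.Tensor:
    """Simplified TIES merge for a single weight tensor (sketch only)."""
    # 1. Task vectors: what each fine-tune changed relative to the base.
    deltas = torch.stack([ft - base for ft in finetuned])

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    #    With density=1 (as in this model's config) nothing is dropped.
    if density < 1.0:
        k = int(density * deltas[0].numel())
        for d in deltas:
            thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
            d[d.abs() < thresh] = 0.0

    # 3. Elect a sign per parameter: the sign with the larger total mass wins.
    sign = torch.sign(deltas.sum(dim=0))

    # 4. Disjoint mean: average only the deltas that agree with that sign.
    agree = torch.sign(deltas) == sign
    merged = (deltas * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)

    return base + merged
```

Note that with `density: 1` and a uniform `weight: 1` per model, as in the configuration further down, the trim step keeps everything and the merge reduces to a sign-elected, normalized average of the task vectors.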

### Models Merged

The following models were included in the merge; each entry pairs a fine-tuned model with a LoRA adapter (see the sketch after the list):
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
* [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
* [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)
* [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) + [kloodia/lora-8b-bio](https://huggingface.co/kloodia/lora-8b-bio)
* [DreadPoor/Nothing_to_see_here_-_Move_along](https://huggingface.co/DreadPoor/Nothing_to_see_here_-_Move_along) + [hikikomoriHaven/llama3-8b-hikikomori-v0.4](https://huggingface.co/hikikomoriHaven/llama3-8b-hikikomori-v0.4)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) + [Blackroot/Llama3-RP-Lora](https://huggingface.co/Blackroot/Llama3-RP-Lora)
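
In mergekit's configuration syntax, the `model+lora` notation used above means the LoRA adapter is applied on top of the model before merging. mergekit resolves these pairs internally; purely for illustration, a roughly equivalent stand-alone step with the `peft` library might look like the following, using the model and adapter names from the first list entry.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical stand-alone equivalent of mergekit's "model+lora" pairing:
# load the fine-tuned model, attach the LoRA, then bake the adapter in.
model = AutoModelForCausalLM.from_pretrained("cgato/L3-TheSpice-8b-v0.8.3")
with_lora = PeftModel.from_pretrained(model, "kloodia/lora-8b-medic")
merged = with_lora.merge_and_unload()  # fold LoRA deltas into the base weights
```

Folding the adapter in first means TIES sees each pair as a single dense fine-tune rather than as a separate low-rank delta.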

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
    parameters:
      weight: 1
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+Blackroot/Llama3-RP-Lora
    parameters:
      weight: 1
  - model: NousResearch/Hermes-3-Llama-3.1-8B+kloodia/lora-8b-physic
    parameters:
      weight: 1
  - model: cgato/L3-TheSpice-8b-v0.8.3+kloodia/lora-8b-medic
    parameters:
      weight: 1
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+Blackroot/Llama-3-8B-Abomination-LORA
    parameters:
      weight: 1
  - model: DreadPoor/Nothing_to_see_here_-_Move_along+hikikomoriHaven/llama3-8b-hikikomori-v0.4
    parameters:
      weight: 1

merge_method: ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```
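
To reproduce the merge, a configuration like this can be saved to a file and passed to mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory` (assuming mergekit is installed; see the mergekit repository for available options).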