Triangle104 committed on
Commit: b6ba3bb
Parent: e833b54

Update README.md

Files changed (1): README.md (+58 -0)
README.md CHANGED
@@ -12,6 +12,64 @@ tags:
  This model was converted to GGUF format from [`ProdeusUnity/Astral-Fusion-8b-v0.0`](https://huggingface.co/ProdeusUnity/Astral-Fusion-8b-v0.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/ProdeusUnity/Astral-Fusion-8b-v0.0) for more details on the model.
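
For readers who want to reproduce the conversion locally rather than through the hosted GGUF-my-repo space, a rough equivalent uses llama.cpp's own tooling. This is a minimal sketch, not the exact pipeline the space runs; the output filenames and the Q4_K_M quantization level are placeholders.

```bash
# Download the original checkpoint (the local directory name is a placeholder).
pip install -U "huggingface_hub[cli]"
huggingface-cli download ProdeusUnity/Astral-Fusion-8b-v0.0 --local-dir Astral-Fusion-8b-v0.0

# convert_hf_to_gguf.py ships with the llama.cpp source tree.
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt
python llama.cpp/convert_hf_to_gguf.py Astral-Fusion-8b-v0.0 \
  --outfile astral-fusion-8b-v0.0-f16.gguf --outtype f16

# Optionally quantize with llama-quantize (built as part of llama.cpp).
llama-quantize astral-fusion-8b-v0.0-f16.gguf astral-fusion-8b-v0.0-q4_k_m.gguf Q4_K_M
```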
 
+ ---
+ Model details:
+ -
+ We will see... Come with me, take the journey~
+
+ Listen to the song on YouTube: https://www.youtube.com/watch?v=3FEFtFMBREA
+
+ Another attempt at a merge, not entirely related to Stellar Odyssey. I like it, so try it out?
+
+ Merged Models:
+
+ meta-llama/Llama-3-8b-Instruct
+ Sao10K_L3-8B-Stheno-v3.2
+ Gryphe_Pantheon-RP-1.0-8b-Llama-3
+ Celeste-Stable-v1.2
+
+ This is a merge of pre-trained language models created using mergekit.
+ Edit: Celeste v1.2 Stable?
+
+ That itself is a merge, made mostly to stabilize Celeste, since its training was at 256. It was merged with NeuralDareDevil via TIES.
+ Merge Details
+ Merge Method
+
+ This model was merged using the della_linear merge method, with C:\Users\Downloads\Mergekit-Fixed\mergekit\meta-llama_Llama-3-8B-Instruct as the base.
+ Models Merged
+
+ The following models were included in the merge:
+
+ C:\Users\Downloads\Mergekit-Fixed\mergekit\Gryphe_Pantheon-RP-1.0-8b-Llama-3
+ C:\Users\Downloads\Mergekit-Fixed\mergekit\Sao10K_L3-8B-Stheno-v3.2
+ C:\Users\Downloads\Mergekit-Fixed\mergekit\Celeste-Stable-v1.2-Test2
+
+ Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ models:
+   - model: C:\Users\\Downloads\Mergekit-Fixed\mergekit\Sao10K_L3-8B-Stheno-v3.2
+     parameters:
+       weight: 0.3
+       density: 0.25
+   - model: C:\Users\\Downloads\Mergekit-Fixed\mergekit\Celeste-Stable-v1.2-Test2
+     parameters:
+       weight: 0.1
+       density: 0.4
+   - model: C:\Users\\Downloads\Mergekit-Fixed\mergekit\Gryphe_Pantheon-RP-1.0-8b-Llama-3
+     parameters:
+       weight: 0.4
+       density: 0.5
+ merge_method: della_linear
+ base_model: C:\Users\\Downloads\Mergekit-Fixed\mergekit\meta-llama_Llama-3-8B-Instruct
+ parameters:
+   epsilon: 0.05
+   lambda: 1
+ dtype: bfloat16
+
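
For reference, a configuration like the one above is what mergekit's CLI consumes. A minimal sketch, assuming mergekit is installed and the YAML is saved as config.yaml (the output directory name is a placeholder):

```bash
pip install mergekit

# Run the della_linear merge described by the YAML above.
# config.yaml and ./astral-fusion-merge are placeholder names; --cuda is optional GPU offload.
mergekit-yaml config.yaml ./astral-fusion-merge --cuda
```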
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux):
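
What follows in the full README is the standard GGUF-my-repo usage boilerplate; a minimal sketch of the commands it describes is below. The --hf-repo and --hf-file values are placeholders: substitute this repo's actual name and the GGUF file you want from its file list.

```bash
brew install llama.cpp

# Chat with the model via the CLI (repo and file names below are placeholders).
llama-cli --hf-repo Triangle104/Astral-Fusion-8b-v0.0-GGUF \
  --hf-file astral-fusion-8b-v0.0-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or serve an OpenAI-compatible endpoint instead.
llama-server --hf-repo Triangle104/Astral-Fusion-8b-v0.0-GGUF \
  --hf-file astral-fusion-8b-v0.0-q4_k_m.gguf \
  -c 2048
```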