rAIfle committed
Commit 45679b1 · verified · 1 Parent(s): f52f421

Upload README.md with huggingface_hub

---
license: cc-by-nc-4.0
base_model_relation: quantized
quantized_by: Quant-Cartel
base_model: knifeayumu/Behemoth-v1.1-Magnum-v4-123B

---
```
  e88 88e                                d8
 d888 888b  8888 8888  ,"Y88b   888 8e  d88
C8888 8888D 8888 8888 "8" 888   888 88b d88888
 Y888 888P  Y888 888P ,ee 888   888 888  888
  "88 88"    "88 88"  "88 888   888 888  888
                                           b
                                           8b,

  e88'Y88                   d8           888
 d888  'Y  ,"Y88b  888,8,  d88    ,e e,  888
C8888     "8" 888  888 "  d88888 d88 88b 888
 Y888  ,d  ,ee 888 888     888   888   , 888
  "88,d88  "88 888 888     888    "YeeP" 888

          PROUDLY PRESENTS
```
# rAIfle/Behemoth-v1.1-Magnum-v4-123B-exl2-longcal

Quantized using 115 rows of 8192 tokens from the default ExLlamaV2 calibration dataset.

Branches:
- `main` -- `measurement.json`
- `8.0b8h` -- 8.0bpw, 8-bit lm_head
- `6.0b6h` -- 6.0bpw, 6-bit lm_head
- `5.0b6h` -- 5.0bpw, 6-bit lm_head
- `4.25b6h` -- 4.25bpw, 6-bit lm_head
- `4.0b6h` -- 4.0bpw, 6-bit lm_head
- `3.0b6h` -- 3.0bpw, 6-bit lm_head
- `2.25b6h` -- 2.25bpw, 6-bit lm_head

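When picking a branch, a back-of-the-envelope size estimate helps: bits-per-weight times parameter count gives the approximate weights-only footprint. A rough sketch (the nominal 123B parameter count is taken at face value; the differently-quantized lm_head, embeddings, KV-cache, and file overhead are all ignored):

```python
PARAMS = 123e9  # nominal parameter count (approximation)

def approx_size_gb(bpw: float, params: float = PARAMS) -> float:
    """Approximate on-disk / VRAM size of the quantized weights in GB."""
    # bits per weight * weights / 8 = bytes; divide by 1e9 for (decimal) GB
    return params * bpw / 8 / 1e9

for bpw in [8.0, 6.0, 5.0, 4.25, 4.0, 3.0, 2.25]:
    print(f"{bpw:>5.2f} bpw ~ {approx_size_gb(bpw):6.1f} GB")
```

By this estimate the 4.0bpw branch lands around 61.5 GB of weights alone, so leave extra headroom for context.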
Original model link: [knifeayumu/Behemoth-v1.1-Magnum-v4-123B](https://huggingface.co/knifeayumu/Behemoth-v1.1-Magnum-v4-123B)

Original model README below.

-----
![Not Horny Enough](Behemoth-v1.1-Magnum-v4-123B.png)

# The Drummer becomes hornier

Recipe based on [MarsupialAI/Monstral-123B](https://huggingface.co/MarsupialAI/Monstral-123B), but uses [TheDrummer/Behemoth-123B-v1.1](https://huggingface.co/TheDrummer/Behemoth-123B-v1.1) as the base.

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

GGUF Quants:

- GGUF (static): [mradermacher/Behemoth-v1.1-Magnum-v4-123B-GGUF](https://huggingface.co/mradermacher/Behemoth-v1.1-Magnum-v4-123B-GGUF)
- GGUF (weighted/imatrix): [mradermacher/Behemoth-v1.1-Magnum-v4-123B-i1-GGUF](https://huggingface.co/mradermacher/Behemoth-v1.1-Magnum-v4-123B-i1-GGUF)

Thank you, mradermacher, for honoring my request.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

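SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line, preserving magnitude better than plain averaging. A minimal numpy sketch of the idea — not mergekit's actual implementation, which adds more edge-case handling and per-layer `t` values:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Spherical linear interpolation between two tensors; t=0 gives v0, t=1 gives v1."""
    # Unit vectors are used only to measure the angle between the tensors
    v0_u = v0 / np.linalg.norm(v0)
    v1_u = v1 / np.linalg.norm(v1)
    dot = np.clip(np.sum(v0_u * v1_u), -1.0, 1.0)
    if abs(dot) > 0.9995:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    omega = np.arccos(dot)       # angle between the tensors
    sin_omega = np.sin(omega)
    return (np.sin((1 - t) * omega) / sin_omega) * v0 \
         + (np.sin(t * omega) / sin_omega) * v1
```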
### Models Merged

The following models were included in the merge:

* [anthracite-org/magnum-v4-123b](https://huggingface.co/anthracite-org/magnum-v4-123b)
* [TheDrummer/Behemoth-123B-v1.1](https://huggingface.co/TheDrummer/Behemoth-123B-v1.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TheDrummer/Behemoth-123B-v1.1
  - model: anthracite-org/magnum-v4-123b
merge_method: slerp
base_model: TheDrummer/Behemoth-123B-v1.1
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: float16
```
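A list of `t` values acts as a gradient: the anchors are spread across the layer stack and interpolated, so this merge leans most toward magnum-v4 in the middle layers and stays close to Behemoth at both ends. A rough sketch of that expansion, assuming evenly spaced anchors, linear interpolation, and an 88-layer stack (all assumptions, not read from mergekit's source):

```python
import numpy as np

anchors = [0.1, 0.3, 0.6, 0.3, 0.1]  # the t list from the YAML above
num_layers = 88                       # assumed hidden-layer count for a 123B Mistral-style model

# Spread the anchors evenly over the layer range, then interpolate per layer.
xs = np.linspace(0, num_layers - 1, num=len(anchors))
t_per_layer = np.interp(np.arange(num_layers), xs, anchors)

# Endpoint layers keep t=0.1 (mostly Behemoth); the middle peaks near t=0.6.
print(t_per_layer[0], t_per_layer.max(), t_per_layer[-1])
```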