icefog72 committed
Commit 648481d
Parent: b5ca453

Update README.md

Files changed (1): README.md +17 -8
README.md CHANGED
@@ -1,14 +1,23 @@
 ---
-base_model: []
+base_model:
+- icefog72/Kunokukulemonchini-32k-7b
+- icefog72/Mixtral_AI_Cyber_3.m1-BigL
+- LeroyDyer/Mixtral_AI_Cyber_3.m1
+- Undi95/BigL-7B
 library_name: transformers
 tags:
 - mergekit
 - merge
+- alpaca
+- mistral
+- not-for-all-audiences
+- nsfw
+license: cc-by-nc-4.0
 
 ---
-# IceLemonTeaRP-32k-7b-v2
+# IceLemonTeaRP-32k-7b-4.2bpw-h6-exl2
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+4.2bpw-h6-exl2 quant of [icefog72/IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b)
 
 ## Merge Details
 ### Merge Method
@@ -18,8 +27,8 @@ This model was merged using the SLERP merge method.
 ### Models Merged
 
 The following models were included in the merge:
-* D:\FModels\Kunokukulemonchini-32k-7b
-* D:\FModels\Mixtral_AI_Cyber_3.m1-BigL
+* Kunokukulemonchini-32k-7b
+* Mixtral_AI_Cyber_3.m1-BigL
 
 ### Configuration
 
@@ -29,12 +38,12 @@ The following YAML configuration was used to produce this model:
 
 slices:
   - sources:
-    - model: D:\FModels\Mixtral_AI_Cyber_3.m1-BigL
+    - model: Mixtral_AI_Cyber_3.m1-BigL
       layer_range: [0, 32]
-    - model: D:\FModels\Kunokukulemonchini-32k-7b
+    - model: Kunokukulemonchini-32k-7b
       layer_range: [0, 32]
 merge_method: slerp
-base_model: D:\FModels\Kunokukulemonchini-32k-7b
+base_model: Kunokukulemonchini-32k-7b
 parameters:
   t:
     - filter: self_attn
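
The `slerp` method named in the configuration above interpolates between two models' weight tensors along a great-circle arc on a hypersphere rather than a straight line; the `t` parameters set the interpolation fraction (t=0 keeps the base model, t=1 the other), optionally per module via `filter` entries such as `self_attn`, and a config like this is normally executed with mergekit's `mergekit-yaml` CLI. Below is a minimal per-tensor sketch of the interpolation itself, an illustration only and not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Illustrative sketch: t=0 returns `a`, t=1 returns `b`; intermediate
    values follow the great-circle arc between the flattened tensors.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Normalized directions, used only to measure the angle between the tensors.
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        out = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat \
            + (torch.sin(t * omega) / sin_omega) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```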
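Because this repo holds an exl2 quantization rather than full-precision weights, it is meant to be loaded with an ExLlamaV2-based backend (for example text-generation-webui) instead of plain `transformers`. A minimal sketch of fetching and loading it, assuming the repo id matches the card title; the loader calls follow exllamav2's example scripts and may differ between versions:

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer

# Repo id assumed from the model card title above; adjust if it differs.
model_dir = snapshot_download(repo_id="icefog72/IceLemonTeaRP-32k-7b-4.2bpw-h6-exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # Allocate KV cache as layers load.
model.load_autosplit(cache)               # Split layers across available GPUs.
tokenizer = ExLlamaV2Tokenizer(config)
```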