Update README.md

---
license: apache-2.0
library_name: transformers
tags:
- 4-bit
- text-generation
- autotrain_compatible
- endpoints_compatible
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- TURKCELL/Turkcell-LLM-7b-v1
- NovusResearch/Novus-7b-tr_v1
base_model:
- TURKCELL/Turkcell-LLM-7b-v1
- NovusResearch/Novus-7b-tr_v1
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# ozayezerceli/Selocan-2x7B-v1 AWQ

- Model creator: [ozayezerceli](https://huggingface.co/ozayezerceli)
- Original model: [Selocan-2x7B-v1](https://huggingface.co/Locutusque/Selocan-2x7B-v1)

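This repository holds 4-bit AWQ weights of the model above. A minimal loading sketch using the `autoawq` and `transformers` packages; the repository id below is a placeholder, since the card does not state where the quantized weights are hosted:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder repo id: replace with the actual location of these AWQ weights.
quant_path = "Selocan-2x7B-v1-AWQ"

# Load the tokenizer and the 4-bit AWQ checkpoint (GPU required).
tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)

prompt = "Türkiye'nin en kalabalık şehri hangisidir?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Generate a short completion from the quantized model.
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
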
## Model Summary

Selocan-2x7B-v1 is a Mixture of Experts (MoE) model made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing); a configuration sketch follows the list:

* [TURKCELL/Turkcell-LLM-7b-v1](https://huggingface.co/TURKCELL/Turkcell-LLM-7b-v1)
* [NovusResearch/Novus-7b-tr_v1](https://huggingface.co/NovusResearch/Novus-7b-tr_v1)
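
The card does not include the merge configuration itself. As a rough illustration of how a two-expert MoE like this can be assembled with mergekit's `mergekit-moe` tool (the route the LazyMergekit notebook automates), the sketch below writes an assumed config and runs the merge; the base model, gate mode, and positive prompts are illustrative, not the settings actually used:

```python
import subprocess
from pathlib import Path

# Illustrative mergekit-moe config: the base model, gate_mode and
# positive_prompts are assumptions, not the settings behind the published model.
config = """\
base_model: TURKCELL/Turkcell-LLM-7b-v1
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: TURKCELL/Turkcell-LLM-7b-v1
    positive_prompts:
      - "general Turkish chat and question answering"
  - source_model: NovusResearch/Novus-7b-tr_v1
    positive_prompts:
      - "Turkish instruction following and summarization"
"""

Path("config.yaml").write_text(config, encoding="utf-8")

# mergekit-moe reads the config and writes the merged MoE checkpoint to ./merge.
subprocess.run(["mergekit-moe", "config.yaml", "merge", "--copy-tokenizer"], check=True)
```

The merged checkpoint in `./merge` is the kind of full-precision model that can then be quantized, for example to AWQ, and published.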