Update README.md
README.md
CHANGED
@@ -10,50 +10,30 @@ tags:
 - autotrain_compatible
 - endpoints_compatible
 - safetensors
-
-
-
-
-
+- moe
+- frankenmoe
+- merge
+- mergekit
+- lazymergekit
+- starsnatched/MemGPT-DPO
+- starsnatched/MemGPT-3
+- starsnatched/MemGPT
+base_model:
+- starsnatched/MemGPT-DPO
+- starsnatched/MemGPT-3
+- starsnatched/MemGPT
 pipeline_tag: text-generation
 inference: false
 quantized_by: Suparious
 ---
-#
+# liminerity/Memgpt-3x7b-MOE AWQ

-- Model creator: [
-- Original model: [
+- Model creator: [liminerity](https://huggingface.co/liminerity)
+- Original model: [Memgpt-3x7b-MOE](https://huggingface.co/liminerity/Memgpt-3x7b-MOE)

 ## Model Summary

-
-
-
-
-
-Inspired by [lemonilia/Limamono-Mistral-7B-v0.50](https://huggingface.co/lemonilia/Limamono-Mistral-7B-v0.50)
-### Style details:
-- Quotes are used for character dialogs.
-  - `"Hey, Anon... What do you think about my style?"`
-- Asterisks can be used for narration, but this is optional; the default novel format is recommended.
-  - `*Her cheeks blush slightly, she tries to hide.*`
-- Character thoughts are wrapped with ` marks. **This may often occur spontaneously.**
-  - `My heart skips a beat hearing him call me pretty!`
-
-*If you want thoughts to appear more often, just add something like this to your system prompt: ```"{{char}} internal thoughts are wrapped with ` marks."```*
-
-- Accepted response lengths: ***tiny, short, medium, long, huge***
-
-For example: `### Response: (length = medium)`
-
-Note: Apparently ***humongous***, ***extreme*** and ***unlimited*** may not work at the moment. Not fully tested.
-
-### Prompt format:
-Extended Alpaca, as always.
-
-``"You are now in roleplay chat mode. Engage in an endless chat with {{user}}. Always wait {{user}} turn, next actions and responses."``
-
-## Example:
-
-![]()
-
+Memgpt-3x7b-MOE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+* [starsnatched/MemGPT-DPO](https://huggingface.co/starsnatched/MemGPT-DPO)
+* [starsnatched/MemGPT-3](https://huggingface.co/starsnatched/MemGPT-3)
+* [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT)
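
The commit records the merge metadata (the `tags` and `base_model` entries) but not the mergekit configuration that produced the MoE. As a sketch only, a LazyMergekit / `mergekit-moe` config combining these three experts might look like the block below; `gate_mode`, `dtype`, and all `positive_prompts` strings are illustrative assumptions, not values taken from the commit:

```yaml
# Hypothetical mergekit-moe config; gate prompts and dtype are assumptions.
base_model: starsnatched/MemGPT-DPO
gate_mode: hidden        # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: starsnatched/MemGPT-DPO
    positive_prompts:
      - "chat"           # illustrative routing prompt, not from the commit
  - source_model: starsnatched/MemGPT-3
    positive_prompts:
      - "reason"         # illustrative
  - source_model: starsnatched/MemGPT
    positive_prompts:
      - "memory"         # illustrative
```

If this matches how the merge was produced, running `mergekit-moe config.yaml ./merged` with mergekit installed would build a model of this shape.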
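Since the header marks this card as an AWQ quantization (`quantized_by: Suparious`, with hosted `inference: false`), a minimal loading sketch follows. The repo id is a placeholder assumption (the commit does not name where the quantized weights are published), and it assumes `transformers` with the `autoawq` package installed on a CUDA machine:

```python
# Minimal sketch: load an AWQ-quantized checkpoint with transformers.
# NOTE: the repo id below is a hypothetical placeholder; the commit does not
# name the repository that hosts the AWQ weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "solidrust/Memgpt-3x7b-MOE-AWQ"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # AWQ kernels require a GPU
)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Engines such as vLLM can also serve AWQ checkpoints directly, which is the usual reason `inference: false` is set on cards like this.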