Upload folder using huggingface_hub
- .gitattributes +14 -0
- README.md +54 -0
- flatdolphinmaid-8x7b.Q2_K.gguf +3 -0
- flatdolphinmaid-8x7b.Q3_K_L.gguf +3 -0
- flatdolphinmaid-8x7b.Q3_K_M.gguf +3 -0
- flatdolphinmaid-8x7b.Q3_K_S.gguf +3 -0
- flatdolphinmaid-8x7b.Q4_0.gguf +3 -0
- flatdolphinmaid-8x7b.Q4_1.gguf +3 -0
- flatdolphinmaid-8x7b.Q4_K_M.gguf +3 -0
- flatdolphinmaid-8x7b.Q4_K_S.gguf +3 -0
- flatdolphinmaid-8x7b.Q5_0.gguf +3 -0
- flatdolphinmaid-8x7b.Q5_1.gguf +3 -0
- flatdolphinmaid-8x7b.Q5_K_M.gguf +3 -0
- flatdolphinmaid-8x7b.Q5_K_S.gguf +3 -0
- flatdolphinmaid-8x7b.Q6_K.gguf +3 -0
- flatdolphinmaid-8x7b.Q8_0.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,17 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q5_1.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+flatdolphinmaid-8x7b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,54 @@
+---
+title: "FlatDolphinMaid-8x7B Quantized in GGUF"
+tags:
+- GGUF
+language: en
+---
+
+# Tsunemoto GGUF's of FlatDolphinMaid-8x7B
+
+This is a GGUF quantization of FlatDolphinMaid-8x7B.
+
+## Original Repo Link:
+[Original Repository](https://huggingface.co/Undi95/FlatDolphinMaid-8x7B)
+
+## Original Model Card:
+---
+
+First experimental merge of Noromaid 8x7b (Instruct) and Dolphin 8x7b. The idea behind this is to add a little more IQ to the model, because Noromaid was only trained on RP/ERP data. Dolphin 2.7 is the only real Mixtral finetune I consider "usable", and so the merging quest begins again, kek.
+
+I merged Dolphin 2.7 with the Mixtral base (Dolphin at 1.0 weight) to get rid of ChatML, and then merged Noromaid 8x7b with the output using the SLERP method.
+
+This model feels better on the IQ chart and has roughly the same average ERP score on Ayumi's bench as Noromaid 8x7b, but it's also softer and more prudish, and it has the typical Mixtral repeat issue at some point. Choose your poison.
+
+<!-- description start -->
+## Description
+
+This repo contains fp16 files of FlatDolphinMaid-8x7B.
+
+<!-- description end -->
+<!-- description start -->
+## Models used
+
+- [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
+- [cognitivecomputations/dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)
+- [NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3)
+
+<!-- description end -->
+<!-- prompt-template start -->
+### Custom format:
+```
+### Instruction:
+{system prompt}
+
+### Input:
+{input}
+
+### Response:
+{reply}
+```
+
+If you want to support me, you can [here](https://ko-fi.com/undiai).
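The custom prompt template in the README is a plain Alpaca-style text format. A minimal sketch of assembling it in Python (the helper name `build_prompt` is my own, not part of the repo):

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble the Alpaca-style prompt format from the model card:
    an Instruction section, an Input section, and an open Response
    section that the model is expected to complete."""
    return (
        "### Instruction:\n"
        f"{system_prompt}\n\n"
        "### Input:\n"
        f"{user_input}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Say hello.")
print(prompt)
```

Whatever runtime loads these GGUF files, the raw text handed to it should follow this section layout, since that is what the merged model was tuned on.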
flatdolphinmaid-8x7b.Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f43b5a51c844abf93578bde8df662cec2a8b5ae1aadd383190eb4053a9dcf5b
+size 15459716864
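Each `.gguf` entry in this commit is a Git LFS pointer file, not the model blob itself: three `key value` lines giving the spec version, a sha256 object id, and the blob size in bytes. A small sketch of parsing one (the `parse_lfs_pointer` helper is illustrative, not a real git-lfs API):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key/value dict.

    A pointer file is a short series of 'key value' lines; the
    actual blob lives in LFS storage, addressed by the sha256 oid.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The Q2_K pointer from this commit, verbatim.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:9f43b5a51c844abf93578bde8df662cec2a8b5ae1aadd383190eb4053a9dcf5b
size 15459716864
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256-prefixed object id
print(int(info["size"]))  # blob size in bytes
```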
flatdolphinmaid-8x7b.Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73c10a8c0e992c34e94cf611e0d378403c497ab8393d75b152b1775338ff2984
+size 20294487808
flatdolphinmaid-8x7b.Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cdfa94987cc26a4663a03eecfa31c3e3ddb82d1a00a7790754a4e140f85d4541
+size 20211650304
flatdolphinmaid-8x7b.Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36b592d0107bcecaa137b2ccb93903d45c0419196eaff32ce93466b4bf73e6b6
+size 20121472768
flatdolphinmaid-8x7b.Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0cc2e26688de0286b60ba17213ba53673cd2e9f22b6af7af1e73bbb3c711872
+size 26306744064
flatdolphinmaid-8x7b.Q4_1.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e105aebb1c3b7ab217fd5de57afe5f86b22bec5c901b10122e5bae0e5452b6b6
+size 29217459968
flatdolphinmaid-8x7b.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94ab4bb579c386305447c9cdbe8814b53023a679127ca085e85015dc1b7aa2dd
+size 26324045568
flatdolphinmaid-8x7b.Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef7d34e241220955cc741065a3aa85a9cc544098ce60c3e4cffd18a9c95bf99e
+size 26308841216
flatdolphinmaid-8x7b.Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdb4e4dbb1f5ece2d08c3ebe4f1bbdcd4f12b3baa22ad301c6f9f9aca91c952c
+size 32128175872
flatdolphinmaid-8x7b.Q5_1.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b54487e19da040455730ddd56671d31a9a61e3229ddb0108f9bfeddc3f222438
+size 35038891776
flatdolphinmaid-8x7b.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca7456b43334382dc9526b4559deed6e1251eacfbb6ec028ae589dd8fcab9e35
+size 32137088768
flatdolphinmaid-8x7b.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e303ec1422c141e7ffc207646592149c5fb3de1c3b67c38d56b9aead8548b31b
+size 32128175872
flatdolphinmaid-8x7b.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf674fd963498ae7d283af0dd9ced6b440f1be57545274dc3cc43290ea9d6250
+size 38313447168
flatdolphinmaid-8x7b.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3b28289be54c88b1beb73a1386c763d26d8b5add9df863f47aa5700465cd05c
+size 49624215296
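The `size` fields in the pointer files above are raw byte counts. For a quick sense of scale when choosing a quantization, a minimal sketch converting a few of them to GiB (sizes copied verbatim from this commit):

```python
def to_gib(n_bytes: int) -> float:
    """Convert a byte count to GiB (2**30 bytes)."""
    return n_bytes / 2**30

# Sizes copied from the LFS pointer files in this commit.
quant_sizes = {
    "Q2_K": 15459716864,    # smallest quant in the set
    "Q4_K_M": 26324045568,  # a common quality/size middle ground
    "Q8_0": 49624215296,    # largest quant in the set
}
for name, size in quant_sizes.items():
    print(f"{name}: {to_gib(size):.1f} GiB")
```

The spread runs from roughly 14 GiB (Q2_K) to roughly 46 GiB (Q8_0), which is the usual trade-off between memory footprint and quantization quality.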