Update README.md
README.md CHANGED
@@ -5,11 +5,14 @@ library_name: transformers
 tags:
 - mergekit
 - merge
+- prune
 
 ---
 # merged
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It is a prune of Meta-Llama-3-8B-Instruct down to 20 layers, or about 5.4B parameters.
+This should allow Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and full-weight fine-tuning (fp16) on XXGB VRAM.
+It is in dire need of healing through tuning (the purpose of this experiment: can I prune & tune an *instruct* model?).
 
 ## Merge Details
 ### Merge Method
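
For context on the merge method: a mergekit config along the following lines would produce the 20-layer prune described above. This is a sketch, not the exact config used for this model; the passthrough method, the source repo name, and the dtype are assumptions, not taken from the diff.

```yaml
# Hypothetical mergekit config sketch (not the author's exact file).
# The passthrough method copies a slice of layers from a single source
# model, so keeping layer_range [0, 20] yields a 20-layer (~5.4B
# parameter) model from the 32-layer original.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 20]
merge_method: passthrough
dtype: bfloat16
```

A config like this runs with `mergekit-yaml config.yml ./merged`, leaving the pruned model in Hugging Face format in the output directory.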
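As a rough check on the VRAM claims in the added text (back-of-the-envelope, weights only, ignoring KV cache and runtime overhead): 5.4B parameters at roughly 8.5 bits per weight (Q8_0) is about 5.7 GB, at roughly 6.6 bits (Q6_K) about 4.5 GB, and at roughly 5.5 bits (Q5_K_M) about 3.7 GB, consistent with the 6 GB and 4 GB figures above.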