Update README.md
README.md CHANGED
@@ -11,8 +11,8 @@ tags:
 # merged
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It is a prune of Meta-Llama-3-8B-Instruct down to 20 layers, or about 5.4B parameters.
-
-
+Mostly, this is a test of pruning & healing an instruct-tuned model.
+This size should allow Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and full-weight fine-tuning ... well, with less VRAM than an 8B model.
 
 ## Merge Details
 ### Merge Method
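
For context, mergekit expresses this kind of layer prune as a `passthrough` merge over a slice of the source model's layers. The snippet below is a sketch of what such a config could look like, not the configuration actually used for this card: the `passthrough` method and the `[0, 20]` layer range are assumptions read off the phrase "a prune ... down to 20 layers".

```yaml
# Hypothetical mergekit config for pruning Meta-Llama-3-8B-Instruct to 20 layers.
# The layer range is an assumption; the card's real config is not shown in this diff.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 20]   # keep the first 20 of the 32 transformer layers
merge_method: passthrough      # copy the sliced weights through unchanged (no blending)
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./merged`. As a sanity check on the 5.4B figure: Llama-3-8B carries roughly 218M parameters per transformer layer plus about 1.05B in its embedding and output matrices, so 20 × 0.218B + 1.05B ≈ 5.4B.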