HaileyStorm committed (verified)
Commit 3221815 · Parent(s): 5e9a153

Update README.md

Files changed (1): README.md (+3 -1)
README.md CHANGED
```diff
@@ -10,8 +10,10 @@ tags:
 ---
 # merged
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It is a prune of Meta-Llama-3-8B-Instruct down to 20 layers, or about 5.4B models.
+This is a "merge" of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+It is a prune of Meta-Llama-3-8B-Instruct down to 20 layers, or about 5.4B parameters.
 Mostly, this is a test of pruning & healing an instruct-tuned model.
+THIS MODEL HAS NOT BEEN HEALED. It is presently unusable. The healed version will be in a different repository.
 This size should allow bf16 inference on 24GB VRAM, Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and fine-tuning ... well, with less VRAM than an 8B model.
 
 ## Merge Details
```
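For context, a layer prune like the one described is typically expressed in mergekit as a `passthrough` merge over layer slices. The sketch below is illustrative only: the commit does not say which 20 of the 32 layers were kept, so the `layer_range` values here are hypothetical, not the ones actually used.

```yaml
# Hypothetical mergekit config for pruning Meta-Llama-3-8B-Instruct
# (32 layers) down to 20 layers. The kept ranges below are an
# assumption for illustration; the real config may differ.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 16]   # keep the first 16 layers
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [28, 32]  # keep the last 4 layers
merge_method: passthrough
dtype: bfloat16
```

Dropping a contiguous block of middle-to-late layers is a common choice in such prunes, but the resulting model generally needs "healing" (continued training) before it is usable, which is exactly the caveat the commit adds.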
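The VRAM claims in the README follow from simple arithmetic on weight storage. A rough sketch (the Q6/Q5 bits-per-weight figures are approximate effective rates for llama.cpp K-quants, and this counts weights only, ignoring KV cache and activations):

```python
# Approximate weight-only memory for a ~5.4B-parameter model
# at several precisions. KV cache and activations add overhead,
# so real usage is somewhat higher than these figures.
PARAMS = 5.4e9

def weight_gb(bits_per_param: float) -> float:
    """Gigabytes needed to store the weights at a given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("bf16", 16), ("Q8", 8), ("Q6 (~6.6 bpw)", 6.6), ("Q5 (~5.5 bpw)", 5.5)]:
    print(f"{name}: ~{weight_gb(bits):.1f} GB")
```

This lines up with the README's claims: ~10.8 GB of weights for bf16 fits in 24GB, ~5.4 GB for Q8 fits in 6GB, and ~3.7 GB for Q5 fits in 4GB.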