HaileyStorm committed (verified)
Commit 3fcafcb · Parent(s): c64ac4f

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -11,8 +11,8 @@ tags:
  # merged
 
  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It is a prune of Meta-Llama-3-8B-Instruct down to 20 layers, or about 5.4B parameters.
- This should allow Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and full-weight fine-tuning (fp16) with a context length of XX and batch size YY on 24GB VRAM.
- It is in dire need of healing through tuning (the purpose of this experiment - can I prune & tune an *instruct* model?).
+ Mostly, this is a test of pruning & healing an instruct-tuned model.
+ This size should allow Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and full-weight fine-tuning ... well, with less VRAM than an 8B model.
 
  ## Merge Details
  ### Merge Method
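For reference, a prune like this is normally expressed as a mergekit `passthrough` merge over layer slices. The config below is a hedged sketch, not the actual config used for this model: the exact `layer_range` values kept are not shown in this commit, so the split ranges here are illustrative placeholders that simply add up to 20 layers.

```yaml
# Hypothetical mergekit config: prune Meta-Llama-3-8B-Instruct (32 layers)
# down to 20 layers via a passthrough merge. The layer_range values are
# placeholders -- the ranges actually used for this model are not shown here.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 16]   # keep the first 16 transformer blocks
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [28, 32]  # keep the last 4 blocks (16 + 4 = 20 total)
merge_method: passthrough
dtype: bfloat16
```

A config like this is run with `mergekit-yaml config.yml ./output-dir`. As a rough sanity check on the VRAM figures above: at ~8.5 bits/weight (Q8_0), 5.4B parameters is about 5.7 GB of weights; Q6_K (~6.6 bits) is about 4.4 GB; Q5_K_M (~5.5 bits) is about 3.7 GB, which is roughly where the 6 GB and 4 GB claims come from, before accounting for the KV cache.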