Envoid committed · Commit b98dac7 · verified · 1 parent: 0223026

Update README.md

Files changed (1): README.md (+18 −3)
README.md CHANGED
@@ -1,3 +1,18 @@
- ---
- license: cc-by-nc-4.0
- ---
+ ---
+ license: cc-by-nc-4.0
+ tags:
+ - not-for-all-audiences
+ ---
+ # ATMa
+
+ *Asymmetrically Tuned Matrix*
+
+ This model is a very mid finetune of [microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct).
+
+ Layers 1 through 15 were finetuned on one private dataset. A LoRA trained on a different but similar, larger dataset was then applied to the entire model with a scaling factor of 1:4.
+
+ The results are mixed and it's hard to find a good use-case for this model.
+
+ All of the original scripts and code have been included in this repo.
+
+ Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe)
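The LoRA-merge step described in the diff can be sketched numerically. This is an assumption-laden illustration, not the repo's actual script: the `merge_lora` helper is hypothetical, the merge rule `W' = W + s * (B @ A)` is the standard LoRA formulation, and reading the stated "1:4" scaling factor as a multiplier of 0.25 is an assumption.

```python
import numpy as np

# Hypothetical sketch (not the repo's code): merge a rank-r LoRA
# adapter into a weight matrix with a fixed scaling factor.
# Assumption: "1:4" scaling means the delta is multiplied by 1/4.

def merge_lora(W, A, B, scale):
    """Return W + scale * (B @ A), the merged weight matrix.

    A is the (r x d_in) down-projection, B the (d_out x r) up-projection.
    """
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
d, r = 16, 4
W = rng.standard_normal((d, d))
A = rng.standard_normal((r, d))   # down-projection
B = rng.standard_normal((d, r))   # up-projection

W_merged = merge_lora(W, A, B, scale=1 / 4)
```

In practice a framework like qlora-pipe handles this internally; the sketch only shows how a 1:4 factor attenuates the adapter's contribution relative to a full-strength merge.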