---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
# ATMa: Asymmetrically Tuned Matrix
This model is a very mid fine-tune of [microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct).
Layers 1 through 15 were fine-tuned on one private dataset. A LoRA was then trained on a different but similar (and larger) dataset and applied to the entire model with a scaling factor of 1:4.
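For illustration only, here is a rough sketch of what that two-stage procedure could look like with transformers + peft. The actual training was done with qlora-pipe and the scripts in this repo; the adapter path, the layer indexing (whether "1 through 15" is 0- or 1-based), and the alpha-halving trick for the 1:4 scaling are assumptions, not the exact method used.

```python
# Illustrative sketch, not the original training code (see the scripts in this repo).
from transformers import AutoModelForCausalLM
from peft import PeftModel, LoraConfig

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-medium-128k-instruct")

# Stage 1: full fine-tune restricted to layers 1-15 by freezing every other parameter.
# Assumes 1-based layer numbers map to indices 1..15 in model.layers.
for name, param in base.named_parameters():
    param.requires_grad = any(f"model.layers.{i}." in name for i in range(1, 16))
# ... run a normal fine-tuning loop on the first private dataset here ...

# Stage 2: apply a LoRA (trained separately on the larger dataset) at 1:4 strength.
# peft scales the LoRA delta by lora_alpha / r, so quartering lora_alpha before
# loading gives an effective 1:4 scaling factor. Paths are hypothetical.
cfg = LoraConfig.from_pretrained("path/to/lora-adapter")
cfg.lora_alpha = cfg.lora_alpha / 4
model = PeftModel.from_pretrained(base, "path/to/lora-adapter", config=cfg)
merged = model.merge_and_unload()
merged.save_pretrained("ATMa-merged")
```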
The results are mixed, and it's hard to find a good use case for this model.
All of the original scripts and code have been included in this repo.
Trained using [qlora-pipe](https://github.com/tdrussell/qlora-pipe).