---
license: apache-2.0
datasets:
  - Open-Orca/SlimOrca
language:
  - en
---

# TeeZee/GALAXY-XB-v1.03

An experiment: can DUS (Depth Up-Scaling) be taken one or more steps further?

## Technical notes

- model v1.03 was fine-tuned on 50k entries from the SlimOrca dataset (a hedged sketch of this step follows the list below)
- 12 layers were removed from each of the two model copies: 4 more than in the original paper, but still 1/4 of all 48 layers, as the original paper prescribes
- the base version of upstage/SOLAR-10.7B-v1.0 was used for the merge
- the merge itself, before any fine-tuning, is just the first step of the DUS paper (see the merge sketch after this list)
- next step: if evaluation shows the model is at least as 'smart' as the base model, fine-tune further to 'recover' the performance lost in the merge
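
A minimal sketch of what the layer-removal merge could look like, assuming the 48-layer SOLAR-10.7B split described above. The actual merge was likely produced with a dedicated merge tool, and the output path `GALAXY-XB-merged` is a placeholder, so treat this as illustrative only:

```python
# Illustrative DUS-style depth up-scaling sketch (not necessarily the exact
# recipe used for this model). Two copies of SOLAR-10.7B (48 layers each) are
# stacked: the top 12 layers are dropped from one copy and the bottom 12 from
# the other, yielding 36 + 36 = 72 layers.
import torch.nn as nn
from transformers import AutoModelForCausalLM

NUM_LAYERS = 48  # SOLAR-10.7B transformer layers
REMOVED = 12     # layers dropped from each copy, per the notes above

bottom = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0")
top = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0")

# Keep layers 0..35 of one copy and layers 12..47 of the other.
kept = list(bottom.model.layers[: NUM_LAYERS - REMOVED]) + \
       list(top.model.layers[REMOVED:])

# Note: a production merge also has to fix per-layer bookkeeping (e.g. cached
# layer indices); this sketch only shows the structural idea.
bottom.model.layers = nn.ModuleList(kept)
bottom.config.num_hidden_layers = len(kept)  # 72
bottom.save_pretrained("GALAXY-XB-merged")   # placeholder output path
```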
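And a hedged sketch of the SlimOrca fine-tuning step, using trl's `SFTTrainer`. The 50k sample size matches the notes above, but the role mapping, seed, output path, and the choice of trainer are assumptions, not the recipe actually used:

```python
# Illustrative SlimOrca fine-tuning sketch (hyperparameters are placeholders).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# 50k entries, matching the notes above.
dataset = (load_dataset("Open-Orca/SlimOrca", split="train")
           .shuffle(seed=42)
           .select(range(50_000)))

def to_messages(example):
    # SlimOrca stores ShareGPT-style turns under "conversations";
    # map them to the "messages" chat format that SFTTrainer understands
    # (assumes the tokenizer provides a chat template).
    roles = {"system": "system", "human": "user", "gpt": "assistant"}
    return {"messages": [{"role": roles[t["from"]], "content": t["value"]}
                         for t in example["conversations"]]}

dataset = dataset.map(to_messages, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="GALAXY-XB-merged",  # the merged checkpoint from the sketch above
    train_dataset=dataset,
    args=SFTConfig(output_dir="GALAXY-XB-v1.03"),
)
trainer.train()
```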

## To evaluate

- model performance after fine-tuning: did it recover the initial performance loss from the merge? (a hedged evaluation sketch follows below)
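
A sketch of how the before/after comparison could be run with EleutherAI's lm-evaluation-harness (v0.4 API); the task list here is illustrative, not the benchmark set actually used:

```python
# Compare the merged + fine-tuned model against the SOLAR base model on a few
# benchmarks; any remaining gap versus the base model indicates loss that the
# fine-tuning has not yet recovered.
from lm_eval import simple_evaluate

for name in ("upstage/SOLAR-10.7B-v1.0", "TeeZee/GALAXY-XB-v1.03"):
    results = simple_evaluate(
        model="hf",
        model_args=f"pretrained={name}",
        tasks=["arc_challenge", "hellaswag"],  # illustrative task choice
    )
    print(name, results["results"])
```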