dwmyoung committed
Commit: d998ee1
Parent: ec0b8c2

Update README.md

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -20,8 +20,11 @@ I have created a new model architecture that does not require pretraining, and t
 Intel/orca_dpo_pairs (DPO)
 
 ### Surgery and Training
+
 viethq188/LeoScorpius-7B-Chat-DPO : 0 ~ 24
+
 upstage/SOLAR-10.7B-Instruct-v1.0 : 10 ~ 48
+
 Total stacking 62 Layers, qlora and dpo.
 
 ### How to Use
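
For context, the "Surgery and Training" lines in this hunk describe stacking decoder layers 0-23 of viethq188/LeoScorpius-7B-Chat-DPO followed by layers 10-47 of upstage/SOLAR-10.7B-Instruct-v1.0 (24 + 38 = 62 layers), with QLoRA and DPO fine-tuning applied afterwards. The snippet below is only a minimal PyTorch/transformers sketch of that kind of layer stacking, not the author's actual procedure; the half-open reading of "0 ~ 24" and "10 ~ 48", the choice of the SOLAR model as the host, and the output path are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the two donor models named in the README (fp16 to keep memory reasonable).
donor_a = AutoModelForCausalLM.from_pretrained(
    "viethq188/LeoScorpius-7B-Chat-DPO", torch_dtype=torch.float16
)
donor_b = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-Instruct-v1.0", torch_dtype=torch.float16
)

# Assumed reading of "0 ~ 24" and "10 ~ 48": half-open ranges, i.e. layers 0-23
# of donor A followed by layers 10-47 of donor B, giving 24 + 38 = 62 layers.
stacked_layers = list(donor_a.model.layers[0:24]) + list(donor_b.model.layers[10:48])

# Reuse donor B as the host model, swap in the stacked layers, and fix the config.
merged = donor_b
merged.model.layers = torch.nn.ModuleList(stacked_layers)
merged.config.num_hidden_layers = len(stacked_layers)

# "stacked-62-layers" is a placeholder output path; the QLoRA + DPO training the
# README mentions would be run on this merged checkpoint as a separate step.
merged.save_pretrained("stacked-62-layers")
```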