athirdpath committed
Commit 56d9833
1 Parent(s): 1e8448a

Update README.md

Files changed (1)
  1. README.md +17 -8
README.md CHANGED
@@ -1,23 +1,32 @@
  ---
  license: cc-by-nc-4.0
  pipeline_tag: text-generation
  ---

- ### Current State

- Well, I've reduced per-step loss by 0.36 (from 1.57 to 1.21) in a third of an epoch. To compare, the (meh) 13b Mistral glue LoRA reduced per-step loss by 0.37 (2.16 to 1.79) over an entire 4 epochs!

- EDIT: First 2 evals both came in at < 1, MUCH better than the 13b attempt. Verdict is that Mistral (or 7Bs in general, I'd guess) can't survive more than one cut, meaningfully. Per-step loss reduced by 0.47 @ 1 epoch.

- EDIT 2: Halfway there. Eval loss < 0.85 for the last 2 evals, promising. Per-step loss down to ~1.07, a reduction of ~33%!

- EDIT 3: 80% done. Curve is greatly flattened, so 3 epochs seems like it was the right call. Eval down to 0.81 and per-step loss down to 0.93. Can't wait to test!
-
- EDIT 4: Done! Testing time.

  ### Dataset

  The 11b glue consists of:
  - The entirety of HF No Robots.
  - The entirety of TinyPixel/orca-mini
- - Enough of the GPT-4 generated Alpaca dataset (randomly chosen) to make it a roughly even three-way split.
 
  ---
  license: cc-by-nc-4.0
+ language:
+ - en
  pipeline_tag: text-generation
  ---
+ <p align="center"><font size="7"> <b>Okay, here we fuckin' go.</b> </font></p>
+ <p align="center"><font size="5"> <b>Time to fire up the ol' dare_ties pod.</b></font></p>
+ <p align="center"><img src="https://iili.io/JzixYiP.png"/></p>
+ <p align="center"><font size="6"><b><a href="https://iili.io/Jzix7WB.png">NSFW - Erotic(?) Writing Example - NSFW</a></b></font></p>
+ <p align="center"><font size="3"> <b>(That's not what it's finetuned for, okay? He's a grower.)</b></font></p>
+
+ ### Training Log

+ 1) Well, I've reduced per-step loss by 0.36 (from 1.57 to 1.21) in a third of an epoch. To compare, the (meh) 13b Mistral glue LoRA reduced per-step loss by 0.37 (2.16 to 1.79) over an entire 4 epochs!

+ 2) First 2 evals both came in at < 1, MUCH better than the 13b attempt. Verdict is that Mistral (or 7Bs in general, I'd guess) can't meaningfully survive more than one cut. Per-step loss reduced by 0.47 @ 1 epoch.

+ 3) Halfway there. Eval loss < 0.85 for the last 2 evals, promising. Per-step loss down to ~1.07, a reduction of ~33%!

+ 4) 80% done. The curve has flattened considerably, so 3 epochs seems like it was the right call. Eval loss down to 0.81 and per-step loss down to 0.93. Can't wait to test!

+ 5) Done! Testing time.

  ### Dataset

  The 11b glue consists of:
  - The entirety of HF No Robots.
  - The entirety of TinyPixel/orca-mini
+ - Enough of the GPT-4 generated Alpaca dataset (randomly chosen) to make it a roughly even three-way split.
+
+ A JSONL file of the dataset is available as a repo.
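
The loss figures in the training log are easy to sanity-check. Below is a small Python sketch that uses only the numbers quoted in the log above (the 1-epoch value is inferred from the stated 0.47 reduction) to reproduce the absolute and relative per-step loss drops:

```python
# Sanity check of the per-step loss numbers quoted in the training log above.
# All values are copied from the log entries; nothing here is re-measured.
start = 1.57
checkpoints = {
    "1/3 epoch":        1.21,  # entry 1
    "1 epoch":          1.10,  # entry 2, inferred from 1.57 - 0.47
    "1.5 epochs":       1.07,  # entry 3
    "~80% of 3 epochs": 0.93,  # entry 4
}

for label, loss in checkpoints.items():
    drop = start - loss
    print(f"{label}: loss {loss:.2f}, drop {drop:.2f} ({drop / start:.0%} relative)")

# The 13b glue LoRA comparison point: 2.16 -> 1.79 over 4 full epochs.
drop_13b = 2.16 - 1.79
print(f"13b LoRA: drop {drop_13b:.2f} ({drop_13b / 2.16:.0%} relative)")
```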
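
For anyone who wants to rebuild a mix like the one described in the Dataset section, here is a minimal sketch. Only TinyPixel/orca-mini is named verbatim in the README; the hub IDs for No Robots and the GPT-4 Alpaca data, the "train" splits, the balancing heuristic, and the output filename are all assumptions, and the prompt formatting actually used for training is not shown.

```python
# Minimal sketch of assembling a roughly even three-way "glue" mix and
# dumping it to JSONL, in the spirit of the Dataset section above.
# Assumed: the hub IDs for No Robots and the GPT-4 Alpaca set, the "train"
# splits, the balancing heuristic, and the output filename.
import json
from datasets import load_dataset

sources = {
    "no_robots":   load_dataset("HuggingFaceH4/no_robots", split="train"),  # assumed ID
    "orca_mini":   load_dataset("TinyPixel/orca-mini", split="train"),
    "alpaca_gpt4": load_dataset("vicgalle/alpaca-gpt4", split="train"),     # assumed ID
}

# Use all of No Robots and orca-mini; randomly subsample Alpaca-GPT4 so the
# three sources end up roughly the same size (one reading of "roughly even").
target = (len(sources["no_robots"]) + len(sources["orca_mini"])) // 2
sources["alpaca_gpt4"] = (
    sources["alpaca_gpt4"]
    .shuffle(seed=42)
    .select(range(min(target, len(sources["alpaca_gpt4"]))))
)

with open("glue_mix.jsonl", "w", encoding="utf-8") as f:
    for name, ds in sources.items():
        for row in ds:
            # Keep the raw fields plus a provenance tag; a real training set
            # would render each row into a single prompt/response format here.
            f.write(json.dumps({"source": name, **row}, ensure_ascii=False) + "\n")
```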