---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
Okay, here we fuckin' go.
Time to fire up the ol' dare_ties pod.
NSFW - Erotic(?) Writing Example - NSFW
(That's not what it's finetuned for, okay? He's a grower.)
### Training Log

1) Well, I've reduced per-step loss by 0.36 (from 1.57 to 1.21) in a third of an epoch. For comparison, the (meh) 13b Mistral glue LoRA reduced per-step loss by only 0.37 (2.16 to 1.79) over an entire 4 epochs!
2) First 2 evals both came in at < 1, MUCH better than the 13b attempt. Verdict is that Mistral (or 7Bs in general, I'd guess) can't meaningfully survive more than one cut. Per-step loss reduced by 0.47 @ 1 epoch.
3) Halfway there. Eval loss < 0.85 for the last 2 evals, promising. Per-step loss down to ~1.07, a reduction of ~33%!
4) 80% done. The curve has flattened out considerably, so 3 epochs seems like it was the right call. Eval down to 0.81 and per-step loss down to 0.93. Can't wait to test!
5) Done! Testing time.

### Dataset

The 11b glue consists of:
- The entirety of HF No Robots.
- The entirety of TinyPixel/orca-mini.
- Enough of the GPT-4 generated Alpaca dataset (randomly chosen) to make it a roughly even three-way split.

A JSONL file of the dataset is available as a repo.
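For reference, here's a minimal sketch of how a roughly even three-way mix like this could be assembled into a JSONL file. The dataset IDs, split names, and subsampling details below are assumptions on my part, not the exact script used:

```python
# Sketch only: builds a roughly even three-way instruction mix and dumps it to JSONL.
# Dataset IDs, split names, and the subsampling strategy are assumptions.
import json
import random

from datasets import load_dataset

no_robots = load_dataset("HuggingFaceH4/no_robots", split="train")
orca_mini = load_dataset("TinyPixel/orca-mini", split="train")
alpaca    = load_dataset("vicgalle/alpaca-gpt4", split="train")

# Randomly subsample the (much larger) Alpaca set so each source
# contributes roughly a third of the final mix.
per_source = max(len(no_robots), len(orca_mini))
alpaca = alpaca.shuffle(seed=42).select(range(min(per_source, len(alpaca))))

# Rows keep their original per-source columns; formatting them into a
# single prompt template is left to the training script.
rows = list(no_robots) + list(orca_mini) + list(alpaca)
random.shuffle(rows)

with open("glue.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```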