UsernameJustAnother committed on
Commit cf3b4f8
1 Parent(s): a3c8a94

Update README.md

Files changed (1): README.md (+18 -9)

README.md CHANGED
@@ -13,8 +13,14 @@ tags:
 - writing
 - experimental
 - long-context
+datasets:
+- kalomaze/Opus_Instruct_25k
+- Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered
+- Sao10K/c2-Logs-Filtered
+- nothingiisreal/Reddit-Dirty-And-WritingPrompts
 ---

+# Marlin v8: The Big Kahuna Update
 <img src="https://cdn-uploads.huggingface.co/production/uploads/662c17b252e194d5d436c708/ULeHz0KITPcS0znTN7gDl.png" width="500" height="500" />

 # Uploaded model
@@ -23,26 +29,29 @@ tags:
 - **License:** apache-2.0
 - **Finetuned from model:** unsloth/Mistral-Nemo-Base-2407

-**Standard disclaimer:** This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9
+**Standard disclaimer:** This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9. Huge props to [nothingiisreal](https://huggingface.co/nothingiisreal) for posting their process and making me think this was even possible for a little fish like me.
+
+The aim here is for a solid RP/storywriting model that will fit in 16GB of VRAM with a decent amount of context (> 16K).

 # New for v8:
 - Fine-tuned on Nemo Base instead of Instruct, because why not?
-- **FULL BORE MODE: ACTIVATE!** 10K-ish records of mostly-human convos and stories, curated by me, trained in ChatML, up from 8K in v6. Specifically:
+- **BIG KAHUNA POWERS: ACTIVATE!** 10K-ish records of mostly-human convos and stories, trained in ChatML, up from 8K in v6. For all of these records I did additional filtering/editing/selection beyond what I think happened in Celeste v1.9, mostly to teach myself some dataset skillz, plus I added more stories. Specifically:
   - 4K records from Reddit Writing Prompts (an equal split of highest-rated sfw & nsfw)
-  - 2K of Claude instruct, lightly curated & de-clauded.
-  - 2K of curated Fallen Skies
+  - 2K of Claude instruct, lightly curated & de-clauded
+  - 2K of curated Falling Through the Skies
   - 2K of curated/lightly de-ministrated C2 chat
 - Trained on a single 80GB A100 from runpod.io with a batch size of 8 (up from 2 on a 40GB A100), so far fewer steps were involved.
+- And remember kids, water is wet and fish are moist.

-I pulled v7 because I honestly don't think it's as good as v6, and don't want folks to get the wrong idea that it's better just because the version number is higher.
+I pulled v7 because I honestly don't think it's as good as v6, and I don't want folks to get the wrong idea that it's better just because the version number is higher. Besides, nothing good ever fires on all _seven_ cylinders.

-Props again to Unsloth.ai for letting me train this on a single A100 with variable (wildly variable) context length.
+Props again to [Daniel](https://huggingface.co/danielhanchen) and [Unsloth](https://huggingface.co/unsloth) for writing the magic that lets me train this on a single A100 with variable (wildly variable) context length.

 Here's what the train/eval loss looked like:

-![image/png](https://cdn-uploads.huggingface.co/production/uploads/662c17b252e194d5d436c708/hUKuy7ht_qObuFNDTVEe9.png)
+<img src="https://cdn-uploads.huggingface.co/production/uploads/662c17b252e194d5d436c708/hUKuy7ht_qObuFNDTVEe9.png" width="800"/>

-I still don't know what makes training loss drop at the end of epoch 1, or why eval loss doesn't drop down to match (it continues to decrease, but slowly).
+I still don't know what makes training loss drop at the end of epoch 1, or why eval loss doesn't drop down to match (it continues to decrease, but slowly). I did say this was experimental, right?

 It was trained with the following settings:

@@ -91,4 +100,4 @@ lr_scheduler_kwargs = {

 This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
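For anyone curious what "trained in ChatML" means in practice, here is a minimal sketch of how one conversation record might be rendered before tokenization. The helper name and the sample turns are illustrative assumptions, not rows from the actual dataset.

```python
# Illustrative only: render one conversation record into ChatML, the format
# the v8 data was trained in. The role names and sample turns are placeholders.

def to_chatml(messages):
    """Render a list of {"role", "content"} dicts as a single ChatML string."""
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

sample = [
    {"role": "system", "content": "You are a collaborative storyteller."},
    {"role": "user", "content": "Write the opening of a heist that goes sideways."},
    {"role": "assistant", "content": "The vault was already open when we arrived..."},
]

print(to_chatml(sample))
# <|im_start|>system
# You are a collaborative storyteller.<|im_end|>
# ...
```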
 
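The card's real hyperparameters live in its settings block (the `lr_scheduler_kwargs` section the last hunk touches). As orientation only, a standard Unsloth + TRL QLoRA run of the kind described generally looks like the sketch below; the LoRA rank, learning rate, epoch count, and toy dataset are placeholder assumptions, not the values used for v8.

```python
# Rough sketch of the Unsloth + TRL fine-tuning flow described in the card.
# All hyperparameters are placeholders; the real settings are in the card's
# own settings block. Requires a CUDA GPU with unsloth, trl, datasets installed.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Mistral-Nemo-Base-2407",  # the base model named above
    max_seq_length=16384,   # placeholder; the card targets >16K usable context
    load_in_4bit=True,      # assumption: QLoRA-style loading to fit one GPU
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                   # placeholder LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Toy stand-in for the ~10K ChatML-formatted records described in the card.
train_ds = Dataset.from_dict({"text": [
    "<|im_start|>user\nTell me a story.<|im_end|>\n"
    "<|im_start|>assistant\nOnce upon a tide...<|im_end|>\n",
]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    dataset_text_field="text",
    max_seq_length=16384,
    args=TrainingArguments(
        per_device_train_batch_size=8,  # matches the 80GB A100 note in the card
        num_train_epochs=2,             # placeholder
        learning_rate=2e-5,             # placeholder
        output_dir="outputs",
    ),
)
trainer.train()
```

Treat this as orientation only: the batch size of 8 and the loss curve come from the card itself, while everything else in the sketch is an assumption.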