Technotech committed
Commit ea74db4 · Parent: da042fb
Update README.md

README.md CHANGED
tags:
- completion
---

# MagicPrompt TinyStories-33M (LoRA)

## Info

Magic prompt completion model trained on a dataset of 70k Stable Diffusion prompts. Base model: TinyStories-33M. Inspired by [MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion).

The model seems pretty decent for 33M params thanks to the TinyStories base, but it clearly lacks much real understanding of what it describes. Still, considering the size, I think the output is respectable. Whether you would use this over a small GPT-2-based model is up to you.

## Examples

Best generation settings I found: `max_new_tokens=40, do_sample=True, temperature=1.2, num_beams=10, no_repeat_ngram_size=2, early_stopping=True, repetition_penalty=1.35, top_k=50, top_p=0.55, eos_token_id=tokenizer.eos_token_id, pad_token_id=0` (there may be better settings).

`no_repeat_ngram_size` is important for making sure the model doesn't repeat phrases (as it is quite small).
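
As a minimal sketch of how these settings plug into `generate`: the base model is assumed to be `roneneldan/TinyStories-33M` on the Hub, and the adapter repo id below is hypothetical (this README does not state either id).

```python
# Minimal generation sketch. Repo ids are assumptions, not from this README.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "roneneldan/TinyStories-33M"                 # assumed base model repo
adapter_id = "Technotech/magicprompt-tinystories-33m"  # hypothetical adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)   # attach the rank-16 LoRA
model.eval()

inputs = tokenizer("A field of flowers, camera shot, 70mm lens,", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=1.2,
    num_beams=10,
    no_repeat_ngram_size=2,   # keeps the small model from looping on phrases
    early_stopping=True,
    repetition_penalty=1.35,
    top_k=50,
    top_p=0.55,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=0,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```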

(Bold text is generated by the model)

"found footage of a ufo **in the forest, by lusax, wlop, greg rutkowski, stanley artgerm, highly detailed, intricate, digital painting, artstation, concept art, smooth**"

"A close shot of a bird in a jungle, **with two legs, with long hair on a tall, long brown body, long white skin, sharp teeth, high bones, digital painting, artstation, concept art, illustration by wlop,**"

"Camera shot of **a strange young girl wearing a cloak, wearing a mask in clothes, with long curly hair, long hair, black eyes, dark skin, white teeth, long brown eyes eyes, big eyes, sharp**"

"A field of flowers, camera shot, 70mm lens, **fantasy, intricate, highly detailed, artstation, concept art, sharp focus, illustration, illustration, artgerm jake daggaws, artgerm and jaggodieie brad**"

## Next steps

- Larger dataset, e.g. [neuralworm/stable-diffusion-discord-prompts](https://huggingface.co/datasets/neuralworm/stable-diffusion-discord-prompts) or [daspartho/stable-diffusion-prompts](https://huggingface.co/datasets/daspartho/stable-diffusion-prompts) (see the loading sketch after this list)
- More epochs
- Instead of going smaller than GPT-2 137M, fine-tune a 1-7B param model
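
As a rough sketch, either candidate dataset can be pulled with the `datasets` library; the split and column names below are assumptions, so check the dataset cards first.

```python
# Rough sketch: load a candidate prompt dataset.
# The "train" split and the prompt column name are assumptions.
from datasets import load_dataset

ds = load_dataset("daspartho/stable-diffusion-prompts", split="train")
print(ds)      # inspect features; the prompt column name may differ
print(ds[0])   # first record
```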

## Training config

- Rank 16 LoRA
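
Only the rank is stated above. As an illustrative sketch with `peft`, everything besides `r=16` (alpha, dropout, and the attention-projection target modules typical of the GPT-Neo-style TinyStories architecture) is an assumed default, not this model's actual config.

```python
# Illustrative LoRA config sketch: only r=16 comes from this README;
# every other value here is an assumed, typical default.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

config = LoraConfig(
    r=16,               # stated: rank-16 LoRA
    lora_alpha=32,      # assumption
    lora_dropout=0.05,  # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],  # assumed GPT-Neo attention projections
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # shows how few params a rank-16 LoRA trains
```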