AlekseyCalvin committed 779c202 (1 parent: 4629025)

Update README.md

Files changed (1):
  1. README.md +11 -5
README.md CHANGED
@@ -25,26 +25,32 @@ pipeline_tag: text-to-image
 library_name: diffusers
 emoji: 🔜
 
-instance_prompt: MAYAK style Constructivist Poster
+instance_prompt: MAYAK style Constructivist Poster art
 
 widget:
+- text: MAYAK style drawing
+  output:
+    url: 1.jpg
+- text: MAYAK style
+  output:
+    url: 2.jpg
 - text: MAYAK style drawing of Osip Mandelshtam reciting /OH, BUT PETERSBURG! NO! IM NOT READY TO DIE! YOU STILL HOLD ALL THE TELEPHONE NUMBERS OF MINE!/
   output:
-    url: 1730641155080__000002800_3.jpg
+    url: 3.jpg
 - text: >-
     MAYAK style art of poet Mandelstam reading /YOU'VE RETURNED HERE, SO SWALLOW THEN, FAST AS YOU MIGHT, ALL THE FISH OIL OF LENINGRAD'S RIVERINE LIGHT!/
   output:
-    url: 1730641089182___1.jpg
+    url: 4.jpg
 
 ---
 <Gallery />
 
 # Mayakovsky Style Soviet Constructivist Posters & Cartoons Flux LoRA – Version 2 – by SOON®
-Trained via Ostris' [ai-toolkit](https://replicate.com/ostris/flux-dev-lora-trainer/train) on 50 high-resolution scans of 1910s/1920s posters & artworks by the great Soviet **poet, artist, & Marxist activist Vladimir Mayakovsky**. <br>
+Trained via Ostris' [ai-toolkit](https://github.com/ostris/ai-toolkit) on 50 high-resolution scans of 1910s/1920s posters & artworks by the great Soviet **poet, artist, & Marxist activist Vladimir Mayakovsky**. <br>
 For this training experiment, we first spent many days rigorously translating the textual elements (slogans, captions, titles, inset poems, speech fragments, etc.), with form/signification/rhymes intact, throughout every image subsequently used for training. <br>
 These translated textographic elements were then placed back into their original visual contexts, using fonts matched to the sources. <br>
 We then manually composed highly detailed, paragraph-long captions covering both the graphic and the textual content of each piece, its layout, and the most intuitive/intended reading of each composition. <br>
-Training ran for 5000 steps at a DiT learning rate of 0.00001, batch size 1, with the adafactor optimizer and the text encoders trained alongside the DiT!<br>
+This repo contains the 2400-step checkpoint and samples. Training ran for 5000 steps at a DiT learning rate of 0.00001, batch size 1, with the adafactor optimizer and the text encoders trained alongside the DiT!<br>
 No synthetic data was used for the training, nor any auto-generated captions! Everything was manually and attentively pre-curated, with deep respect for the sources used. <br>
 
 This is a **rank-64/alpha-64 Constructivist Art & Soviet Satirical Cartoon LoRA for Stable Diffusion 3.5 Large** <br>
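
A note on usage: the card declares `library_name: diffusers` and describes a rank-64/alpha-64 LoRA for Stable Diffusion 3.5 Large, so the checkpoint should load through diffusers' standard LoRA API. The sketch below is illustrative rather than part of the commit: the LoRA repo id and the sample prompt are placeholders, while the base model id and the `MAYAK style` trigger phrase come from the card.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Base model named in the card; bfloat16 keeps the large DiT's memory use manageable.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Placeholder repo id for this LoRA checkpoint (the commit does not specify one).
pipe.load_lora_weights("AlekseyCalvin/mayak-sd35-lora")

# The card's instance prompt ("MAYAK style ...") acts as the trigger phrase.
image = pipe(
    "MAYAK style Constructivist Poster art of a poet addressing a crowd",
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("mayak_sample.png")
```

If a lighter application of the style is wanted, the adapter weight can be reduced after loading, e.g. via `pipe.set_adapters(...)` with a weight below 1.0 (requires diffusers' PEFT backend).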