breakcore2 committed on
Commit 2741780
1 Parent(s): f702d08

amber wd1.5

waifu_diffusion_1.5/amber_(genshin_impact)/test_1_dim_1_9000_steps/aesthetic_model_grid.png ADDED

Git LFS Details

  • SHA256: 09cebbbfb6807d55a87cf7c81929a0ed3e011ac02f05a09ad5b58a998f5ca778
  • Pointer size: 132 Bytes
  • Size of remote file: 5.73 MB
waifu_diffusion_1.5/amber_(genshin_impact)/test_1_dim_1_9000_steps/amber_dim1_9000-last.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b42cb708563f9341860af62ee49e3595acfca2ef7bed393b9c22bfb24fb562f
+ size 1869387
waifu_diffusion_1.5/amber_(genshin_impact)/test_1_dim_1_9000_steps/base_model_grid.png ADDED

Git LFS Details

  • SHA256: 5ff9df3cf6132da45648e75fb287883b71498042dc35e213a3a062b0a3ce1660
  • Pointer size: 132 Bytes
  • Size of remote file: 7.07 MB
waifu_diffusion_1.5/notes.md ADDED
@@ -0,0 +1,46 @@
+ # wd 1.5 beta2 lora notes
+
+ ## Base model and model transfer
+ Training on the base version of 1.5 beta 2 allows for decent transfer over to the 1.5 beta 2 aesthetic version.
+ There is some obvious loss of detail, but the results far exceed what was seen when inferencing across beta1, replicant, or subtly.
+
+ ## Dim
+ Dim 1 has been tested so far; a high-level understanding of the character is reached, but finer details such as the crest pattern and bow pattern on Amber are off or missing.
+ Higher-dim testing is underway to investigate.
+
+ ## Steps
+ So far, at least 4000 steps are needed before improvements to the character start slowing down, but improvements are still visible 7000 steps in.
+ New parameter settings may help drive the step count down. As it currently stands, this is an enormous time commitment to train a character compared to 1.X models, which do fine at 1000 steps.
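Step counts here follow the usual kohya-style arithmetic: images × repeats per epoch, divided by batch size, times epochs. A minimal sketch; the 233-image, 4-repeat dataset is a made-up example that happens to reproduce the 9320-step dim-1 run (batch size 2, 20 epochs come from the settings later in these notes):

```python
import math

def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Steps per epoch is ceil(images * repeats / batch_size), summed over epochs."""
    return math.ceil(num_images * repeats / batch_size) * epochs

# Hypothetical dataset: 233 images at 4 repeats, batch size 2, 20 epochs.
print(total_steps(233, 4, 20, 2))  # -> 9320
```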
+
+ ## Resolution
+ All testing is at 768, as the base model is trained at high resolution.
+
+ ## Settings
+
+ Dim 1 settings, 9320 steps. Decent results start around 4500 steps, with quality increasing all the way to the end. This took about 5 hours on a 3060...
+ ```
+ $learning_rate=3e-4
+ $text_encoder_lr=2.5e-5
+ $unet_lr=5e-5
+ $lr_warmup_ratio = 0.20
+ $train_batch_size = 2
+ $num_epochs = 20
+ $save_every_n_epochs=3
+ $scheduler="cosine_with_restarts"
+ $network_dim=1
+ $network_alpha=1
+ ```
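The `$`-prefixed variables above look like inputs to a PowerShell wrapper around kohya-ss sd-scripts. Assuming that (the wrapper itself is not shown in these notes), the dim-1 values would roughly map onto a `train_network.py` invocation like this sketch; model and dataset paths are placeholders, and `--lr_warmup_steps` is the 20% warmup ratio applied to the 9320 total steps:

```shell
# Hedged sketch, not the author's actual command: assumes kohya-ss sd-scripts'
# train_network.py. Paths are placeholders; lr_warmup_steps = 0.20 * 9320 = 1864.
accelerate launch train_network.py \
  --pretrained_model_name_or_path "wd15-beta2-base.safetensors" \
  --train_data_dir "./amber_dataset" \
  --output_dir "./output" \
  --resolution 768 \
  --network_module networks.lora \
  --network_dim 1 \
  --network_alpha 1 \
  --learning_rate 3e-4 \
  --text_encoder_lr 2.5e-5 \
  --unet_lr 5e-5 \
  --lr_scheduler cosine_with_restarts \
  --lr_warmup_steps 1864 \
  --train_batch_size 2 \
  --max_train_epochs 20 \
  --save_every_n_epochs 3
```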
+
+ Dim 16 settings. Warmup reduced from 20% to 10% of total steps, considering how many steps we are taking now.
+ ```
+ $learning_rate=5e-5
+ $text_encoder_lr=2.5e-5
+ $unet_lr=5e-5
+ $lr_warmup_ratio = 0.1
+ $train_batch_size = 2
+ $num_epochs = 20
+ $save_every_n_epochs=3
+ $scheduler="cosine_with_restarts"
+ $network_dim=16
+ $network_alpha=16
+ ```