breakcore2 committed on
Commit
8b38c4e
1 Parent(s): 71fb72a

d-adaptation notes

Files changed (2)
  1. .gitattributes +3 -0
  2. d-adaptation/notes.md +21 -0
.gitattributes CHANGED
@@ -32,3 +32,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
d-adaptation/notes.md ADDED
@@ -0,0 +1,21 @@
+ # D-Adaptation Experiment Notes
+
+ ## Learning rates
+ UNet 1.0, text encoder 0.5, as seen in this thread: https://twitter.com/kohya_tech/status/1627194651034943490?cxt=HHwWhIDUtb66-pQtAAAA
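For reference, the same split can be expressed as per-parameter-group multipliers on the D-Adaptation optimizer. A minimal sketch, assuming the facebookresearch `dadaptation` package, with toy `Linear` modules standing in for the real UNet and text encoder:

```python
# Minimal sketch, not the exact training-script invocation. With D-Adaptation,
# each group's `lr` acts as a multiplier on the automatically estimated step
# size d, so "Unet 1, text 0.5" becomes lr=1.0 and lr=0.5 below.
import torch
import dadaptation

unet_params = torch.nn.Linear(8, 8).parameters()          # stand-in for UNet LoRA params
text_encoder_params = torch.nn.Linear(8, 8).parameters()  # stand-in for text encoder LoRA params

optimizer = dadaptation.DAdaptAdam(
    [
        {"params": unet_params, "lr": 1.0},          # UNet multiplier
        {"params": text_encoder_params, "lr": 0.5},  # text encoder multiplier
    ]
)
```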
+
+ ## Alpha
+ Alpha=Dim was recommended in the GitHub thread https://github.com/kohya-ss/sd-scripts/issues/181
+ I have tried dim 8, alpha 1 with both success and failure. Both Amber and Castoria use alpha=1 and seem to work fine.
+ UMP ends up with image generations that look like a single brown square; still testing whether alpha is related to this issue.
+ As noted in the same GitHub issue, the alpha/rank scaling shrinks the gradient update, which in turn makes D-Adaptation boost the learning rate. That overcorrection could be why training goes bad.
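A toy illustration of that interaction, with made-up shapes and a trivial loss (not the sd-scripts code): the gradient reaching the LoRA matrices scales with alpha/rank, and that shrinkage is what D-Adaptation reacts to by raising its step-size estimate.

```python
# The LoRA delta is scaled by alpha/rank, so the gradient into the LoRA
# matrices shrinks by the same factor; D-Adaptation then grows d to compensate.
import torch

def lora_grad_norm(alpha: float, rank: int = 8, features: int = 64) -> float:
    # `rank` is what the notes call "dim" (the LoRA network dimension).
    torch.manual_seed(0)
    x = torch.randn(4, features)
    down = torch.nn.Parameter(torch.randn(features, rank) * 0.1)
    up = torch.nn.Parameter(torch.randn(rank, features) * 0.1)
    out = (x @ down @ up) * (alpha / rank)   # LoRA contribution, scaled by alpha/rank
    out.sum().backward()                     # fixed upstream gradient of 1
    return down.grad.norm().item()

# alpha = rank keeps the scale at 1; alpha = 1 shrinks the gradient 8x here,
# which D-Adaptation counters by boosting its learning-rate estimate.
print(lora_grad_norm(alpha=8.0), lora_grad_norm(alpha=1.0))
```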
+
+ ## Dim
+ Dim 128 shows some local noisy patterns. Re-ranking the model from 128 down to a lower dim doesn't get rid of them. Converting the weights of the last up block in the UNet does, but it also causes a noticeable change in the generated character. You could of course reduce the last up block by a smaller amount instead.
+ Lower dims show good performance. A much larger test is needed to check accuracy differences between them.
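A rough sketch of what re-ranking a single LoRA weight pair to a lower dim could look like, assuming the usual truncated-SVD approach (illustrative shapes only, not the actual resize script):

```python
# Reduce delta_W = up @ down to a lower rank with the best low-rank fit.
import torch

def rerank_pair(up: torch.Tensor, down: torch.Tensor, new_rank: int):
    delta = up @ down                                   # full weight update for this layer
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    new_up = U[:, :new_rank] * S[:new_rank].sqrt()      # split singular values across factors
    new_down = S[:new_rank].sqrt().unsqueeze(1) * Vh[:new_rank, :]
    return new_up, new_down

# Example: take a dim-128 layer down to dim 32; the last up block could use a
# larger new_rank than the rest to soften the change in the character.
up, down = torch.randn(320, 128), torch.randn(128, 320)
new_up, new_down = rerank_pair(up, down, new_rank=32)
print(new_up.shape, new_down.shape)  # torch.Size([320, 32]) torch.Size([32, 320])
```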
+
+ ## Resolution
+ To be tested.
+
+ ## 2.X models
+ To be tested.
+ Candidate base models: wd1.5, replicant, subtly