1lint committed on
Commit 76cfa9c
1 Parent(s): aa92894

initial commit

A1111_webui_weights/anime_styler-dreamshaper-no_hint-v0.1.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef43d7a0f131ccec70a0a99ccc3db0f7f58e88ddb44050a80fd0672a60337d6b
+ size 722596338
A1111_webui_weights/anime_styler-dreamshaper-v0.1.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26b896ee1b5ae1321f055bbd712cfa2b7eb90d7096c80b64973ee33119a89204
+ size 722596338
README.md CHANGED
@@ -1,3 +1,43 @@
  ---
  license: openrail
  ---
+
+ ## [Try Style Controlnet with A1111 WebUI](https://github.com/1lint/style_controlnet)
+
+ Use the anime styling controlnet with the A1111 Stable Diffusion WebUI by downloading the weights from the `A1111_webui_weights` folder inside this repository. These weights can be used directly with the existing [A1111 Webui Controlnet Extension](https://github.com/Mikubill/sd-webui-controlnet); see this reddit post for [instructions](https://www.reddit.com/r/StableDiffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/) on using the controlnet extension.
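+
+ For reference, a minimal sketch for fetching the standard variant with `huggingface_hub` and dropping it into the extension's model folder might look like the following (the repo id is a placeholder for this repository's Hub id, and the target path assumes a default A1111 install with the extension already installed):
+
+ ```python
+ import shutil
+ from huggingface_hub import hf_hub_download
+
+ # Download the standard variant from this repository (repo id is a placeholder).
+ path = hf_hub_download(
+     repo_id="<this-repo-id>",
+     filename="A1111_webui_weights/anime_styler-dreamshaper-v0.1.safetensors",
+ )
+
+ # Copy it into the controlnet extension's model folder (default A1111 layout assumed).
+ shutil.copy(path, "stable-diffusion-webui/extensions/sd-webui-controlnet/models/")
+ ```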
+
+ For each anime controlnet there is a standard variant and a no-hint variant.
+
+ ### TLDR: Download the standard variant (i.e. `anime_styler-dreamshaper-v0.1.safetensors`). Pass a black square as the controlnet conditioning image if you only want to add anime style guidance to the image generation, or pass an anime image with canny preprocessing if you want to add both anime style and canny guidance. See the `assets` folder for example hints, and see below for a more detailed explanation.
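+
+ If you want to prepare the hint images yourself outside the WebUI, a minimal OpenCV sketch along these lines should work (file names and Canny thresholds are just placeholders; the extension's built-in canny preprocessor accomplishes the same thing):
+
+ ```python
+ import cv2
+ import numpy as np
+
+ # Black square "no-op" hint: a 512x512 image of zeros, matching the zero hint
+ # used for the style-only examples below.
+ black = np.zeros((512, 512, 3), dtype=np.uint8)
+ cv2.imwrite("black.png", black)
+
+ # Canny hint from an anime reference image (file name is just an example).
+ gray = cv2.imread("anime_reference.png", cv2.IMREAD_GRAYSCALE)
+ edges = cv2.Canny(gray, 100, 200)  # low/high thresholds; tune as needed
+ cv2.imwrite("canny_hint.png", edges)
+ ```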
+ _________________________________________________
+
+ ### Generated using `anime_styler-dreamshaper-v0.1.safetensors` controlnet with canny hint
+ ![](./assets/hint_grid.png)
+ _________________________________________________
+ ### Generated using `anime_styler-dreamshaper-v0.1.safetensors` controlnet with black square (numpy array of zeros) as hint
+ ![](./assets/zerohint_grid.png)
+ _________________________________________________
+ ### Generated using `anime_styler-dreamshaper-no_hint-v0.1.safetensors` controlnet with no hint passed (in the A1111 webui you can input any image as the hint; the signal will always be zeroed out)
+ ![](./assets/nohint_grid.png)
+ _________________________________________________
+
+ ### Grid from left to right: Controlnet weight 0.0 (base model output), Controlnet weight 0.5, Controlnet weight 1.0, Controlnet hint (white means no controlnet hint passed)
+
+ Generation settings for examples: Prompt: "1girl, blue eyes", Seed: 2048, all other settings are A1111 Webui defaults.
+
+ Base model used for examples: [Dreamshaper](https://civitai.com/models/4384/dreamshaper)
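+
+ If you would rather experiment outside the WebUI, a rough diffusers sketch for loading the controlnet alongside a base model could look like the block below. This is untested and makes several assumptions: that `ControlNetModel.from_single_file` accepts this checkpoint format, and that `Lykon/DreamShaper` is a suitable Hub mirror of the Dreamshaper base model; results will not exactly match the WebUI grids because sampler and seed handling differ.
+
+ ```python
+ import numpy as np
+ import torch
+ from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
+ from PIL import Image
+
+ # Load the style controlnet from the single safetensors file
+ # (assumed compatible with diffusers' single-file loader).
+ controlnet = ControlNetModel.from_single_file(
+     "A1111_webui_weights/anime_styler-dreamshaper-v0.1.safetensors"
+ )
+
+ # "Lykon/DreamShaper" is assumed here as a Hub mirror of the Dreamshaper base model.
+ pipe = StableDiffusionControlNetPipeline.from_pretrained(
+     "Lykon/DreamShaper", controlnet=controlnet
+ )
+ # pipe = pipe.to("cuda")  # optional, if a GPU is available
+
+ # Black square hint, as in the style-only examples above.
+ hint = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))
+
+ image = pipe(
+     "1girl, blue eyes",
+     image=hint,
+     controlnet_conditioning_scale=1.0,  # the "controlnet weight" in the grids above
+     generator=torch.Generator().manual_seed(2048),
+ ).images[0]
+ image.save("sample.png")
+ ```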
+ _________________________________________________
+
+ ## Details
+
+ Unlike the original controlnets, these controlnets were initialized from a distinct UNet (`andite/anything-v4.5`) and predominantly trained without any controlnet conditioning image, on a dataset (`lint/anybooru`) distinct from the base model's. Then the main controlnet weights were frozen, the input hint block weights were added back in, and those blocks were trained on the same dataset, using canny preprocessing to generate the controlnet conditioning image.
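+
+ In diffusers terms, the second training stage corresponds roughly to the sketch below. This is not the exact training code; the module name `controlnet_cond_embedding` follows diffusers' `ControlNetModel` layout and is an assumption about which weights correspond to the "input hint block", and the learning rate is a placeholder.
+
+ ```python
+ import torch
+ from diffusers import ControlNetModel, UNet2DConditionModel
+
+ # Initialize the controlnet from the anything-v4.5 UNet instead of the base model's UNet.
+ unet = UNet2DConditionModel.from_pretrained("andite/anything-v4.5", subfolder="unet")
+ controlnet = ControlNetModel.from_unet(unet)
+
+ # Second stage: freeze the main controlnet weights and train only the
+ # conditioning-image embedding ("input hint block") on canny hints.
+ controlnet.requires_grad_(False)
+ controlnet.controlnet_cond_embedding.requires_grad_(True)
+
+ trainable = [p for p in controlnet.parameters() if p.requires_grad]
+ optimizer = torch.optim.AdamW(trainable, lr=1e-5)  # learning rate is a placeholder
+ ```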
+
+ I originally trained the anime style controlnets without any controlnet conditioning image, so that the controlnet would focus on adding anime style rather than structure to the image. I have those weights saved at https://huggingface.co/lint/anime_styler/tree/main/A1111_webui_weights; however, they need to be used with my [fork](https://github.com/1lint/sd-webui-controlnet) of the controlnet extension, which has very minor changes that allow the user to load the controlnet without the input hint block weights and pass None as a valid controlnet "conditioning".
+
+ Recently I added back in the input hint processing module and trained only the controlnet input hint blocks on canny image generation. So the models in this repository are now just like regular controlnets, except for having a different initialization and training process. They can be used just like a regular controlnet, but the vast majority of the weights were trained on adding anime style, with just the input hint blocks trained on using the controlnet conditioning image. Though it seems to work alright in my limited testing so far, expect the canny image guidance to be weak, so combine it with the original canny controlnet as needed.
+
+ Since the main controlnet weights were trained without any canny image conditioning, they can be (and were intended to be) used without any controlnet conditioning image. However, the existing A1111 Controlnet Extension expects the user to always pass a controlnet conditioning image, and otherwise triggers an error. You can instead pass a black square as the "conditioning image"; this adds some unexpected random noise to the image due to the input hint block `bias` weights, but the noise is small enough that the controlnet still appears to work.
+
+ The no-hint variant controlnets (i.e. `anime_styler-dreamshaper-no_hint-v0.1.safetensors`) have the input hint block weights zeroed out, so the user can pass any controlnet conditioning image without introducing any noise into the image generation process. This matches how the anime controlnet weights were originally trained to be used: without any controlnet conditioning image.
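+
+ Producing (or sanity-checking) such a no-hint variant is roughly equivalent to zeroing those weights in the state dict. A sketch, assuming the keys follow the original ControlNet naming (`...input_hint_block...`), and not necessarily how the released files were made:
+
+ ```python
+ import torch
+ from safetensors.torch import load_file, save_file
+
+ # Zero every tensor belonging to the input hint block (key naming is an assumption).
+ state_dict = load_file("anime_styler-dreamshaper-v0.1.safetensors")
+ for key in state_dict:
+     if "input_hint_block" in key:
+         state_dict[key] = torch.zeros_like(state_dict[key])
+
+ save_file(state_dict, "anime_styler-dreamshaper-no_hint-v0.1.safetensors")
+ ```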
+
+ Right now the controlnet prompt cannot be set separately from the base prompt in the A1111 extension, but I plan to add that feature later.
assets/black.png ADDED
assets/hint.png ADDED
assets/hint_grid.png ADDED
assets/nohint_grid.png ADDED
assets/zerohint_grid.png ADDED