---
license: openrail
---
## [Try Style Controlnet with A1111 WebUI](https://github.com/1lint/style_controlnet)
Use the anime styling controlnet with the A1111 Stable Diffusion WebUI by downloading the weights from the `A1111_webui_weights` folder inside this repository. These weights work directly with the existing [A1111 Webui Controlnet Extension](https://github.com/Mikubill/sd-webui-controlnet); see this Reddit post for [instructions](https://www.reddit.com/r/StableDiffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/) on using the controlnet extension.
For each anime controlnet there is a standard variant and a no-hint variant.
### TLDR: Download the standard variant (e.g. `anime_styler-dreamshaper-v0.1.safetensors`). Pass a black square as the controlnet conditioning image if you only want to add anime style guidance to image generation, or pass an anime image with canny preprocessing if you want to add both anime style and canny guidance. See the `assets` folder for example hints, and the sections below for a more detailed explanation.
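The black-square hint mentioned above is simply an all-zero image array at the generation resolution. A minimal NumPy sketch (the 512x512 size is an assumption; match it to your actual generation resolution):

```python
import numpy as np

def make_black_hint(width: int = 512, height: int = 512) -> np.ndarray:
    """Build an all-zero RGB image to pass as the controlnet conditioning
    image when only anime style guidance (no structural hint) is wanted."""
    return np.zeros((height, width, 3), dtype=np.uint8)

hint = make_black_hint()
print(hint.shape, int(hint.max()))  # (512, 512, 3) 0
```

Save this array as a PNG (or draw a black square in any editor) and feed it to the extension as the conditioning image.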
_________________________________________________
### Generated using `anime_styler-dreamshaper-v0.1.safetensors` controlnet with canny hint
![](./assets/hint_grid.png)
_________________________________________________
### Generated using `anime_styler-dreamshaper-v0.1.safetensors` controlnet with black square (numpy array of zeros) as hint
![](./assets/zerohint_grid.png)
_________________________________________________
### Generated using `anime_styler-dreamshaper-no_hint-v0.1.safetensors` controlnet with no hint passed (in the A1111 WebUI you can pass any image as the hint; its signal is always zeroed out)
![](./assets/nohint_grid.png)
_________________________________________________
### Grid from left to right: Controlnet weight 0.0 (base model output), Controlnet weight 0.5, Controlnet weight 1.0, Controlnet hint (white means no controlnet hint passed)
Generation settings for examples: Prompt: "1girl, blue eyes", Seed: 2048; all other settings are A1111 WebUI defaults.
Base model used for examples: [Dreamshaper](https://civitai.com/models/4384/dreamshaper)
_________________________________________________
## Details
Unlike the original controlnets, these controlnets were initialized from a distinct UNet (`andite/anything-v4.5`) and predominantly trained, without any controlnet conditioning image, on a dataset (`lint/anybooru`) distinct from the base model's. The main controlnet weights were then frozen, the input hint block weights were added back in, and those blocks were trained on the same dataset using canny edge preprocessing to generate the controlnet conditioning image.
I originally trained the anime style controlnets without any controlnet conditioning image, so that the controlnet would focus on adding anime style rather than structure to the image. I have these weights saved at https://huggingface.co/lint/anime_styler/tree/main/A1111_webui_weights, however they need to be used with my [fork](https://github.com/1lint/sd-webui-controlnet) of the controlnet extension, which has very minor changes that allow the user to load the controlnet without the input hint block weights and pass None as a valid controlnet "conditioning".
Recently I added back in the input hint processing module and trained only the controlnet input hint blocks on canny image generation. So the models in this repository are now just like regular controlnets, except for having a different initialization and training process. They can be used like a regular controlnet, but since the vast majority of the weights were trained on adding anime style, with only the input hint blocks trained on using the conditioning image, expect the canny image guidance to be weak. It seems to work reasonably well in my limited testing so far, but combine with the original canny controlnet as needed.
Since the main controlnet weights were trained without any canny image conditioning, they can be (and were intended to be) used without any controlnet conditioning image. However, the existing A1111 Controlnet Extension expects the user to always pass a conditioning image and raises an error otherwise. As a workaround, you can pass a black square as the "conditioning image". This adds a small amount of unexpected noise to the image due to the input hint block `bias` weights, but the noise is small enough that the controlnet still appears to "work".
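The residual noise from a black square comes from the bias terms: with an all-zero input, a layer's weights contribute nothing and only the bias passes through. A toy NumPy sketch of this (the layer is modeled as a 1x1 convolution, i.e. a matrix multiply; the shapes and values are illustrative, not the real model's):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))     # hypothetical 1x1-conv weights (out_ch x in_ch)
b = rng.normal(size=8) * 1e-2   # small bias values, nonzero in trained weights

zero_hint = np.zeros(3)         # one black pixel from the conditioning image
out = W @ zero_hint + b         # weights see zeros, so only the bias remains

print(np.allclose(out, b))      # True: the bias "leaks" through regardless
```

This is why passing a black square is not quite a no-op, and why the no-hint variants below exist.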
The no-hint variant controlnets (i.e. `anime_styler-dreamshaper-no_hint-v0.1.safetensors`) have the input hint block weights zeroed out, so the user can pass any controlnet conditioning image without introducing any noise into the image generation process. This matches how the anime controlnet weights were originally trained to be used: without any controlnet conditioning image.
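Zeroing the input hint blocks guarantees the conditioning image contributes exactly nothing, bias included. A sketch of the idea on a toy state dict (the `input_hint_block` key prefix and tensor shapes are assumptions for illustration, not the model's actual key names):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for a controlnet state dict loaded from a .safetensors file.
state = {
    "input_hint_block.0.weight": rng.normal(size=(8, 3)),
    "input_hint_block.0.bias": rng.normal(size=8),
    "middle_block.0.weight": rng.normal(size=(8, 8)),  # left untouched
}

# Zero every input hint block tensor, weights and biases alike.
for key in state:
    if key.startswith("input_hint_block"):
        state[key] = np.zeros_like(state[key])

# Now even a non-black hint produces exactly zero signal from these blocks.
hint = np.ones(3)
out = state["input_hint_block.0.weight"] @ hint + state["input_hint_block.0.bias"]
print(np.allclose(out, 0.0))  # True
```

The same loop applied to a real checkpoint (e.g. via `safetensors`) would produce a no-hint variant from a standard one.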
Right now the controlnet prompt cannot be set separately from the base prompt in the A1111 extension, but I plan to add that feature later.