Commit 96ccd92
Parent(s): a15fc84
Update readme
README.md CHANGED
https://huggingface.co/thomaseding/pixelnet

---
license: creativeml-openrail-m
---

# PixelNet (Thomas Eding)

### About:

PixelNet is a ControlNet model for Stable Diffusion.

It takes a checkerboard image as input, which is used to control where logical pixels are to be placed.

This is currently an experimental proof of concept. I trained it on around 2000 pixel-art/pixelated images that I generated with Stable Diffusion (with a lot of cleanup and manual curation). The model is not very good, but it does work on grids of up to about 64 checker "pixels" per side for square generations. I did find that a 128x64 pattern still seemed to work moderately well for a 1024x512 image.

The model works best with the "Balanced" ControlNet setting. Try using a "Control Weight" of 1 or a little higher.

"ControlNet Is More Important" seems to require a heavy "Control Weight" setting to have an effect. Try using a "Control Weight" of 2.

Smaller checker grids tend to perform worse (e.g. a 5x5 grid vs. a 32x32 grid).
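
As a rough illustration of the sizes mentioned above (about 8 generated pixels per checker cell, topping out around 64 cells per 512-pixel side), here is a small sketch of the arithmetic. The helper below is hypothetical and not part of this repository.

```python
# Hypothetical helper illustrating the grid-size arithmetic described above;
# it is not part of this repository.
def checker_dims(width, height, pixels_per_cell=8):
    """Approximate (cols, rows) checker grid for a given generation size."""
    return width // pixels_per_cell, height // pixels_per_cell

print(checker_dims(512, 512))   # -> (64, 64)
print(checker_dims(1024, 512))  # -> (128, 64)
```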

### Usage:

To install, copy the `.safetensors` and `.yaml` files to your Automatic1111 ControlNet extension's model directory (e.g. `stable-diffusion-webui/extensions/sd-webui-controlnet/models`). Completely restart the Automatic1111 server after doing this and then refresh the web page.
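
If you prefer to script the copy step, a minimal sketch follows. The source directory, destination path, and glob patterns are assumptions; adjust them to where you downloaded the files and where your webui lives.

```python
# Sketch of the install step; all paths and file-name patterns here are
# assumptions to adapt to your own setup.
import shutil
from pathlib import Path

downloads = Path.home() / "Downloads"  # where the .safetensors/.yaml were saved
models_dir = Path("stable-diffusion-webui/extensions/sd-webui-controlnet/models")

for f in list(downloads.glob("*.safetensors")) + list(downloads.glob("*.yaml")):
    shutil.copy2(f, models_dir / f.name)
    print(f"copied {f.name} to {models_dir}")
```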

There is no preprocessor. Instead, supply a black and white checkerboard image as the control input. Examples are in the `example-control-images` directory of this repository (https://huggingface.co/thomaseding/pixelnet/tree/main/example-control-images).

The script `gen_checker.py` (https://huggingface.co/thomaseding/pixelnet/blob/main/gen_checker.py) can be used to generate checkerboard images of arbitrary sizes. Example: `python gen_checker.py --upscale-dims 512x512 --output-file 70x70.png --dims 70x70`
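
To give an idea of what such a script does, here is a minimal sketch of a checkerboard generator using the same flags as the example command. It is an illustration, not the actual `gen_checker.py`.

```python
# Minimal sketch of a checkerboard generator (an illustration, not the actual
# gen_checker.py). It draws a black/white grid of --dims cells, then
# nearest-neighbor upscales it to --upscale-dims so every cell stays crisp.
import argparse
from PIL import Image

def parse_dims(text):
    w, h = text.lower().split("x")
    return int(w), int(h)

def make_checker(cols, rows):
    img = Image.new("L", (cols, rows))
    img.putdata([255 if (x + y) % 2 == 0 else 0
                 for y in range(rows) for x in range(cols)])
    return img

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--dims", required=True, help="checker grid size, e.g. 70x70")
    parser.add_argument("--upscale-dims", required=True, help="output image size, e.g. 512x512")
    parser.add_argument("--output-file", required=True)
    args = parser.parse_args()

    grid = make_checker(*parse_dims(args.dims))
    grid.resize(parse_dims(args.upscale_dims), Image.NEAREST).save(args.output_file)
```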




### FAQ:

Q: Why is this needed? Can't I use a post-processor to downscale the image?

A: From my experience, SD has a hard time creating genuine pixel art (even with dedicated base models and LoRAs): logical pixel sizes are mismatched, curves are smoothed, and what looks like a straight line at a glance may bend around. This can cause post-processors to produce artifacts when quantization rounds a pixel to a position one pixel off in some direction. This model is intended to help fix that.

Q: Should I use this model with a post-processor?

A: Yes, I still recommend you do post-processing to clean up the image. This model is not perfect and will still have artifacts.
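
A minimal sketch of the kind of post-processing meant here, assuming PIL and that you know the checker grid size you generated with; this is not a script from this repository.

```python
# Sketch of a simple post-processing pass (not part of this repository):
# snap the generated image to its logical pixel grid by downscaling to one
# sample per checker cell with nearest-neighbor, then upscaling back crisply.
from PIL import Image

def snap_to_grid(path_in, path_out, cols, rows):
    img = Image.open(path_in)
    small = img.resize((cols, rows), Image.NEAREST)  # one sample per logical pixel
    small.resize(img.size, Image.NEAREST).save(path_out)

# e.g. a 512x512 generation made with a 64x64 checkerboard control image
snap_to_grid("sample.png", "sample-snapped.png", cols=64, rows=64)
```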

Q: Will there be a better trained model of this in the future?

A: I hope so. I will need to curate a much larger and higher-quality dataset, which might take me a long time. Regardless, I plan on making the control more faithful to the control image and generalizing to more than just checkerboards.

### Sample Outputs:


