Adapter committed on
Commit 35ca69c · 1 Parent(s): bc8cd88

Update README.md

Files changed (1)
  1. README.md +30 -22
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
 
 T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.
 
- This checkpoint provides conditioning on canny for the StableDiffusionXL checkpoint.
+ This checkpoint provides conditioning on depth for the StableDiffusionXL checkpoint.
 
 ## Model Details
 - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
@@ -35,10 +35,12 @@ This checkpoint provides conditioning on canny for the StableDiffusionXL checkpoint.
 
 | Model Name | Control Image Overview | Control Image Example | Generated Image Example |
 |---|---|---|---|
- |[Adapter/t2iadapter_canny_sdxlv1](https://huggingface.co/Adapter/t2iadapter_canny_sdxlv1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href=""><img width="64" style="margin:0;padding:0;" src=""/></a>|<a href=""><img width="64" src=""/></a>|
- |[Adapter/t2iadapter_sketch_sdxlv1](https://huggingface.co/Adapter/t2iadapter_sketch_sdxlv1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href=""><img width="64" style="margin:0;padding:0;" src=""/></a>|<a href=""><img width="64" src=""/></a>|
- |[Adapter/t2iadapter_depth_sdxlv1](https://huggingface.co/Adapter/t2iadapter_depth_sdxlv1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href=""><img width="64" src=""/></a>|<a href=""><img width="64" src=""/></a>|
- |[Adapter/t2iadapter_openpose_sdxlv1](https://huggingface.co/Adapter/t2iadapter_openpose_sdxlv1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href=""><img width="64" src=""/></a>|<a href=""><img width="64" src=""/></a>|
+ |[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
+ |[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
+ |[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
+ |[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
+ |[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
+ |[Adapter/t2iadapter_openpose_sdxlv1](https://huggingface.co/Adapter/t2iadapter_openpose_sdxlv1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|
 
 
 ## Example
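
The table above describes the control-image format each adapter expects. As a rough illustration (not part of this commit), a canny control image of the kind the canny adapter consumes can be prepared with OpenCV; the input file name and Canny thresholds below are placeholder values.

```py
# Illustrative only: build a "white edges on black background" control image
# with OpenCV. The file name and Canny thresholds are placeholders.
import cv2
import numpy as np
from PIL import Image

source = Image.open("input.jpg").convert("RGB")
gray = cv2.cvtColor(np.array(source), cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)                      # monochrome edge map
control_image = Image.fromarray(edges).convert("RGB")  # 3-channel control image
control_image.save("canny_control.png")
```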
@@ -54,40 +56,45 @@ pip install transformers accelerate safetensors
 1. Images are first downloaded into the appropriate *control image* format.
 2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).
 
- Let's have a look at a simple example using the [Canny Adapter](https://huggingface.co/Adapter/t2iadapter_canny_sdxlv1).
+ Let's have a look at a simple example using the [Zoe Depth Adapter](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0).
 
+ - Dependency
 ```py
- from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler
+ from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
 from diffusers.utils import load_image, make_image_grid
- from controlnet_aux.zoe import ZoeDetector
+ from controlnet_aux import ZoeDetector
+ import torch
 
 # load adapter
 adapter = T2IAdapter.from_pretrained(
-   "Adapter/t2i-adapter-depth-zoe-sdxl-1.0", torch_dtype=torch.float16, varient="fp16"
+   "TencentARC/t2i-adapter-depth-zoe-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
 ).to("cuda")
 
 # load euler_a scheduler
 model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
 euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
- vae= AutoencoderKL.from_pretrained(
-   "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
- )
+ vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
 pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
   model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
 ).to("cuda")
 pipe.enable_xformers_memory_efficient_attention()
 
 zoe_depth = ZoeDetector.from_pretrained(
   "valhalla/t2iadapter-aux-models", filename="zoed_nk.pth", model_type="zoedepth_nk"
 ).to("cuda")
+ ```
 
- url = "https://raw.githubusercontent.com/lllyasviel/ControlNet/main/test_imgs/cyber.png"
+ - Condition Image
+ ```py
+ url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_zeo.jpg"
 image = load_image(url)
- image = zoe_depth(image, gamma_corrected=True).resize((896, 1152))
+ image = zoe_depth(image, gamma_corrected=True, detect_resolution=512, image_resolution=1024)
+ ```
+ <a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>
 
- prompt = "a robot, mount fuji in the background, 4k photo, highly detailed"
+ - Generation
+ ```py
+ prompt = "A photo of an orchid, 4k photo, highly detailed"
 negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"
 
 gen_images = pipe(
@@ -95,8 +102,9 @@ gen_images = pipe(
   negative_prompt=negative_prompt,
   image=image,
   num_inference_steps=30,
-   adapter_conditioning_scale=1,
-   cond_tau=1
- ).images
- gen_images[0]
- ```
+   adapter_conditioning_scale=1,
+   guidance_scale=7.5,
+ ).images[0]
+ gen_images.save('out_zoe.png')
+ ```
+ <a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>
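
In the updated example, `adapter_conditioning_scale` controls how strongly the depth map steers the result. Below is a minimal sketch that reuses `pipe`, `prompt`, `negative_prompt`, and `image` from the example; the sweep values are illustrative and not from the model card.

```py
# Illustrative sweep over adapter strength: lower values follow the depth map
# more loosely, 1.0 follows it closely. Reuses objects defined in the example.
for scale in (0.5, 0.8, 1.0):
    result = pipe(
        prompt,
        negative_prompt=negative_prompt,
        image=image,
        num_inference_steps=30,
        adapter_conditioning_scale=scale,
        guidance_scale=7.5,
    ).images[0]
    result.save(f"out_zoe_scale_{scale}.png")
```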
 
 
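
The example imports `make_image_grid` from `diffusers.utils` but never calls it. A small optional sketch for a side-by-side comparison of the control image and the generated image; resizing both to a common resolution is an assumption, not part of the model card.

```py
# Optional: place the depth control image next to the generated image.
# The 1024x1024 resize is an assumed common resolution for the grid.
comparison = make_image_grid(
    [image.resize((1024, 1024)), gen_images.resize((1024, 1024))], rows=1, cols=2
)
comparison.save("zoe_comparison.png")
```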