---
|
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5 |
|
inference: true |
|
license: mit |
|
library_name: diffusers |
|
instance_prompt: a professional studio photograph of an attractive model wearing a teal top with lace detail
|
tags: |
|
- stable-diffusion |
|
- stable-diffusion-diffusers |
|
- text-to-image |
|
- diffusers |
|
- controlnet |
|
- diffusers-training |
|
|
--- |
|
|
|
|
|
|
|
|
|
|
# ControlNet for cloth: Docty/cloth_controlnet
|
|
|
This is a ControlNet for stable-diffusion-v1-5/stable-diffusion-v1-5. Example images generated with it are shown below.
|
|
|
 |
|
 |
|
 |
|
|
|
```python |
|
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler |
|
from diffusers.utils import load_image |
|
import torch |
|
|
|
base_model_path = "stable-diffusion-v1-5/stable-diffusion-v1-5" |
|
controlnet_path = "Docty/cloth_controlnet" |
|
|
|
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16) |
|
pipe = StableDiffusionControlNetPipeline.from_pretrained( |
|
base_model_path, controlnet=controlnet, torch_dtype=torch.float16 |
|
) |
|
|
|
# speed up the diffusion process with a faster scheduler
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# optionally enable xformers memory-efficient attention
# (only if xformers is installed; unnecessary with Torch 2.0):
# pipe.enable_xformers_memory_efficient_attention()

# offload model weights to the CPU between forward passes to reduce VRAM usage
pipe.enable_model_cpu_offload()

control_image = load_image("./cond1.jpg")
prompt = "a professional studio photograph of an attractive model wearing a teal top with lace detail"

# generate an image; fixing the seed makes the result reproducible
generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, image=control_image, generator=generator
).images[0]
image.save("output.png")
|
|
|
``` |
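SD 1.5 ControlNets are typically trained at 512×512, so it can help to bring the conditioning image to that resolution before passing it to the pipeline. A minimal sketch using PIL (the helper name and the 512×512 default are assumptions for illustration, not part of this repository):

```python
from PIL import Image


def prepare_condition(path, size=(512, 512)):
    """Load a conditioning image, force RGB, and resize it to the target resolution."""
    image = Image.open(path).convert("RGB")
    return image.resize(size, Image.LANCZOS)
```

The resulting image can then be passed as `image=` to the pipeline call above in place of the raw `load_image` result.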