---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
inference: true
license: mit
library_name: diffusers
instance_prompt: a professional studio photograph of an attractive model wearing a teal top with lace detail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
# ControlNet for cloth - Docty/cloth_controlnet

These are ControlNet weights for stable-diffusion-v1-5/stable-diffusion-v1-5. You can find example usage below.



```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
import torch
base_model_path = "stable-diffusion-v1-5/stable-diffusion-v1-5"
controlnet_path = "Docty/cloth_controlnet"
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model_path, controlnet=controlnet, torch_dtype=torch.float16
)
# speed up diffusion process with faster scheduler and memory optimization
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# remove the following line if xformers is not installed or when using Torch 2.0
#pipe.enable_xformers_memory_efficient_attention()
# memory optimization.
pipe.enable_model_cpu_offload()
control_image = load_image("./cond1.jpg")
prompt = "a professional studio photograph of an attractive model wearing a teal top with lace detail"
# generate image
#generator = torch.manual_seed(0)
image = pipe(
    prompt, num_inference_steps=20, image=control_image
).images[0]
image
```
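
For reproducible results, you can pass an explicit `torch.Generator` to the pipeline (the commented-out `torch.manual_seed(0)` line above hints at this) and save the output to disk instead of relying on notebook display. A minimal sketch, assuming the pipeline, `prompt`, and `control_image` from the snippet above are already set up; the output filename is just an example:

```python
import torch

# fix the random seed so repeated runs produce the same image
generator = torch.Generator(device="cpu").manual_seed(0)

image = pipe(
    prompt,
    num_inference_steps=20,
    image=control_image,
    generator=generator,
).images[0]

# write the generated image to disk
image.save("./output.png")
```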