Upload folder using huggingface_hub
Files changed:
- main/README.md (+222 / -117)
- main/checkpoint_merger.py (+5 / -1)
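A commit like this one is typically produced with `huggingface_hub`'s `upload_folder`; below is a minimal sketch of that call (the repo id, local path, and commit message are placeholders, not taken from this commit):

```python
from huggingface_hub import HfApi

api = HfApi()
# Hypothetical target repo and local folder, shown only to illustrate the API named in the commit title.
api.upload_folder(
    folder_path="./community",
    path_in_repo="main",
    repo_id="<namespace>/<repo>",
    commit_message="Upload folder using huggingface_hub",
)
```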
main/README.md
CHANGED
@@ -27,25 +27,25 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
- | GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) |
- | Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#text-based-inpainting-stable-diffusion) |
- | MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) |
- | CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) |
- | CLIP Guided Images Mixing Stable Diffusion Pipeline | Сombine images using usual diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) |
@@ -81,6 +81,7 @@ PIXART-α Controlnet pipeline | Implementation of the controlnet model for pixar
@@ -1106,38 +1107,100 @@ GlueGen is a minimal adapter that allows alignment between any encoder (Text Enc
- text_encoder = AutoModel.from_pretrained(lm_model_id)
- tokenizer = AutoTokenizer.from_pretrained(lm_model_id, model_max_length=token_max_length, use_fast=False)
@@ -1188,28 +1251,49 @@ Currently uses the CLIPSeg model for mask generation, then calls the standard St
- model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
- )
- pipe = pipe.to("cuda")
- image = Image.open(requests.get(url, stream=True).raw)
@@ -1385,8 +1469,10 @@ There are 3 parameters for the method-
@@ -1394,9 +1480,11 @@ pipe = DiffusionPipeline.from_pretrained(
@@ -1657,37 +1745,51 @@ from diffusers import DiffusionPipeline
-     custom_pipeline="/home/njindal/diffusers/examples/community/clip_guided_stable_diffusion.py",
@@ -2264,81 +2366,15 @@ CLIP guided stable diffusion images mixing pipeline allows to combine two images
@@ -2401,10 +2437,79 @@ pipe_images = mixing_pipeline(
| Seed Resizing Stable Diffusion | Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/seed_resizing.ipynb) | [Mark Rich](https://github.com/MarkRich) |
| Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image | [Imagic Stable Diffusion](#imagic-stable-diffusion) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/imagic_stable_diffusion.ipynb) | [Mark Rich](https://github.com/MarkRich) |
| Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/multilingual_stable_diffusion.ipynb) | [Juan Carlos Piñeros](https://github.com/juancopi81) |
| GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/gluegen_stable_diffusion.ipynb) | [Phạm Hồng Vinh](https://github.com/rootonchair) |
| Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
| Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#text-based-inpainting-stable-diffusion) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/text_based_inpainting_stable_dffusion.ipynb) | [Dhruv Karan](https://github.com/unography) |
| Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
| K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_diffusion_comparison.ipynb) | [Suvaditya Mukherjee](https://github.com/suvadityamuk) |
| MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/magic_mix.ipynb) | [Partho Das](https://github.com/daspartho) |
| Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_unclip.ipynb) | [Ray Wang](https://wrong.wang) |
| UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/unclip_text_interpolation.ipynb)| [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/unclip_image_interpolation.ipynb)| [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/ddim_noise_comparative_analysis.ipynb)| [Aengus (Duc-Anh)](https://github.com/aengusng8) |
| CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/clip_guided_img2img_stable_diffusion.ipynb) | [Nipun Jindal](https://github.com/nipunjindal/) |
| TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/edict_image_pipeline.ipynb) | [Joqsan Azocar](https://github.com/Joqsan) |
| Stable Diffusion RePaint | Stable Diffusion pipeline using [RePaint](https://arxiv.org/abs/2201.09865) for inpainting. | [Stable Diffusion RePaint](#stable-diffusion-repaint )|[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_diffusion_repaint.ipynb)| [Markus Pobitzer](https://github.com/Markus-Pobitzer) |
| TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| Stable Diffusion IPEX Pipeline | Accelerate Stable Diffusion inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion on IPEX](#stable-diffusion-on-ipex) | - | [Yingjie Han](https://github.com/yingjie-han/) |
| CLIP Guided Images Mixing Stable Diffusion Pipeline | Сombine images using usual diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/clip_guided_images_mixing_with_stable_diffusion.ipynb) | [Karachev Denis](https://github.com/TheDenk) |
| TensorRT Stable Diffusion Inpainting Pipeline | Accelerates the Stable Diffusion Inpainting Pipeline using TensorRT | [TensorRT Stable Diffusion Inpainting Pipeline](#tensorrt-inpainting-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| IADB Pipeline | Implementation of [Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) | [IADB Pipeline](#iadb-pipeline) | - | [Thomas Chambon](https://github.com/tchambon)
| Zero1to3 Pipeline | Implementation of [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) | [Zero1to3 Pipeline](#zero1to3-pipeline) | - | [Xin Kong](https://github.com/kxhit) |
| HunyuanDiT Differential Diffusion Pipeline | Applies [Differential Diffusion](https://github.com/exx8/differential-diffusion) to [HunyuanDiT](https://github.com/huggingface/diffusers/pull/8240). | [HunyuanDiT with Differential Diffusion](#hunyuandit-with-differential-diffusion) | [](https://colab.research.google.com/drive/1v44a5fpzyr4Ffr4v2XBQ7BajzG874N4P?usp=sharing) | [Monjoy Choudhury](https://github.com/MnCSSJ4x) |
| [🪆Matryoshka Diffusion Models](https://huggingface.co/papers/2310.15111) | A diffusion process that denoises inputs at multiple resolutions jointly and uses a NestedUNet architecture where features and parameters for small scale inputs are nested within those of the large scales. See [original codebase](https://github.com/apple/ml-mdm). | [🪆Matryoshka Diffusion Models](#matryoshka-diffusion-models) | [](https://huggingface.co/spaces/pcuenq/mdm) [](https://colab.research.google.com/gist/tolgacangoz/1f54875fc7aeaabcf284ebde64820966/matryoshka_hf.ipynb) | [M. Tolga Cangöz](https://github.com/tolgacangoz) |
| Stable Diffusion XL Attentive Eraser Pipeline |[[AAAI2025 Oral] Attentive Eraser](https://github.com/Anonym0u3/AttentiveEraser) is a novel tuning-free method that enhances object removal capabilities in pre-trained diffusion models.|[Stable Diffusion XL Attentive Eraser Pipeline](#stable-diffusion-xl-attentive-eraser-pipeline)|-|[Wenhao Sun](https://github.com/Anonym0u3) and [Benlei Cui](https://github.com/Benny079)|
|
85 |
|
86 |
To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
|
87 |
|
|
|
Make sure you downloaded `gluenet_French_clip_overnorm_over3_noln.ckpt` for French (there are also pre-trained weights for Chinese, Italian, Japanese, and Spanish, or train your own) at [GlueGen's official repo](https://github.com/salesforce/GlueGen/tree/main).

```python
import os
import gc
import urllib.request

import torch
from transformers import XLMRobertaTokenizer, XLMRobertaForMaskedLM, CLIPTokenizer, CLIPTextModel
from diffusers import DiffusionPipeline

# Download checkpoints
CHECKPOINTS = [
    "https://storage.googleapis.com/sfr-gluegen-data-research/checkpoints_all/gluenet_checkpoint/gluenet_Chinese_clip_overnorm_over3_noln.ckpt",
    "https://storage.googleapis.com/sfr-gluegen-data-research/checkpoints_all/gluenet_checkpoint/gluenet_French_clip_overnorm_over3_noln.ckpt",
    "https://storage.googleapis.com/sfr-gluegen-data-research/checkpoints_all/gluenet_checkpoint/gluenet_Italian_clip_overnorm_over3_noln.ckpt",
    "https://storage.googleapis.com/sfr-gluegen-data-research/checkpoints_all/gluenet_checkpoint/gluenet_Japanese_clip_overnorm_over3_noln.ckpt",
    "https://storage.googleapis.com/sfr-gluegen-data-research/checkpoints_all/gluenet_checkpoint/gluenet_Spanish_clip_overnorm_over3_noln.ckpt",
    "https://storage.googleapis.com/sfr-gluegen-data-research/checkpoints_all/gluenet_checkpoint/gluenet_sound2img_audioclip_us8k.ckpt"
]

LANGUAGE_PROMPTS = {
    "French": "une voiture sur la plage",
    # "Chinese": "海滩上的一辆车",
    # "Italian": "una macchina sulla spiaggia",
    # "Japanese": "浜辺の車",
    # "Spanish": "un coche en la playa"
}

def download_checkpoints(checkpoint_dir):
    os.makedirs(checkpoint_dir, exist_ok=True)
    for url in CHECKPOINTS:
        filename = os.path.join(checkpoint_dir, os.path.basename(url))
        if not os.path.exists(filename):
            print(f"Downloading {filename}...")
            urllib.request.urlretrieve(url, filename)
            print(f"Downloaded {filename}")
        else:
            print(f"Checkpoint {filename} already exists, skipping download.")
    return checkpoint_dir

def load_checkpoint(pipeline, checkpoint_path, device):
    state_dict = torch.load(checkpoint_path, map_location=device)
    state_dict = state_dict.get("state_dict", state_dict)
    missing_keys, unexpected_keys = pipeline.unet.load_state_dict(state_dict, strict=False)
    return pipeline

def generate_image(pipeline, prompt, device, output_path):
    with torch.inference_mode():
        image = pipeline(
            prompt,
            generator=torch.Generator(device=device).manual_seed(42),
            num_inference_steps=50
        ).images[0]
    image.save(output_path)
    print(f"Image saved to {output_path}")

checkpoint_dir = download_checkpoints("./checkpoints_all/gluenet_checkpoint")
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base", use_fast=False)
model = XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base").to(device)
inputs = tokenizer("Ceci est une phrase incomplète avec un [MASK].", return_tensors="pt").to(device)
with torch.inference_mode():
    _ = model(**inputs)

clip_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
clip_text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").to(device)

# Initialize pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    text_encoder=clip_text_encoder,
    tokenizer=clip_tokenizer,
    custom_pipeline="gluegen",
    safety_checker=None
).to(device)

os.makedirs("outputs", exist_ok=True)

# Generate images
for language, prompt in LANGUAGE_PROMPTS.items():
    checkpoint_file = f"gluenet_{language}_clip_overnorm_over3_noln.ckpt"
    checkpoint_path = os.path.join(checkpoint_dir, checkpoint_file)
    try:
        pipeline = load_checkpoint(pipeline, checkpoint_path, device)
        output_path = f"outputs/gluegen_output_{language.lower()}.png"
        generate_image(pipeline, prompt, device, output_path)
    except Exception as e:
        print(f"Error processing {language} model: {e}")
        continue

if torch.cuda.is_available():
    torch.cuda.empty_cache()
gc.collect()
```

Which will produce:
```python
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from diffusers import DiffusionPipeline
from PIL import Image
import requests
import torch

# Load CLIPSeg model and processor
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined").to("cuda")

# Load Stable Diffusion Inpainting Pipeline with custom pipeline
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    custom_pipeline="text_inpainting",
    segmentation_model=model,
    segmentation_processor=processor
).to("cuda")

# Load input image
url = "https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)

# Step 1: Resize input image for CLIPSeg (224x224)
segmentation_input = image.resize((224, 224))

# Step 2: Generate segmentation mask
text = "a glass"  # Object to mask
inputs = processor(text=text, images=segmentation_input, return_tensors="pt").to("cuda")

with torch.no_grad():
    mask = model(**inputs).logits.sigmoid()  # Get segmentation mask

# Resize mask back to 512x512 for SD inpainting
mask = torch.nn.functional.interpolate(mask.unsqueeze(0), size=(512, 512), mode="bilinear").squeeze(0)

# Step 3: Resize input image for Stable Diffusion
image = image.resize((512, 512))

# Step 4: Run inpainting with Stable Diffusion
prompt = "a cup"  # The masked-out region will be replaced with this
result = pipe(image=image, mask=mask, prompt=prompt, text=text).images[0]

# Save output
result.save("inpainting_output.png")
print("Inpainting completed. Image saved as 'inpainting_output.png'.")
```

### Bit Diffusion
Here is an example usage-

```python
import requests
from diffusers import DiffusionPipeline, DDIMScheduler
from PIL import Image
from io import BytesIO

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    # ...
    scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
).to('cuda')

url = "https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg"
response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")  # Convert to RGB to avoid issues
mix_img = pipe(
    image,
    prompt='bed',
    kmin=0.3,
    kmax=0.5,
from PIL import Image
from transformers import CLIPImageProcessor, CLIPModel

# Load CLIP model and feature extractor
feature_extractor = CLIPImageProcessor.from_pretrained(
    "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
)
clip_model = CLIPModel.from_pretrained(
    "laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)

# Load guided pipeline
guided_pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion_img2img",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
)
guided_pipeline.enable_attention_slicing()
guided_pipeline = guided_pipeline.to("cuda")

# Define prompt and fetch image
prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
edit_image = Image.open(BytesIO(response.content)).convert("RGB")

# Run the pipeline
image = guided_pipeline(
    prompt=prompt,
    height=512,               # Height of the output image
    width=512,                # Width of the output image
    image=edit_image,         # Input image to guide the diffusion
    strength=0.75,            # How much to transform the input image
    num_inference_steps=30,   # Number of diffusion steps
    guidance_scale=7.5,       # Scale of the classifier-free guidance
    clip_guidance_scale=100,  # Scale of the CLIP guidance
    num_images_per_prompt=1,  # Generate one image per prompt
    eta=0.0,                  # Noise scheduling parameter
    num_cutouts=4,            # Number of cutouts for CLIP guidance
    use_cutouts=False,        # Whether to use cutouts
    output_type="pil",        # Output as PIL image
).images[0]

# Display the generated image
image.show()
```

Init Image
This approach is using (optional) CoCa model to avoid writing image description.
[More code examples](https://github.com/TheDenk/images_mixing)

### Example Images Mixing (with CoCa)

```python
import PIL
import torch
import requests
import open_clip
from open_clip import SimpleTokenizer
from io import BytesIO
from diffusers import DiffusionPipeline
from transformers import CLIPImageProcessor, CLIPModel

# ... (unchanged middle of the example, not shown in this diff)
pipe_images = mixing_pipeline(
    # ...
    clip_guidance_scale=100,
    generator=generator,
).images

output_path = "mixed_output.jpg"
pipe_images[0].save(output_path)
print(f"Image saved successfully at {output_path}")
```

### Stable Diffusion XL Long Weighted Prompt Pipeline

This SDXL pipeline supports unlimited-length prompts and negative prompts, compatible with the A1111 prompt-weighting style.

You can provide both `prompt` and `prompt_2`. If only one prompt is provided, `prompt_2` will be a copy of the provided `prompt`. Here is a sample code to use this pipeline.

```python
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    custom_pipeline="lpw_stable_diffusion_xl",
)

prompt = "photo of a cute (white) cat running on the grass" * 20
prompt2 = "chasing (birds:1.5)" * 20
prompt = f"{prompt},{prompt2}"
neg_prompt = "blur, low quality, carton, animate"

pipe.to("cuda")

# text2img
t2i_images = pipe(
    prompt=prompt,
    negative_prompt=neg_prompt,
).images  # alternatively, you can call the .text2img() function

# img2img
input_image = load_image("/path/to/local/image.png")  # or URL to your input image
i2i_images = pipe.img2img(
    prompt=prompt,
    negative_prompt=neg_prompt,
    image=input_image,
    strength=0.8,  # higher strength will result in more variation compared to original image
).images

# inpaint
input_mask = load_image("/path/to/local/mask.png")  # or URL to your input inpainting mask
inpaint_images = pipe.inpaint(
    prompt="photo of a cute (black) cat running on the grass" * 20,
    negative_prompt=neg_prompt,
    image=input_image,
    mask=input_mask,
    strength=0.6,  # higher strength will result in more variation compared to original image
).images

pipe.to("cpu")
torch.cuda.empty_cache()

from IPython.display import display  # assuming you are using this code in a notebook
display(t2i_images[0])
display(i2i_images[0])
display(inpaint_images[0])
```

In the above code, `prompt2` is appended to `prompt`, which makes the combined prompt longer than 77 tokens; "birds" still show up in the result.

For more results, check out [PR #6114](https://github.com/huggingface/diffusers/pull/6114).

### Stable Diffusion Mixture Tiling Pipeline SD 1.5

This pipeline uses the Mixture of Diffusers technique. Refer to the [Mixture](https://arxiv.org/abs/2302.02412) paper for more details.
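A minimal sketch of how such a tiling pipeline is typically invoked follows; the parameter names (`tile_height`, `tile_width`, and the overlap arguments) and the output format are assumptions about the `mixture_tiling` community file rather than confirmed API:

```python
from diffusers import DiffusionPipeline, LMSDiscreteScheduler

# Assumed usage: a grid of prompts, one per tile, stitched into a single wide image.
scheduler = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
)
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    scheduler=scheduler,
    custom_pipeline="mixture_tiling",
).to("cuda")

output = pipe(
    prompt=[[  # one row of three tiles
        "a charming house in the countryside, highly detailed",
        "a dirt road in the countryside crossing pastures, highly detailed",
        "an old and rusty giant robot lying on a dirt road, highly detailed",
    ]],
    tile_height=640,      # assumed tile-size parameters
    tile_width=640,
    tile_row_overlap=0,
    tile_col_overlap=256,
    guidance_scale=8,
    num_inference_steps=50,
)
# The tiled image is in the pipeline output (e.g. output["images"][0], depending on the return type).
```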
main/checkpoint_merger.py
CHANGED
@@ -92,9 +92,13 @@ class CheckpointMergerPipeline(DiffusionPipeline):
-         torch_dtype = kwargs.pop("torch_dtype",
        token = kwargs.pop("token", None)
        variant = kwargs.pop("variant", None)
        revision = kwargs.pop("revision", None)
        torch_dtype = kwargs.pop("torch_dtype", torch.float32)
        device_map = kwargs.pop("device_map", None)

        if not isinstance(torch_dtype, torch.dtype):
            print(f"Passed `torch_dtype` {torch_dtype} is not a `torch.dtype`. Defaulting to `torch.float32`.")
            torch_dtype = torch.float32

        alpha = kwargs.pop("alpha", 0.5)
        interp = kwargs.pop("interp", None)
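For context, these kwargs are consumed by the community checkpoint merger's `merge()` call; a minimal sketch of the typical invocation (the model ids and interpolation settings below are illustrative, not part of this change):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="checkpoint_merger",
)
# torch_dtype is one of the kwargs popped and validated above.
merged_pipe = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "stable-diffusion-v1-5/stable-diffusion-v1-5"],
    interp="sigmoid",
    alpha=0.4,
    torch_dtype=torch.float16,
)
```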