<!--Copyright 2022 The HuggingFace Team. All rights reserved.                                                                                                                                                                                 
                                                                                                                                                                                                                                              
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with                                                                                                                           
the License. You may obtain a copy of the License at                                                                                                                                                                                          
                                                                                                                                                                                                                                              
http://www.apache.org/licenses/LICENSE-2.0                                                                                                                                                                                                    
                                                                                                                                                                                                                                              
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on                                                                                                                           
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the                                                                                                                            
specific language governing permissions and limitations under the License.                                                                                                                                                                    
-->                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                              
# The Stable Diffusion Guide 🎨                                                                                                                                                                                                           
<a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_101_guide.ipynb">                                                                                                                                                                                                                                                                                                                                                            
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>                                                                                                                                                 
</a>                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    

## Intro                                                                                                                                                                                                                                                                                                                                                                                                                                                                                

Stable Diffusion is a [Latent Diffusion model](https://github.com/CompVis/latent-diffusion) developed by researchers from the Machine Vision and Learning group at LMU Munich, *a.k.a* CompVis.                                                                                                                                                                                                                                                                                             
Model checkpoints were publicly released at the end of August 2022 by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out [the official blog post](https://stability.ai/blog/stable-diffusion-public-release).                                                                                                                                                                                                  
                                                                                                                                                                                                                                              
Since its public release, the community has done an incredible job of working together to make the Stable Diffusion checkpoints **faster**, **more memory efficient**, and **more performant**.
                                                                                                                                                                                                                                              
🧨 Diffusers offers a simple API to run Stable Diffusion with all of these memory, computing, and quality improvements.
                                                                                                                                                                                                                                              
This notebook walks you through the improvements one-by-one so you can best leverage [`StableDiffusionPipeline`] for **inference**.                                                                                                          
                                                                                                                                                                                                                                              
## Prompt Engineering 🎨                                                                                                                                                                                                                      
                                                                                                                                                                                                                                              
When running *Stable Diffusion* in inference, we usually want to generate a certain type or style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation.

So, to begin with, it is most important to speed up Stable Diffusion as much as possible to generate as many pictures as possible in a given amount of time.

This can be done by both improving the **computational efficiency** (speed) and the **memory efficiency** (GPU RAM).                                                                                                                          

Let's look into computational efficiency first.

Throughout the notebook, we will focus on [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5):                                                                                                           

``` python                                                                                                                                                                                                                                    
model_id = "runwayml/stable-diffusion-v1-5"                                                                                                                                                                                                   
```                                                                                                                                                                                                                                           

Let's load the pipeline.                                                                                                                                                                                                                      

## Speed Optimization                                                                                                                                                                                                                                                                                                                                                                                                                                                                       

``` python                                                                                                                                                                                                                                    
from diffusers import StableDiffusionPipeline                                                                                                                                                                                                 
                                                                                                                                                                                                                                              
pipe = StableDiffusionPipeline.from_pretrained(model_id)                                                                                                                                                                                      
```                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
We aim to generate a beautiful photograph of an *old warrior chief* and will later try to find the best prompt to generate such a photograph. For now, let's keep the prompt simple:
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
``` python                                                                                                                                                                                                                                    
prompt = "portrait photo of a old warrior chief"                                                                                                                                                                                                                                                                                                                                                                                                                                            
```                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
To begin with, we should make sure we run inference on a GPU, so let's move the pipeline to the GPU, just like you would with any PyTorch module.
                                                                                                                                                                                                                                              
``` python                                                                                                                                                                                                                                    
pipe = pipe.to("cuda")                                                                                                                                                                                                                                                                                                                                                                                                                                                                      
```                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                              
To generate an image, you should use the [`~StableDiffusionPipeline.__call__`] method.
                                                                                                                                                                                                                                              
To make sure we can reproduce more or less the same image in every call, let's make use of a generator. See the documentation on reproducibility [here](./conceptual/reproducibility) for more information.

``` python                                                                                                                                                                                                                                    
import torch

generator = torch.Generator("cuda").manual_seed(0)
```                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                              
Now, let's give it a spin.
                                                                                                                                                                                                                                              
``` python                                                                                                                                                                                                                                    
image = pipe(prompt, generator=generator).images[0]                                                                                                                                                                                           
image                                                                                                                                                                                                                                         
```                                                                                                                                                                                                                                           

![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_1.png)                                                                                                                                  
                                                                                                                                                                                                                                              
Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if your allocated GPU is better than a T4).                                                                                                              
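
If you want to verify these timings on your own hardware, a minimal sketch using Python's built-in timer (the exact number will, of course, depend on the GPU you were allocated):

``` python
import time

start = time.perf_counter()
image = pipe(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0]
# The pipeline returns PIL images, so the GPU work has finished by this point
print(f"Inference took {time.perf_counter() - start:.1f} seconds")
```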
                                                                                                                                                                                                                                              
The default run we did above used full float32 precision and ran the default number of inference steps (50). The easiest speed-ups come from switching to float16 (or half) precision and simply running fewer inference steps. Let's load the model now in float16 instead.                                                                                                                                                                                                                            
                                                                                                                                                                                                                                              
``` python                                                                                                                                                                                                                                    
import torch                                                                                                                                                                                                                                  

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)                                                                                                                                                           
pipe = pipe.to("cuda")                                                                                                                                                                                                                        
```                                                                                                                                                                                                                                           

And we can again call the pipeline to generate an image.                                                                                                                                                                                      

``` python                                                                                                                                                                                                                                    
generator = torch.Generator("cuda").manual_seed(0)                                                                                                                                                                                            

image = pipe(prompt, generator=generator).images[0]                                                                                                                                                                                           
image                                                                                                                                                                                                                                         
```                                                                                                                                                                                                                                           
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_2.png)                                                                                                                                  

Cool, this is almost three times as fast for arguably the same image quality.                                                                                                                                                           
                                                                                                                                                                                                                                              
We strongly suggest always running your pipelines in float16, as we have so far very rarely seen any degradation in quality because of it.

Next, let's see if we need to use 50 inference steps or whether we could use significantly fewer. The number of inference steps is associated with the denoising scheduler we use. Choosing a more efficient scheduler could help us decrease the number of steps.

Let's have a look at all the schedulers the stable diffusion pipeline is compatible with.                                                                                                                                                        
                                                                                                                                                                                                                                              
``` python                                                                                                                                                                                                                                    
pipe.scheduler.compatibles                                                                                                                                                                                                                    
```                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                              
```                                                                                                                                                                                                                                           
    [diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler,                                                                                                                                                       
     diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler,                                                                                                                                                                       
     diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler,                                                                                                                                                                     
     diffusers.schedulers.scheduling_pndm.PNDMScheduler,                                                                                                                                                                                      
     diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler,                                                                                                                                                                   
     diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler,                                                                                                                                                
     diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler,                                                                                                                                                         
     diffusers.schedulers.scheduling_ddpm.DDPMScheduler,                                                                                                                                                                                      
     diffusers.schedulers.scheduling_ddim.DDIMScheduler]                                                                                                                                                                                      
```                                                                                                                                                                                                                                           

Cool, that's a lot of schedulers.                                                                                                                                                                                                                                                                                                                                                                                                                                                           

🧨 Diffusers is constantly adding a bunch of novel schedulers/samplers that can be used with Stable Diffusion. For more information, we recommend taking a look at the official documentation [here](https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview).                                                                                                                                                                                                              
                                                                                                                                                                                                                                              
Alright, right now Stable Diffusion is using the `PNDMScheduler` which usually requires around 50 inference steps. However, other schedulers such as `DPMSolverMultistepScheduler` or `DPMSolverSinglestepScheduler` seem to get away with just 20 to 25 inference steps. Let's try them out.                                                                                                                                                                                               
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
You can set a new scheduler by making use of the [from_config](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) function.                                                                                                                                                                                                                                                                                                                 
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
``` python                                                                                                                                                                                                                                    
from diffusers import DPMSolverMultistepScheduler                                                                                                                                                                                             
                                                                                                                                                                                                                                              
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)                                                                                                                                                               
```                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                              
Now, let's try to reduce the number of inference steps to just 20.                                                                                                                                                                            

``` python                                                                                                                                                                                                                                    
generator = torch.Generator("cuda").manual_seed(0)                                                                                                                                                                                            
                                                                                                                                                                                                                                              
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]                                                                                                                                                                   
image                                                                                                                                                                                                                                         
```                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                              
![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_3.png)                                                                                                                                                                                                                                                                                                                                                                                
                                                                                                                                                                                                                                              
The image now does look a little different, but it's arguably still of equally high quality, and we cut the inference time down to just 4 seconds 😍.
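
If you'd like to compare, the `DPMSolverSinglestepScheduler` mentioned above can be swapped in exactly the same way; a quick sketch (we switch back to the multistep scheduler afterwards so the rest of this guide is unaffected):

``` python
from diffusers import DPMSolverSinglestepScheduler

# Try the singlestep DPM solver with the same low step count
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]

# Restore the multistep scheduler used in the rest of this guide
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```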

## Memory Optimization                                                                                                                                                                                                                        

Using less memory during generation indirectly implies more speed, since we often try to maximize how many images we can generate per second. Usually, the more images per inference run, the more images per second, too.

The easiest way to see how many images we can generate at once is to simply try it out and see when we get an *out-of-memory (OOM)* error.

We can run batched inference by simply passing a list of prompts and generators. Let's define a quick function that generates a batch for us.                                                                                                

``` python                                                                                                                                                                                                                                    
def get_inputs(batch_size=1):
    # One generator per image, seeded with the batch index, so every image is reproducible
    generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)]
    prompts = batch_size * [prompt]
    num_inference_steps = 20

    return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps}
```                                                                                                                                                                                                                                           
This function returns a list of prompts and a list of generators, so we can reuse the generator that produced a result we like.
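
For example, if the third image of a batch turned out to be your favorite, you could regenerate just that one by reusing its seed (the generators above are seeded with the batch index, so the third image corresponds to seed `2`):

``` python
generator = torch.Generator("cuda").manual_seed(2)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
```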

We also need a method that allows us to easily display a batch of images.                                                                                                                                                                     

``` python                                                                                                                                                                                                                                    
from PIL import Image                                                                                                                                                                                                                         

def image_grid(imgs, rows=2, cols=2):
    # Assumes all images have the same size
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))

    for i, img in enumerate(imgs):
        # Place image i at column i % cols, row i // cols
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid
```                                                                                                                                                                                                                                           

Cool, let's see how much memory we can use starting with `batch_size=4`.                                                                                                                                                                      

``` python                                                                                                                                                                                                                                    
images = pipe(**get_inputs(batch_size=4)).images                                                                                                                                                                                              
image_grid(images)                                                                                                                                                                                                                            
```                                                                                                                                                                                                                                           

![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_4.png)                                                                                                                                  

Going over a batch_size of 4 will error out in this notebook (assuming we are running it on a T4 GPU). Also, we can see that we generate images only slightly faster (3.75s/image compared to the previous 4s/image).
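
If you prefer to probe for the largest working batch size programmatically rather than by hand, a rough sketch (it assumes the `get_inputs` helper above; CUDA out-of-memory errors surface as a `RuntimeError` in PyTorch):

``` python
def find_max_batch_size(limit=16):
    last_ok = 0
    batch_size = 1
    while batch_size <= limit:
        try:
            pipe(**get_inputs(batch_size=batch_size))
            last_ok = batch_size
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()  # release the failed allocation
            break
        batch_size *= 2
    return last_ok
```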

However, the community has found some nice tricks to reduce memory usage further. After Stable Diffusion was released, the community found improvements within days and shared them freely over GitHub - open-source at its finest! I believe the original idea came from [this](https://github.com/basujindal/stable-diffusion/pull/117) GitHub thread.

By far the most memory is taken up by the cross-attention layers. Instead of running this operation in a single batch, one can run it sequentially to save a significant amount of memory.

It can easily be enabled by calling `enable_attention_slicing` as is documented [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.enable_attention_slicing).                                                                                                                                                                                                                                                   

``` python                                                                                                                                                                                                                                    
pipe.enable_attention_slicing()                                                                                                                                                                                                               
```                                                                                                                                                                                                                                           

Great, now that attention slicing is enabled, let's try to double the batch size again, going for `batch_size=8`.                                                                                                                              

``` python                                                                                                                                                                                                                                    
images = pipe(**get_inputs(batch_size=8)).images                                                                                                                                                                                              
image_grid(images, rows=2, cols=4)                                                                                                                                                                                                            
```                                                                                                                                                                                                                                           

![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_5.png)                                                                                                                                  

Nice, it works. However, the speed gain is again not very big (though it might be much more significant on other GPUs).

We're at roughly 3.5 seconds per image 🔥 which is probably the fastest we can be with a simple T4 without sacrificing quality.                                                                                                               
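
If you're curious how much memory such a run actually needs, PyTorch can report the peak allocation; a quick sketch:

``` python
torch.cuda.reset_peak_memory_stats()
images = pipe(**get_inputs(batch_size=8)).images
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```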

Next, let's look into how to improve the quality!                                                                                                                                                                                             

## Quality Improvements                                                                                                                                                                                                                       

Now that our image generation pipeline is blazing fast, let's try to get maximum image quality.                                                                                                                                               

First of all, image quality is extremely subjective, so it's difficult to make general claims here.                                                                                                                                           

The most obvious step to take to improve quality is to use *better checkpoints*. Since the release of Stable Diffusion, many improved versions have been released, which are summarized here:                                                                                                                                                                                                                                                                                                

-   *Official Release - 22 Aug 2022*: [Stable-Diffusion 1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)                                                                                                                            
-   *20 October 2022*: [Stable-Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)                                                                                                                                          
-   *24 Nov 2022*: [Stable-Diffusion 2.0](https://huggingface.co/stabilityai/stable-diffusion-2)
-   *7 Dec 2022*: [Stable-Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)                                                                                                                                             

Newer versions don't necessarily mean better image quality with the same parameters. People mentioned that *2.0* is slightly worse than *1.5* for certain prompts, but given the right prompt engineering *2.0* and *2.1* seem to be better.                                                                                                                                                                                                                                                 

Overall, we strongly recommend just trying the models out and reading up on advice online (e.g. it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality; see for example [this nice blog post](https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/)).
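
As a quick illustration of the mechanism, negative prompts are passed to the pipeline via its `negative_prompt` argument (the negative prompt text below is just an example, not a recommendation):

``` python
generator = torch.Generator("cuda").manual_seed(0)

image = pipe(
    prompt,
    negative_prompt="blurry, low quality, deformed",
    generator=generator,
    num_inference_steps=20,
).images[0]
image
```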

Additionally, the community has started fine-tuning many of the above versions on certain styles with some of them having an extremely high quality and gaining a lot of traction.                                                                                                                                                                                                                                                                                                          

We recommend having a look at all [diffusers checkpoints sorted by downloads](https://huggingface.co/models?library=diffusers) and trying out the different checkpoints.

For the following, we will stick to v1.5 for simplicity.                                                                                                                                                                                      

Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at [this blog post](https://huggingface.co/blog/stable_diffusion).                                                                                                                                                                                                                        

Let's load [stabilityai's improved autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse).

``` python
import torch
from diffusers import AutoencoderKL

# Load the fine-tuned VAE in half precision and move it to the GPU
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")
```

Now we can assign it to the pipeline's `vae` attribute to use it.

``` python                                                                                                                                                                                                                                    
pipe.vae = vae                                                                                                                                                                                                                                
```                                                                                                                                                                                                                                           
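Alternatively, if you are loading the pipeline from scratch anyway, the VAE can be passed directly to `from_pretrained`, which overrides the checkpoint's default autoencoder. A minimal sketch:

``` python
import torch
from diffusers import DiffusionPipeline

# Passing `vae` at load time replaces the checkpoint's default autoencoder
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")
```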

Let's run the same prompt as before to compare quality.                                                                                                                                                                                       

``` python                                                                                                                                                                                                                                    
images = pipe(**get_inputs(batch_size=8)).images                                                                                                                                                                                              
image_grid(images, rows=2, cols=4)                                                                                                                                                                                                            
```                                                                                                                                                                                                                                           

![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_6.png)                                                                                                                                  

Seems like the difference is only very minor, but the new generations are arguably a bit *sharper*.                                                                                                                                           

Cool! Finally, let's take a closer look at prompt engineering.

Our goal was to generate a photo of an old warrior chief. Let's now try to bring a bit more color into the photos and make them look more impressive.

Originally our prompt was "*portrait photo of an old warrior chief*".                                                                                                                                                                          

To improve the prompt, it often helps to add cues that are typically attached to high-quality photos online, as well as more detail.
Essentially, when doing prompt engineering, one has to think:

-   How were photos like the one I want probably described and stored on the internet?
-   What additional detail can I give that steers the model towards the style I want?

Cool, let's add more details.                                                                                                                                                                                                                 

``` python                                                                                                                                                                                                                                    
prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes"                                                                                                                                                   
```                                                                                                                                                                                                                                           

and let's also add some cues that usually help to generate higher quality images.                                                                                                                                                             

``` python
# Flags like `--beta --ar 2:3` have no special meaning to Stable Diffusion; the model
# treats them as plain text, so they act purely as additional quality cues
prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3  --beta --upbeta"
prompt
```

Cool, let's now try this prompt.                                                                                                                                                                                                              

``` python                                                                                                                                                                                                                                    
images = pipe(**get_inputs(batch_size=8)).images                                                                                                                                                                                              
image_grid(images, rows=2, cols=4)                                                                                                                                                                                                            
```                                                                                                                                                                                                                                           

![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_7.png)                                                                                                                                  

Pretty impressive! We got some very high-quality image generations there. The 2nd image is my personal favorite, so I'll re-use its seed and see whether I can tweak the prompt slightly by using "oldest warrior", "old", "" (no adjective), and "young" instead of "old".

``` python                                                                                                                                                                                                                                    
prompts = [                                                                                                                                                                                                                                   
    "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3  --beta --upbeta",                                                                                                                                                                                                                                                                   
    "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3  --beta --upbeta",                                                                                                                                                                                                                                                                        
    "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3  --beta --upbeta",                                                                                                                                                                                                                                                                            
    "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3  --beta --upbeta",                                                                                                                                                                                                                                                                      
]                                                                                                                                                                                                                                             

generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))]  # 1 because we want the 2nd image                                                                                                                          

images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images                                                                                                                                                             
image_grid(images)                                                                                                                                                                                                                            
```                                                                                                                                                                                                                                           

![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_8.png)                                                                                                                                  

The first picture looks nice! The gaze changed slightly, which works well. This wraps up our 101 guide on how to use Stable Diffusion 🤗.

For more information on optimization or other guides, I recommend taking a look at the following:                                                                                                                                            

-   [Blog post about Stable Diffusion](https://huggingface.co/blog/stable_diffusion): In-detail blog post explaining Stable Diffusion.
-   [FlashAttention](https://huggingface.co/docs/diffusers/optimization/xformers): xFormers flash attention can optimize your model even further with more speed and memory improvements.
-   [Dreambooth](https://huggingface.co/docs/diffusers/training/dreambooth): Quickly customize the model by fine-tuning it.
-   [General info on Stable Diffusion](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/overview): Info on other tasks that are powered by Stable Diffusion.