---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a picture of <s1><s2> minifigure
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - merve/lego-lora-trained-xl

These are LoRA adaptation weights for `stabilityai/stable-diffusion-xl-base-1.0`. The weights were trained on the prompt `a picture of <s1><s2> minifigure` using [DreamBooth](https://dreambooth.github.io/). Some example images are shown below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

You can load and run the model with this code 👇
```python
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline
import torch

# Read the base model ID from this repo's model card metadata
lora_model_id = "merve/lego-lora-trained-xl"
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]

# Load the base pipeline in fp16 and apply the LoRA weights
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_model_id)

image = pipe("a picture of <s1><s2> minifigure as lana del rey, high quality", num_inference_steps=35).images[0]
```

LoRA training for the text encoder was not enabled.

Special VAE used for training: `madebyollin/sdxl-vae-fp16-fix`.