End of training
Browse files
- README.md +9 -5
- pytorch_lora_weights.safetensors +3 -0
README.md
CHANGED
@@ -15,19 +15,15 @@ datasets:
# LoRA DreamBooth - herve76/bb

-## MODEL IS CURRENTLY TRAINING ...
-Last checkpoint saved: checkpoint-500

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained on the concept prompt:
```
bbhf
```
Use this keyword to trigger your custom model in your prompts.

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Usage
@@ -36,24 +32,32 @@ Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```

In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```

To use the base model together with these LoRA weights, you can run:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae, torch_dtype=torch.float16, variant="fp16",
    use_safetensors=True
)

pipe.to("cuda")

# This is where you load your trained weights
pipe.load_lora_weights('herve76/bb')

prompt = "A majestic bbhf jumping from a big stone at night"

image = pipe(prompt=prompt, num_inference_steps=50).images[0]
```
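A minimal follow-up sketch, not taken from the original card, assuming the `pipe` object from the snippet above: it seeds a `torch.Generator` so runs are reproducible and saves the resulting image. The seed value and output file name are arbitrary examples.

```python
# Illustrative only: reuses `pipe` from the snippet above.
# A seeded generator makes the same prompt reproduce the same image.
generator = torch.Generator(device="cuda").manual_seed(42)  # seed chosen arbitrarily

image = pipe(
    prompt="A majestic bbhf jumping from a big stone at night",
    num_inference_steps=50,
    generator=generator,
).images[0]

# The pipeline returns PIL images, so the result can be written straight to disk.
image.save("bbhf_sample.png")  # hypothetical output path
```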
pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e5749fa827cd375227d8b05391b2a6ac765bbd54a4fa83b444ec8869ccec999
+size 23401064
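This Git LFS pointer resolves to the actual LoRA weights when the repository is cloned or when `pipe.load_lora_weights('herve76/bb')` pulls the file from the Hub. As a hedged sketch, assuming the standard `huggingface_hub` client, the file can also be fetched directly:

```python
# Sketch, assuming huggingface_hub is installed (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and downloads the ~23 MB safetensors file into the local cache.
lora_path = hf_hub_download(
    repo_id="herve76/bb",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)  # local cache path to the downloaded weights
```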