Update README.md
README.md CHANGED
@@ -11,27 +11,26 @@ It can be used where download bandwidth, memory or disk space are relatively low,
 To use in a diffusers script you currently (15/04/2024) need to use a source distribution of Diffusers
 and an extra 'patch' from the PixArt-Alpha team's Sigma GitHub repo
 
-a Diffusers script looks like this.
+**NOTE: This model has been converted but not successfully tested; during memory-efficient attention
+it generates a 16 GB buffer. This appears to be an MPS limitation, but it may also mean it requires more than 16 GB even
+with the 16-bit model.**
+
+The diffusers script is below; those with more memory, or on non-MPS GPUs, may have more luck running it!
+
+A Diffusers script looks like this; **currently (25th April 2024) you will need to install diffusers from source**.
 
 ```py
 import random
 import sys
 import torch
-from diffusers
-from scripts.diffusers_patches import pixart_sigma_init_patched_inputs, PixArtSigmaPipeline
+from diffusers import PixArtSigmaPipeline
 
-assert getattr(Transformer2DModel, '_init_patched_inputs', False), "Need to Upgrade diffusers: pip install git+https://github.com/huggingface/diffusers"
-setattr(Transformer2DModel, '_init_patched_inputs', pixart_sigma_init_patched_inputs)
 device = 'mps'
 weight_dtype = torch.bfloat16
 
 pipe = PixArtSigmaPipeline.from_pretrained(
-    "
+    "Vargol/PixArt-Sigma_2k_16bit",
     torch_dtype=weight_dtype,
     variant="fp16",
     use_safetensors=True,