---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
  - flux
  - flux-diffusers
  - text-to-image
  - diffusers
  - simpletuner
  - safe-for-work
  - lora
  - template:sd-lora
  - standard
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'a women laughing with short hair'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
---

# deephouse-st-2911

This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).


The main validation prompt used during training was:
```
a women laughing with short hair
```


## Validation settings
- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1024x1024`
- Skip-layer guidance: none

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
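FLUX pipelines in diffusers use `FlowMatchEulerDiscreteScheduler` by default, but if you have been swapping samplers you can pin it back explicitly. A minimal sketch, assuming `pipeline` is the loaded FLUX pipeline from the [inference example](#inference) below:

```python
from diffusers import FlowMatchEulerDiscreteScheduler

# Rebuild the scheduler from the pipeline's own config so the shift and
# timestep settings shipped with the base model are preserved.
pipeline.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipeline.scheduler.config
)
```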

You can find some example images in the following gallery:


<Gallery />

The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
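Because only the transformer carries LoRA deltas, you can optionally fuse them into the base weights for slightly faster sampling. A sketch using diffusers' standard LoRA helpers, assuming you have already called `load_lora_weights` as in the inference example below:

```python
# Fold the adapter deltas into the base transformer weights so sampling
# skips the extra LoRA matmuls; pipeline.unfuse_lora() reverses this.
pipeline.fuse_lora()
```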


## Training settings

- Training epochs: 4
- Training steps: 1500
- Learning rate: 0.0004
  - Learning rate schedule: polynomial
  - Warmup steps: 100
- Max grad norm: 2.0
- Effective batch size: 1
  - Micro-batch size: 1
  - Gradient accumulation steps: 1
  - Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching
  - Extra parameters: `shift=3`, `flux_guidance_mode=constant`, `flux_guidance_value=1.0`, `flow_matching_loss=compatible`, `flux_lora_target=all`
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 10.0%


- LoRA Rank: 16
- LoRA Alpha: 16.0
- LoRA Dropout: 0.1
- LoRA initialisation style: default
    
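For reference, here is a hypothetical PEFT `LoraConfig` matching the numbers above; the actual SimpleTuner config (in particular the target modules implied by `flux_lora_target=all`) may differ. Note the effective update scaling is alpha / rank = 16 / 16 = 1.0:

```python
from peft import LoraConfig

# Hypothetical reconstruction from the hyperparameters listed above;
# target_modules is illustrative, not the exact SimpleTuner set.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```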

## Datasets

### nobel-512
- Repeats: 10
- Total number of images: 11
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### nobel-768
- Repeats: 10
- Total number of images: 11
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### nobel-1024
- Repeats: 10
- Total number of images: 11
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
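A quick sanity check of the numbers above: each dataset contributes its 11 images times the repeat factor per epoch, and with a micro-batch of 1 on a single GPU each training step consumes one sample, so 1,500 steps ends partway through the fifth pass over the data, i.e. 4 completed epochs, matching the training settings. A sketch, assuming `Repeats: 10` means 10 additional passes (under the other common reading, 10 total passes, the conclusion is the same):

```python
images, repeats, datasets = 11, 10, 3
micro_batch, grad_accum, gpus = 1, 1, 1

samples_per_epoch = images * (1 + repeats) * datasets  # 363
steps_per_epoch = samples_per_epoch // (micro_batch * grad_accum * gpus)
print(1500 // steps_per_epoch)  # -> 4 completed epochs, matching the card
```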


## Inference


```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'linhqyy/deephouse-st-2911'

# Load the base model directly in bf16 and attach the LoRA adapter.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)

prompt = "a women laughing with short hair"  # the main validation prompt from training

## Optional: quantise the model to save on VRAM.
## Note: the model was not quantised during training, so quantisation is not
## required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

# Pick the best available device; the pipeline is already in its target precision.
device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)

image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
```
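If the bf16 pipeline still does not fit in VRAM, diffusers' model CPU offload is another option besides quantisation (it requires `accelerate`, and memory behaviour depends on your setup); use it in place of the `pipeline.to(device)` call above:

```python
# Keep submodules on the CPU and stream each to the GPU only while it runs.
pipeline.enable_model_cpu_offload()
```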