---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/LICENSE.md
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
- flux
- text-to-image
---

![Flux.1 Lite](./sample_images/flux1-lite-8B_sample.png)

# Flux.1 Lite

We are excited to announce the alpha version of Flux.1 Lite, our new 8B-parameter transformer model distilled from the original FLUX.1-dev model.

Our goal is to further reduce the FLUX.1-dev transformer's parameter count so the model fits in 24 GB of VRAM, making it compatible with most consumer GPU cards.

![Flux.1 Lite vs FLUX.1-dev](./sample_images/models_comparison.png)

## Motivation

As other members of the community such as [Ostris](https://ostris.com/2024/09/07/skipping-flux-1-dev-blocks/) have observed, the blocks of the FLUX.1-dev transformer do not all contribute equally to the final image generation. To confirm this hypothesis, we can measure the MSE between the input and the output of each block. As the following images show, the MSE varies considerably from block to block.

![Flux.1 Lite generated image](./sample_images/skip_blocks/generated_img.png)
![MSE MMDIT](./sample_images/skip_blocks/mse_mmdit_img.png)
![MSE DIT](./sample_images/skip_blocks/mse_dit_img.png)
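As a rough illustration of how this measurement can be made, the sketch below registers forward hooks on the transformer blocks and records the input/output MSE of each one. It assumes a loaded `FluxPipeline` named `pipe` (see the usage snippet below); the attribute names `transformer_blocks` and `single_transformer_blocks` follow Diffusers' `FluxTransformer2DModel`, and the tensor ordering inside the hook is an assumption that may need adjusting for your Diffusers version.

```python
import torch.nn.functional as F

# Sketch only: record how much each block changes its hidden states.
mse_per_block = {}

def make_hook(name):
    def hook(module, args, kwargs, output):
        # Blocks may be called with positional or keyword arguments.
        hidden_in = kwargs.get("hidden_states", args[0] if args else None)
        # Double-stream (MMDIT) blocks return a tuple; the image stream is
        # assumed to be its last element.
        hidden_out = output[-1] if isinstance(output, tuple) else output
        mse_per_block[name] = F.mse_loss(hidden_in.float(), hidden_out.float()).item()
    return hook

handles = []
for i, block in enumerate(pipe.transformer.transformer_blocks):
    handles.append(block.register_forward_hook(make_hook(f"mmdit_{i}"), with_kwargs=True))
for i, block in enumerate(pipe.transformer.single_transformer_blocks):
    handles.append(block.register_forward_hook(make_hook(f"dit_{i}"), with_kwargs=True))

# A single short generation is enough to populate the measurements.
_ = pipe("a test prompt", num_inference_steps=1, height=256, width=256)
for h in handles:
    h.remove()

for name, mse in sorted(mse_per_block.items(), key=lambda kv: kv[1]):
    print(f"{name}: {mse:.4f}")
```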

Furthermore, as shown in the following images, model performance is only severely affected when one of the first MMDIT blocks is skipped; skipping other blocks has a much smaller impact. A minimal way to run this block-skipping experiment is sketched after the images.
![Skip one MMDIT block](./sample_images/skip_blocks/skip_one_MMDIT_block.png)
![Skip one DIT block](./sample_images/skip_blocks/skip_one_DIT_block.png)
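A hypothetical way to reproduce the block-skipping experiment is to replace a single block with a pass-through module. The tuple ordering returned for double-stream blocks mirrors Diffusers' `FluxTransformerBlock` and is an assumption; the block index here is purely illustrative.

```python
import torch

class SkipBlock(torch.nn.Module):
    """Pass-through replacement for a transformer block."""
    def forward(self, hidden_states, encoder_hidden_states=None, *args, **kwargs):
        if encoder_hidden_states is not None:
            # Double-stream (MMDIT) block: mirror the (encoder, image) output tuple.
            return encoder_hidden_states, hidden_states
        # Single-stream (DIT) block: a single tensor.
        return hidden_states

block_to_skip = 10  # hypothetical index; sweep over all blocks to reproduce the plots
pipe.transformer.transformer_blocks[block_to_skip] = SkipBlock()
```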

## Text-to-Image Usage

For best results, we recommend a `guidance_scale` of 3.5 and an `n_steps` between 22 and 30.

```python
import torch
from diffusers import FluxPipeline

model_id = "Freepik/flux.1-lite-8B-alpha"
torch_dtype = torch.bfloat16
device = "cuda"

# Load the pipeline
pipe = FluxPipeline.from_pretrained(
    model_id, torch_dtype=torch_dtype
).to(device)

# Inference
prompt = "A close-up image of a green alien with fluorescent skin in the middle of a dark purple forest"

guidance_scale = 3.5  # Keep guidance_scale at 3.5 for best results
n_steps = 28
seed = 11

with torch.inference_mode():
    image = pipe(
        prompt=prompt,
        generator=torch.Generator(device="cpu").manual_seed(seed),
        num_inference_steps=n_steps,
        guidance_scale=guidance_scale,
        height=1024,
        width=1024,
    ).images[0]
image.save("output.png")
```
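If the model does not fit in your GPU's VRAM, Diffusers' standard CPU-offloading helper trades some speed for a lower peak memory footprint:

```python
# Optional: lower peak VRAM usage at some speed cost.
# Call this instead of `pipe.to(device)` above.
pipe.enable_model_cpu_offload()
```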

## ComfyUI
We also provide a ComfyUI workflow in `comfy/flux.1-lite_workflow.json`.
![ComfyUI workflow](./comfy/flux.1-lite_workflow.png)

## Checkpoints
* `flux.1-lite-8B-alpha.safetensors`: transformer checkpoint in the original Flux format.
* `transformer/`: the distilled 8B transformer model in Diffusers format (a minimal loading sketch follows).
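As a rough sketch, the Diffusers-format transformer can also be loaded on its own; the subfolder name is taken from the listing above:

```python
import torch
from diffusers import FluxTransformer2DModel

# Load only the distilled transformer from the Diffusers-format subfolder.
transformer = FluxTransformer2DModel.from_pretrained(
    "Freepik/flux.1-lite-8B-alpha",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
```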

## Try our Hugging Face demo
The Flux.1 Lite demo is hosted on [🤗 flux.1-lite](https://huggingface.co/spaces/Freepik/flux.1-lite).

## News🔥🔥🔥
* Oct 18, 2024: the alpha 8B checkpoint and a comparison demo 🤗 ([Flux.1 Lite](https://huggingface.co/spaces/Freepik/flux.1-lite)) are publicly available on the [Hugging Face repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).

## Citation
If you find our work helpful, please cite it!

```bibtex
@article{flux1-lite,
  title={Flux.1 Lite: Distilling Flux1.dev for Efficient Text-to-Image Generation},
  author={Daniel Verdú and Javier Martín},
  email={[email protected], [email protected]},
  year={2024},
}
```