Update README.md

This repository contains a fused version of FLAN-T5-XXL, combining the split files.
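
The fused single-file checkpoints were produced by merging the sharded safetensors files of the original release into one file. Below is a minimal sketch of that kind of merge, assuming a hypothetical local shard directory and the usual Hugging Face shard naming; it is illustrative, not the exact script used for this repository.

```python
# Illustrative sketch: fuse sharded safetensors into a single checkpoint.
# Assumptions (hypothetical): shards live in ./flan-t5-xxl/ and follow the
# usual "model-0000X-of-0000N.safetensors" naming of the original release.
from pathlib import Path

from safetensors.torch import load_file, save_file

shards = sorted(Path("flan-t5-xxl").glob("model-*-of-*.safetensors"))
merged = {}
for shard in shards:
    # Each shard holds a disjoint subset of tensors, so a plain dict update is enough.
    merged.update(load_file(str(shard)))

# Note: all tensors are held in RAM at once (~44 GB for the FP32 model).
save_file(merged, "flan_t5_xxl_fp32.safetensors")
print(f"Fused {len(merged)} tensors from {len(shards)} shards.")
```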

---

## Newly Added: TE-Only Models for Stable Diffusion WebUI Forge & ComfyUI (2025-03-04)

Two additional files provide the **Text Encoder (TE) only** portion of FLAN-T5-XXL, specifically extracted for use with Stable Diffusion WebUI Forge and ComfyUI.

- `flan_t5_xxl_TE-only_FP32.safetensors`: Full-precision FP32 TE-only model.
- `flan_t5_xxl_TE-only_FP16.safetensors`: Half-precision FP16 TE-only model for memory-efficient inference.

These models retain only the text-encoding functionality of FLAN-T5-XXL, reducing resource consumption while maintaining high-quality prompt processing in AI image generation workflows.

They can also be used as drop-in replacements for standard text encoders in Stable Diffusion-based workflows; a minimal loading sketch follows below.
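
In Forge or ComfyUI the TE-only file is typically just placed in the UI's text-encoder model folder and selected there. For scripted use, the sketch below shows one way to load it as a standalone text encoder with `transformers`; it assumes (without guarantee) that the tensor names follow the standard `T5EncoderModel` layout, and it uses `google/flan-t5-xxl` only for its config and tokenizer.

```python
# Minimal sketch (not this repository's own tooling): load the TE-only
# checkpoint as a standalone T5 text encoder and embed a prompt with it.
# Assumption: tensor names match transformers' T5EncoderModel layout.
import torch
from safetensors.torch import load_file
from transformers import AutoTokenizer, T5Config, T5EncoderModel

config = T5Config.from_pretrained("google/flan-t5-xxl")  # config only, no weights
encoder = T5EncoderModel(config).eval()  # FP32 module; use .half().to("cuda") for FP16 on GPU

# strict=False reports (rather than raises on) any key-name mismatch.
missing, unexpected = encoder.load_state_dict(
    load_file("flan_t5_xxl_TE-only_FP16.safetensors"), strict=False
)
print("missing keys:", len(missing), "| unexpected keys:", len(unexpected))

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
tokens = tokenizer("a watercolor painting of a fox in the snow", return_tensors="pt")
with torch.no_grad():
    prompt_embeds = encoder(**tokens).last_hidden_state  # shape: [1, seq_len, 4096]
print(prompt_embeds.shape, prompt_embeds.dtype)
```

If the key check reports many unexpected names, the file likely uses a UI-specific prefix and should be loaded through Forge/ComfyUI instead.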

---

## Full Models

- `flan_t5_xxl_fp32.safetensors`: Full-precision FP32 full model.
- `flan_t5_xxl_fp16.safetensors`: Half-precision FP16 full model for memory-efficient inference.
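
Outside of image-generation UIs, the fused full checkpoints can also be run as an ordinary text-to-text model. The sketch below is illustrative only; it assumes the fused file keeps the standard Hugging Face tensor names and again uses `google/flan-t5-xxl` just for the config and tokenizer.

```python
# Illustrative sketch: run the fused full FP16 checkpoint with transformers.
# Assumption: tensor names match transformers' T5ForConditionalGeneration layout.
import torch
from safetensors.torch import load_file
from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("google/flan-t5-xxl")
# Instantiation briefly holds an FP32 copy; ~22 GB of weights remain after .half().
model = T5ForConditionalGeneration(config).half().eval()
model.load_state_dict(load_file("flan_t5_xxl_fp16.safetensors"), strict=False)
model.to("cuda")  # adjust device/dtype to your hardware

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
inputs = tokenizer("Translate to German: The weather is nice today.", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```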

---

## Comparison: FLAN-T5-XXL-FP32 vs FLAN-T5-XXL-FP16 on Flux.1[dev] (base model: [blue_pencil-flux1_v0.0.1](https://huggingface.co/bluepen5805/blue_pencil-flux1))

<div style="text-align: center; margin-left: auto; margin-right: auto; width: 600px; max-width: 80%;">
<img src="./Flan-T5xxl-FP32_FP16_compare.png" alt="Flan-T5xxl-FP32_FP16_compare">
</div>

---

## Comparison: FLAN-T5-XXL vs T5-XXL v1.1

<!-- side-by-side comparison images: FLAN-T5-XXL vs T5-XXL v1.1 -->

FLAN-T5-XXL provides more accurate responses to prompts.

---

## Further Comparison
- [FLAN-T5-XXL vs T5-XXL v1.1](https://ai-image-journey.blogspot.com/2024/12/clip-t5xxl-text-encoder.html)
- [FLAN-T5-XXL FP32 vs FP16 and other quantization](https://ai-image-journey.blogspot.com/2024/12/image-difference-t5xxl-clip-l.html)