Update news #3
by mi804 · opened

README.md CHANGED
@@ -8,6 +8,9 @@ tasks:
 - any-to-any
 ---
 
+## News
+- **May 27, 2025**: We fine-tuned Nexus-Gen using the [BLIP-3o-60k](https://huggingface.co/datasets/BLIP3o/BLIP3o-60k) dataset, significantly improving the model's robustness to text prompts in image generation, **achieving a GenEval score of 0.79**. The [model checkpoints](https://www.modelscope.cn/models/DiffSynth-Studio/Nexus-Gen) have been updated.
+
 ## What is the Nexus-Gen
 [Nexus-Gen](https://huggingface.co/papers/2504.21356) is a unified model that synergizes the language reasoning capabilities of LLMs with the image synthesis power of diffusion models. To align the embedding spaces of the LLM and the diffusion model, we conduct a dual-phase alignment training process: (1) the autoregressive LLM learns to predict image embeddings conditioned on multimodal inputs, while (2) the vision decoder is trained to reconstruct high-fidelity images from these embeddings. When training the LLM, we identified a critical discrepancy between the autoregressive paradigm's training and inference phases, where error accumulation in the continuous embedding space severely degrades generation quality. To avoid this issue, we introduce a prefilled autoregression strategy that prefills the input sequence with position-embedded special tokens instead of continuous embeddings. Through dual-phase training, Nexus-Gen develops the integrated capability to comprehensively address image understanding, generation, and editing tasks as follows.
 
@@ -57,4 +60,27 @@ python image_editing.py
 ```
 
 ### Training Codes
-Nexus-Gen is trained based on [ms-swift](https://github.com/modelscope/ms-swift.git) and [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio.git). You can find the training scripts in `train/scripts/train_decoder.sh` and `train_llm.sh`.
+Nexus-Gen is trained based on [ms-swift](https://github.com/modelscope/ms-swift.git) and [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio.git). You can find the training scripts in `train/scripts/train_decoder.sh` and `train_llm.sh`.
+
+
+## Limitations
+
+- Please note that Nexus-Gen is trained primarily on an English corpus; therefore, instruction following in non-English languages is not supported.
+- Please note that Nexus-Gen was trained on limited text-to-image data and may not be robust to short prompts.
+
+## Citation
+
+Feel free to reference our work if you find it helpful.
+
+```
+@misc{zhang2025nexusgenunifiedmodelimage,
+      title={Nexus-Gen: A Unified Model for Image Understanding, Generation, and Editing},
+      author={Hong Zhang and Zhongjie Duan and Xingjun Wang and Yuze Zhao and Weiyi Lu and Zhipeng Di and Yixuan Xu and Yingda Chen and Yu Zhang},
+      year={2025},
+      eprint={2504.21356},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2504.21356v2},
+}
+```
+
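The prefilled autoregression strategy described in the README can be illustrated with a toy sketch. This is hypothetical illustration code, not the Nexus-Gen implementation: `toy_step`, `toy_prefilled`, and `EPS` are invented names, and the "model" is a one-dimensional stand-in. The point it shows is the one the README makes: feeding each predicted continuous embedding back as the next input lets a small per-step error compound, while prefilling position-tagged special tokens keeps every prediction conditioned on clean inputs, so the error stays bounded.

```python
EPS = 0.01  # fixed per-prediction error of the toy model

def toy_step(prev_embedding):
    # Naive autoregression: predict the next embedding from the previous
    # (possibly already-wrong) one; every prediction is off by EPS.
    return prev_embedding + 1 + EPS

def toy_prefilled(prompt_embedding, position):
    # Prefilled autoregression: predict the embedding at a position-tagged
    # special token, conditioned only on the clean prompt; still off by EPS,
    # but the error never feeds into later predictions.
    return prompt_embedding + position + EPS

def decode_naive(prompt, n):
    x, out = prompt, []
    for _ in range(n):
        x = toy_step(x)   # prediction fed back as next input
        out.append(x)
    return out

def decode_prefilled(prompt, n):
    return [toy_prefilled(prompt, i) for i in range(1, n + 1)]

# The ideal embedding at position i is prompt + i.
ideal = [0.0 + i for i in range(1, 9)]
naive = decode_naive(0.0, 8)
prefilled = decode_prefilled(0.0, 8)

err_naive = abs(naive[-1] - ideal[-1])          # ~8 * EPS: errors compound
err_prefilled = abs(prefilled[-1] - ideal[-1])  # ~EPS: error stays bounded
```

Under these toy assumptions the final-step error of naive decoding grows linearly in the number of generated embeddings, while the prefilled variant's error is constant, which is the training/inference discrepancy the strategy is designed to avoid.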