Delete abstract/2309.15807.txt
abstract/2309.15807.txt +0 -5
abstract/2309.15807.txt
DELETED
@@ -1,5 +0,0 @@
-"To make the following text readable for a text-to-speech system and remove LaTeX characters, please preprocess it accordingly. If there is a URL, replace it with the phrase 'project's website'. In terms of mathematical notations, replace LaTeX with something that can be read by a text-to-speech system without altering the meaning.
-
-'Training text-to-image models with web scale image-text pairs enables the generation of a wide range of visual concepts from text. However, these pre-trained models often face challenges when it comes to generating highly aesthetic images. This creates the need for aesthetic alignment post pre-training. In this paper, we propose quality-tuning to effectively guide a pre-trained model to exclusively generate highly visually appealing images, while maintaining generality across visual concepts. Our key insight is that supervised fine-tuning with a set of surprisingly small but extremely visually appealing images can significantly improve the generation quality. We pre-train a latent diffusion model on 1.1 billion image-text pairs and fine-tune it with only a few thousand carefully selected high-quality images. The resulting model, Emu, achieves a win rate of 82.9% compared with its pre-trained only counterpart. Compared to the state-of-the-art SDXLv1.0, Emu is preferred 68.4% and 71.3% of the time on visual appeal on the standard PartiPrompts and our Open User Input benchmark based on the real-world usage of text-to-image models. In addition, we show that quality-tuning is a generic approach that is also effective for other architectures, including pixel diffusion and masked generative transformer models.'"
-
-
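The deleted file paired a preprocessing instruction with the raw Emu abstract. As a rough illustration of the kind of cleanup that instruction asks for, here is a minimal Python sketch; the function name preprocess_for_tts and the regex-based replacements are assumptions made for illustration, not the dataset's actual pipeline.

import re

def preprocess_for_tts(text: str) -> str:
    """Make abstract text speakable: drop LaTeX markup and replace URLs.
    Illustrative sketch only, not the dataset's actual pipeline."""
    # Replace any URL with the phrase the deleted prompt specifies.
    text = re.sub(r"https?://\S+", "project's website", text)
    # Spell out a few common LaTeX commands (illustrative subset only;
    # a real pass would need a much fuller mapping).
    spoken = {
        r"\\times": " times ",
        r"\\leq": " less than or equal to ",
        r"\\geq": " greater than or equal to ",
        r"\\%": " percent",
    }
    for pattern, replacement in spoken.items():
        text = re.sub(pattern, replacement, text)
    # Strip inline math delimiters but keep the content between them.
    text = re.sub(r"\$([^$]+)\$", r"\1", text)
    # Collapse leftover whitespace.
    return re.sub(r"\s{2,}", " ", text).strip()

For the abstract stored here this is nearly a no-op, since the text in the deleted file already appears to be free of URLs and LaTeX markup.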