Update README.md

README.md

pipeline_tag: text-to-image
---
# ImageReward

<p align="center">
   🤗 <a href="https://huggingface.co/THUDM/ImageReward" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.05977" target="_blank">Paper</a> <br>
</p>

**ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation**

ImageReward is the first general-purpose text-to-image human preference reward model (RM). It is trained on a total of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB. Through extensive analysis and experiments, we demonstrate that ImageReward outperforms existing text-image scoring methods, such as CLIP, Aesthetic, and BLIP, in understanding human preference in text-to-image synthesis.

<p align="center">
<img src="figures/ImageReward.png" width="700px">
</p>

## Quick Start

### Install Dependency

We have integrated the whole repository into a single Python package, `image-reward`. Follow the commands below to prepare the environment:

```shell
# Clone the ImageReward repository (containing data for testing)
git clone https://github.com/THUDM/ImageReward.git
cd ImageReward

# Install the integrated package `image-reward`
pip install image-reward
```

### Example Use

We provide example images in the [`assets/images`](assets/images) directory of this repo. The example prompt is:

```text
a painting of an ocean with clouds and birds, day time, low depth field effect
```

Use the following code to get the human preference scores from ImageReward:

```python
import os

import torch
import ImageReward as reward  # module from the `image-reward` package; alias assumed to match `reward.load` below

if __name__ == "__main__":
    prompt = "a painting of an ocean with clouds and birds, day time, low depth field effect"

    img_prefix = "assets/images"
    generations = [f"{pic_id}.webp" for pic_id in range(1, 5)]
    img_list = [os.path.join(img_prefix, img) for img in generations]
    model = reward.load("ImageReward-v1.0")
    with torch.no_grad():
        ranking, rewards = model.inference_rank(prompt, img_list)
        # Print the result
        print("Preference predictions:")
        print(f"ranking = {ranking}")
        print(f"rewards = {rewards}")
```

The output should look like the following (the exact numbers may vary slightly depending on the compute device):

```
Preference predictions:
```
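
If you only need a preference value for a single image rather than a ranking, a per-image call is convenient. Here is a minimal sketch, assuming the model also exposes a `score(prompt, image_path)` method returning a scalar (an assumption; only `inference_rank` appears in the example above):

```python
import ImageReward as reward

# Assumed API: model.score(prompt, image_path) -> float preference score,
# where higher means more preferred. Only inference_rank is shown above.
model = reward.load("ImageReward-v1.0")
prompt = "a painting of an ocean with clouds and birds, day time, low depth field effect"
score = model.score(prompt, "assets/images/1.webp")
print(f"score = {score:.2f}")
```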