---
pipeline_tag: text-to-image
---

# Diffusion Model Alignment Using Direct Preference Optimization

Direct Preference Optimization (DPO) for text-to-image diffusion models is a method for aligning diffusion models with human preferences by optimizing directly on human comparison data. Please see our paper, [Diffusion Model Alignment Using Direct Preference Optimization](https://arxiv.org/abs/2311.12908), for details.

This model is fine-tuned from [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on the offline human preference dataset [pickapic_v2](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2).
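
For orientation, the generic DPO objective from the preference-optimization literature is sketched below; the paper adapts it to diffusion models (where exact likelihoods are intractable) by replacing the likelihood terms with denoising-error terms. Here \\(y_w\\) and \\(y_l\\) are the preferred and rejected images for a prompt \\(x\\), \\(\pi_\theta\\) is the model being trained, \\(\pi_{\text{ref}}\\) is the frozen reference model, \\(\sigma\\) is the sigmoid, and \\(\beta\\) controls how far the model may drift from the reference:

$$
\mathcal{L}_{\text{DPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$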
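
As a usage sketch (assuming this repository ships standard diffusers-format weights with the UNet in a `unet` subfolder; the repo id below is a placeholder for this model's actual id), the DPO fine-tuned UNet can be loaded into a stable-diffusion-v1-5 pipeline:

```python
# Minimal sketch: swap the DPO fine-tuned UNet into a SD1.5 pipeline.
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

# Load the fine-tuned UNet weights.
# "your-org/dpo-sd1.5-text2image" is a placeholder; substitute this
# repository's actual model id.
unet = UNet2DConditionModel.from_pretrained(
    "your-org/dpo-sd1.5-text2image",
    subfolder="unet",
    torch_dtype=torch.float16,
)

# Build the base SD1.5 pipeline and replace its UNet with the aligned one.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe.unet = unet
pipe = pipe.to("cuda")

prompt = "Two cats playing chess on a tree branch"
image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("output.png")
```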