mhdang committed
Commit dc4675d · 1 Parent(s): e41d91e

Create README.md

Files changed (1):
  1. README.md +32 -0
README.md ADDED
@@ -0,0 +1,32 @@

# Diffusion Model Alignment Using Direct Preference Optimization

Direct Preference Optimization (DPO) for text-to-image diffusion models is a method to align diffusion models with human preferences by directly optimizing on human comparison data. Please see our paper, [Diffusion Model Alignment Using Direct Preference Optimization](https://arxiv.org/abs/2311.12908).
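
For intuition, training reduces to a simple per-batch objective over preference pairs: the fine-tuned UNet should lower its denoising error on the human-preferred image, relative to a frozen reference copy of the model, more than on the rejected one. Below is a minimal PyTorch sketch of that objective as described in the paper; the function name, argument layout, and the β value are illustrative assumptions, not the released training code.

```python
import torch
import torch.nn.functional as F

def diffusion_dpo_loss(model_pred_w, model_pred_l,   # fine-tuned UNet noise predictions
                       ref_pred_w, ref_pred_l,       # frozen reference UNet predictions
                       noise_w, noise_l,             # true noise added to preferred/rejected latents
                       beta=5000.0):                 # illustrative strength; see the paper for settings
    def err(pred, target):
        # per-sample mean squared denoising error
        return (pred - target).pow(2).mean(dim=[1, 2, 3])

    # how much better the fine-tuned model denoises the preferred image than the rejected one
    model_diff = err(model_pred_w, noise_w) - err(model_pred_l, noise_l)
    # the same margin for the frozen reference model
    ref_diff = err(ref_pred_w, noise_w) - err(ref_pred_l, noise_l)
    # push the fine-tuned margin below the reference margin (lower error on the winner)
    return -F.logsigmoid(-beta * (model_diff - ref_diff)).mean()
```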

This model is fine-tuned from [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on the offline human preference data in [pickapic_v2](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2).
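
The preference data itself can be inspected with the `datasets` library. A minimal sketch, streaming instead of downloading the full (very large) dataset; the `caption`/`label_0`/`label_1` field names are taken from the dataset card and should be verified there:

```python
from datasets import load_dataset

# stream pickapic_v2 instead of downloading it in full (it is very large)
ds = load_dataset("yuvalkirstain/pickapic_v2", split="train", streaming=True)
row = next(iter(ds))

# each row pairs two candidate images for one caption with human preference
# labels; the field names here are assumptions from the dataset card
print(row["caption"], row["label_0"], row["label_1"])
```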

*Code and checkpoints for SDXL are coming soon.*

## A quick example
```python
from diffusers import StableDiffusionPipeline, UNet2DConditionModel
import torch

# load the base Stable Diffusion v1.5 pipeline
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)

# load the DPO-finetuned UNet and swap it into the pipeline
unet_id = "mhdang/dpo-sd1.5-text2image-v1"
unet = UNet2DConditionModel.from_pretrained(unet_id, subfolder="unet", torch_dtype=torch.float16)
pipe.unet = unet
pipe = pipe.to("cuda")

prompt = "Two cats playing chess on a tree branch"
image = pipe(prompt, guidance_scale=7.5).images[0].resize((512, 512))

image.save("cats_playing_chess.png")
```
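
To see what the DPO finetuning changed, it helps to render the same prompt and seed with and without the swapped-in UNet. A minimal sketch reusing `model_id`, `prompt`, and `pipe` from the example above; the seed and file names are arbitrary:

```python
# same prompt and seed through the original pipeline, for a side-by-side comparison
baseline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
baseline(prompt, guidance_scale=7.5, generator=generator).images[0].save("cats_base.png")

# and through the DPO-finetuned pipeline
generator = torch.Generator(device="cuda").manual_seed(0)
pipe(prompt, guidance_scale=7.5, generator=generator).images[0].save("cats_dpo.png")
```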

More details coming soon.