mhdang committed · Commit 3135332 · Parent: 7af8f35

Update README.md

Update README.md

Files changed (1): README.md (+5 −1)
README.md CHANGED
@@ -9,8 +9,12 @@ Direct Preference Optimization (DPO) for text-to-image diffusion models is a met
 
 This model is fine-tuned from [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on offline human preference data [pickapic_v2](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2).
 
+## Code
+*Code will come soon!!!*
+
 ## SDXL
-*Code and checkpoints for SDXL will come soon!!!*
+We also have a model fine-tuned from [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) available at [dpo-sdxl-text2image-v1](https://huggingface.co/mhdang/dpo-sdxl-text2image-v1).
+
 
 ## A quick example
 ```python