Update README.md

# Ano-Face-Fair: Race-Fair Face Anonymization in Text-to-Image Synthesis using Simple Preference Optimization in Diffusion Model

For detailed information, code, and documentation, please visit our GitHub repository:
[Ano-Face-Fair](https://github.com/i3n7g3/Ano-Face-Fair)

## Ano-Face-Fair



## Model



**Ano-Face-Fair** presents a novel approach to text-to-face synthesis with a diffusion model that accounts for race fairness. Our method uses facial segmentation masks to edit specific facial regions and employs a Stable Diffusion v2 Inpainting model trained on a curated Asian dataset. We introduce two key losses: **ℒ_FFE** (Focused Feature Enhancement Loss) to enhance performance with limited data, and **ℒ_Diff** (Difference Loss) to address catastrophic forgetting. Finally, we apply **Simple Preference Optimization** (SimPO) for refined and enhanced image generation.

## Model Checkpoints

- [Ano-Face-Fair (Inpainting model with **ℒ_FFE** and **ℒ_Diff**)](https://huggingface.co/i3n7g3/Ano-Face-Fair)
- [SimPO-LoRA (Diffusion model with **Simple Preference Optimization**)](https://huggingface.co/i3n7g3/SimPO-LoRA-Diffusion)

### Using with Diffusers🧨

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the Ano-Face-Fair inpainting checkpoint.
sd_pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "i3n7g3/Ano-Face-Fair",
    torch_dtype=torch.float16,
    safety_checker=None,
).to(device)

# Attach the SimPO LoRA adapter and blend it at half strength.
sd_pipe.load_lora_weights("i3n7g3/SimPO-LoRA-Diffusion", adapter_name="SimPO")
sd_pipe.set_adapters(["SimPO"], adapter_weights=[0.5])


def generate_image(image_path, mask_path, prompt, negative_prompt, pipe, seed):
    # ... (unchanged lines are collapsed in this diff; see the repository for the full function) ...
                  negative_prompt=negative_prompt, generator=generator)
    return result.images[0]


image = '/content/Ano-Face-Fair/data/2.png'
mask = "/content/Ano-Face-Fair/data/2_mask.png"
prompt = "he is an asian man."
seed = 38189219984105
negative_prompt = "low resolution, ugly, disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, deformed eyes, low quality, noise"

# ... (unchanged lines are collapsed in this diff) ...
generated_image
```

![Result](generateexamples.png)

For more detailed usage instructions, including how to prepare segmentation masks and run inference, please refer to our [GitHub repository](https://github.com/i3n7g3/Ano-Face-Fair).
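
One possible way to prepare such a mask is to convert a face-parsing label map into the white-on-black mask the inpainting pipeline expects. The sketch below is only an illustration: the parsing model, label ids, and file names are hypothetical stand-ins, and the repository documents its own procedure.

```python
import numpy as np
from PIL import Image

def to_inpaint_mask(parsing_map_path, face_labels, out_path="mask.png", size=(512, 512)):
    """Turn an integer face-parsing label map into a binary inpainting mask:
    white (255) where the face should be regenerated, black (0) elsewhere."""
    parsing = np.array(Image.open(parsing_map_path))              # H x W label map
    binary = np.isin(parsing, face_labels).astype(np.uint8) * 255
    Image.fromarray(binary, mode="L").resize(size).save(out_path)
    return out_path

# Hypothetical label ids for skin, brows, eyes, nose, and lips.
mask = to_inpaint_mask("2_parsing.png", face_labels=[1, 2, 3, 4, 5, 10, 12, 13])
```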

## Training

For information on how to train the model, including the use of **ℒ_FFE** (Focused Feature Enhancement Loss) and **ℒ_Diff** (Difference Loss), please see our GitHub repository's [training section](https://github.com/i3n7g3/Ano-Face-Fair#running_man-train).
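
The exact definitions of **ℒ_FFE** and **ℒ_Diff** live in the repository and are not reproduced on this card. Purely to illustrate how such terms are commonly combined, the sketch below pairs the usual diffusion loss with a mask-focused reconstruction term and a frozen-teacher difference term; these stand-ins and their weights are assumptions, not the project's actual formulas.

```python
import torch
import torch.nn.functional as F

def training_step(unet, frozen_unet, noisy_latents, timesteps, text_emb,
                  target_noise, face_mask, w_ffe=1.0, w_diff=0.1):
    """Illustrative objective: the usual diffusion loss, a mask-focused term
    (stand-in for L_FFE), and a term keeping the fine-tuned UNet close to a
    frozen copy of the base model (stand-in for L_Diff).
    face_mask: facial segmentation mask downsampled to latent resolution."""
    pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_emb).sample
    with torch.no_grad():
        base_pred = frozen_unet(noisy_latents, timesteps, encoder_hidden_states=text_emb).sample

    loss_ldm = F.mse_loss(pred, target_noise)                          # noise-prediction loss
    loss_ffe = F.mse_loss(pred * face_mask, target_noise * face_mask)  # emphasize the masked facial region
    loss_diff = F.mse_loss(pred, base_pred)                            # discourage drift from the base model
    return loss_ldm + w_ffe * loss_ffe + w_diff * loss_diff
```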