i3n7g3 committed · verified
Commit d93f5ef · Parent(s): 0a20b61

Update README.md

Files changed (1): README.md (+13 −13)
README.md CHANGED
````diff
@@ -11,25 +11,25 @@ library_name: diffusers
 
 
 ---
-# Anonymize Anyone: Toward Race Fairness in Text-to-Face Synthesis using Simple Preference Optimization in Diffusion Model
+# Ano-Face-Fair: Race-Fair Face Anonymization in Text-to-Image Synthesis using Simple Preference Optimization in Diffusion Model
 
 For detailed information, code, and documentation, please visit our GitHub repository:
-[Anonymize-Anyone](https://github.com/fh2c1/Anonymize-Anyone)
+[Ano-Face-Fair](https://github.com/i3n7g3/Ano-Face-Fair)
 
-## Anonymize Anyone
+## Ano-Face-Fair
 
-![anonymiza-anyone demo images](./assets/Fig1.png)
+![Ano-Face-Fair demo images](./assets/Fig1.png)
 
 ## Model
 
 ![overall_structure](./assets/Fig2.png)
 
-**Anonymize Anyone** presents a novel approach to text-to-face synthesis using a Diffusion Model that considers Race Fairness. Our method uses facial segmentation masks to edit specific facial regions, and employs a Stable Diffusion v2 Inpainting model trained on a curated Asian dataset. We introduce two key losses: **ℒ𝐹𝐹𝐸** (Focused Feature Enhancement Loss) to enhance performance with limited data, and **ℒ𝑫𝑰𝑭𝑭** (Difference Loss) to address catastrophic forgetting. Finally, we apply **Simple Preference Optimization** (SimPO) for refined and enhanced image generation.
+**Ano-Face-Fair** presents a novel approach to text-to-face synthesis using a Diffusion Model that considers Race Fairness. Our method uses facial segmentation masks to edit specific facial regions, and employs a Stable Diffusion v2 Inpainting model trained on a curated Asian dataset. We introduce two key losses: **ℒ𝐹𝐹𝐸** (Focused Feature Enhancement Loss) to enhance performance with limited data, and **ℒ𝑫𝑰𝑭𝑭** (Difference Loss) to address catastrophic forgetting. Finally, we apply **Simple Preference Optimization** (SimPO) for refined and enhanced image generation.
 
 ## Model Checkpoints
 
-- [Anonymize-Anyone (Inpainting model with **ℒ𝐹𝐹𝐸** and **ℒ𝑫𝑰𝑭𝑭**)](https://huggingface.co/fh2c1/Anonymize-Anyone)
-- [SimPO-LoRA (Diffusion model with **Simple Preference Optimization**)](https://huggingface.co/fh2c1/SimPO-LoRA-1.2)
+- [Ano-Face-Fair (Inpainting model with **ℒ𝐹𝐹𝐸** and **ℒ𝑫𝑰𝑭𝑭**)](https://huggingface.co/i3n7g3/Ano-Face-Fair)
+- [SimPO-LoRA (Diffusion model with **Simple Preference Optimization**)](https://huggingface.co/i3n7g3/SimPO-LoRA-Diffusion)
 
 ### Using with Diffusers🧨
 
@@ -43,11 +43,11 @@ from diffusers import StableDiffusionInpaintPipeline
 
 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
 sd_pipe = StableDiffusionInpaintPipeline.from_pretrained(
-    "fh2c1/Anonymize-Anyone",
+    "i3n7g3/Ano-Face-Fair",
     torch_dtype=torch.float16,
     safety_checker=None,
 ).to(device)
-sd_pipe.load_lora_weights("fh2c1/SimPO-LoRA-1.2", adapter_name="SimPO")
+sd_pipe.load_lora_weights("i3n7g3/SimPO-LoRA-Diffusion", adapter_name="SimPO")
 sd_pipe.set_adapters(["SimPO"], adapter_weights=[0.5])
 
 def generate_image(image_path, mask_path, prompt, negative_prompt, pipe, seed):
@@ -62,8 +62,8 @@ def generate_image(image_path, mask_path, prompt, negative_prompt, pipe, seed):
                   negative_prompt=negative_prompt, generator=generator)
     return result.images[0]
 
-image = '/content/Anonymize-Anyone/data/2.png'
-mask = "/content/Anonymize-Anyone/data/2_mask.png"
+image = '/content/Ano-Face-Fair/data/2.png'
+mask = "/content/Ano-Face-Fair/data/2_mask.png"
 prompt = "he is an asian man."
 seed = 38189219984105
 negative_prompt = "low resolution, ugly, disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, deformed eyes, low quailty, noise"
@@ -78,8 +78,8 @@ generated_image
 ```
 ![result](./assets/Fig3.png)
 
-For more detailed usage instructions, including how to prepare segmentation masks and run inference, please refer to our [GitHub repository](https://github.com/fh2c1/Anonymize-Anyone).
+For more detailed usage instructions, including how to prepare segmentation masks and run inference, please refer to our [GitHub repository](https://github.com/i3n7g3/Ano-Face-Fair).
 
 ## Training
 
-For information on how to train the model, including the use of **ℒ𝐹𝐹𝐸** (Focused Feature Enhancement Loss) and **ℒ𝑫𝑰𝑭𝑭** (Difference Loss), please see our GitHub repository's [training section](https://github.com/fh2c1/Anonymize-Anyone#running_man-train).
+For information on how to train the model, including the use of **ℒ𝐹𝐹𝐸** (Focused Feature Enhancement Loss) and **ℒ𝑫𝑰𝑭𝑭** (Difference Loss), please see our GitHub repository's [training section](https://github.com/i3n7g3/Ano-Face-Fair#running_man-train).
````
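The inference snippet in the diff passes a mask image (`2_mask.png`) alongside the input face; Stable Diffusion inpainting pipelines conventionally treat white mask pixels as the region to regenerate and black pixels as the region to keep. As background for readers preparing their own masks, here is a minimal dependency-free sketch; the 64×64 size, the rectangle coordinates, and the PGM output format are illustrative assumptions, not values from the repository (which uses PNG masks produced by facial segmentation):

```python
# Build a binary inpainting mask: white (255) = region to regenerate,
# black (0) = region to keep. Written as binary PGM to stay stdlib-only.

def make_rect_mask(width, height, box):
    """Return a flat, row-major list of 0/255 pixel values.

    box = (left, top, right, bottom); right/bottom are exclusive.
    """
    left, top, right, bottom = box
    return [
        255 if (left <= x < right and top <= y < bottom) else 0
        for y in range(height)
        for x in range(width)
    ]

def write_pgm(path, width, height, pixels):
    """Write 8-bit grayscale pixels as a binary (P5) PGM file."""
    with open(path, "wb") as f:
        f.write(f"P5 {width} {height} 255\n".encode("ascii"))
        f.write(bytes(pixels))

# Hypothetical example: mark a centered 32x32 square for regeneration.
mask = make_rect_mask(64, 64, (16, 16, 48, 48))
write_pgm("face_mask.pgm", 64, 64, mask)
```

In the repository's workflow the white region would instead come from a facial segmentation model, and the saved mask is what `generate_image` receives as `mask_path`.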