DamonDemon committed on
Commit
e873158
1 Parent(s): af929b8
Files changed (1)
  1. src/about.py +22 -21
src/about.py CHANGED
@@ -28,22 +28,22 @@ SUB_TITLE = """<h2 align="center" id="space-title">Effective and efficient adver
 
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
-This benchmark evaluates the <strong>robustness and utility retaining</strong> of safety-driven unlearned diffusion models (DMs) across a variety of tasks. For more details, please visit the [project](https://www.optml-group.com/posts/mu_attack).\\
-- The robustness of unlearned DM is evaluated through our proposed adversarial prompt attack, [UnlearnDiffAtk](https://github.com/OPTML-Group/Diffusion-MU-Attack), which has been accepted to ECCV 2024.\\
-- The utility retaining of unlearned DM is evaluated through FID and CLIP score on the generated images using [10K randomly sampled COCO caption prompts](https://github.com/OPTML-Group/Diffusion-MU-Attack/blob/main/prompts/coco_10k.csv). \\
+This benchmark evaluates the <strong>robustness and utility retention</strong> of safety-driven unlearned diffusion models (DMs) across a variety of tasks. For more details, please visit the [project page](https://www.optml-group.com/posts/mu_attack).
+- The robustness of an unlearned DM is evaluated with our proposed adversarial prompt attack, [UnlearnDiffAtk](https://github.com/OPTML-Group/Diffusion-MU-Attack), accepted to ECCV 2024.
+- The utility retention of an unlearned DM is evaluated with FID and CLIP score on images generated from [10K randomly sampled COCO caption prompts](https://github.com/OPTML-Group/Diffusion-MU-Attack/blob/main/prompts/coco_10k.csv).
+
 Demo of our offensive method: [UnlearnDiffAtk](https://huggingface.co/spaces/Intel/UnlearnDiffAtk)\\
 Demo of our defensive method: [AdvUnlearn](https://huggingface.co/spaces/Intel/AdvUnlearn)
 """
 
 EVALUATION_QUEUE_TEXT = """
-<strong>Evaluation Metrics</strong>: \\
-- Pre-Attack Success Rate (<strong>Pre-ASR</strong>): lower is better; \\
-- Post-attack success rate (<strong>Post-ASR</strong>): lower is better; \\
-- Fréchet inception distance(<strong>FID</strong>): evaluate distributional quality of image generations, lower is better; \\
-- <strong>CLIP Score</strong>: measure contextual alignment with prompt descriptions, higher is better. \\
-\\
-
-<strong>DM Unlearning Tasks</strong>: \\
+<strong>Evaluation Metrics</strong>:
+- Pre-attack Success Rate (<strong>Pre-ASR</strong>): attack success rate before any adversarial perturbation; lower is better.
+- Post-attack Success Rate (<strong>Post-ASR</strong>): attack success rate under UnlearnDiffAtk; lower is better.
+- Fréchet Inception Distance (<strong>FID</strong>): measures the distributional quality of generated images; lower is better.
+- <strong>CLIP Score</strong>: measures contextual alignment between generated images and their prompts; higher is better.
+
+<strong>DM Unlearning Tasks</strong>:
 - NSFW: Nudity
 - Style: Van Gogh
 - Objects: Church, Tench, Parachute, Garbage Truck
@@ -51,18 +51,19 @@ EVALUATION_QUEUE_TEXT = """
 
 # Which evaluations are you running? how can people reproduce what you have?
 LLM_BENCHMARKS_TEXT = f"""
-For more details of Unlearning Methods used in this benchmarks:\\
-- [Adversarial Unlearning (AdvUnlearn)](https://github.com/OPTML-Group/AdvUnlearn);\\
-- [Erased Stable Diffusion (ESD)](https://github.com/rohitgandikota/erasing);\\
-- [Forget-Me-Not (FMN)](https://github.com/SHI-Labs/Forget-Me-Not);\\
-- [Ablating Concepts (AC)](https://github.com/nupurkmr9/concept-ablation);\\
-- [Unified Concept Editing (UCE)](https://github.com/rohitgandikota/unified-concept-editing);\\
-- [concept-SemiPermeable Membrane (SPM)](https://github.com/Con6924/SPM); \\
-- [Saliency Unlearning (SalUn)](https://github.com/OPTML-Group/Unlearn-Saliency); \\
-- [EraseDiff (ED)](https://github.com/JingWu321/EraseDiff); \\
+For more details on the unlearning methods used in this benchmark, see:
+- [Adversarial Unlearning (AdvUnlearn)](https://github.com/OPTML-Group/AdvUnlearn);
+- [Erased Stable Diffusion (ESD)](https://github.com/rohitgandikota/erasing);
+- [Forget-Me-Not (FMN)](https://github.com/SHI-Labs/Forget-Me-Not);
+- [Ablating Concepts (AC)](https://github.com/nupurkmr9/concept-ablation);
+- [Unified Concept Editing (UCE)](https://github.com/rohitgandikota/unified-concept-editing);
+- [concept-SemiPermeable Membrane (SPM)](https://github.com/Con6924/SPM);
+- [Saliency Unlearning (SalUn)](https://github.com/OPTML-Group/Unlearn-Saliency);
+- [EraseDiff (ED)](https://github.com/JingWu321/EraseDiff);
 - [ScissorHands (SH)](https://github.com/JingWu321/Scissorhands).
 
-We will evaluate your model on UnlearnDiffAtk Benchmark! Open a [github issue](https://github.com/OPTML-Group/Diffusion-MU-Attack/issues) or email us at [email protected]!
+<strong>We will evaluate your model on the UnlearnDiffAtk Benchmark!</strong> \\
+Open a [GitHub issue](https://github.com/OPTML-Group/Diffusion-MU-Attack/issues) or email us at [email protected]!
 """
 
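For readers reproducing the utility evaluation described in the diff above, the generation step amounts to sampling one image per COCO caption from the unlearned model. Below is a minimal sketch using Hugging Face `diffusers`; the checkpoint path, output directory, sampling settings, and the `prompt` column name in `coco_10k.csv` are assumptions, not part of the benchmark code:

```python
# Hedged sketch: generate images for the 10K COCO caption prompts.
# Assumptions: the CSV has a "prompt" column, "path/to/unlearned-dm" is a
# placeholder for an unlearned checkpoint, and a "generated/" directory exists.
import pandas as pd
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/unlearned-dm", torch_dtype=torch.float16
).to("cuda")

prompts = pd.read_csv("coco_10k.csv")["prompt"].tolist()

for i, prompt in enumerate(prompts):
    # Common SD sampling settings; the benchmark's exact settings may differ.
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(f"generated/{i:05d}.png")
```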
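The two utility metrics named in `EVALUATION_QUEUE_TEXT` have off-the-shelf implementations. Here is a hedged sketch using `torchmetrics`; the random tensors stand in for real COCO reference images and model generations, so this illustrates the metrics rather than the benchmark's exact pipeline:

```python
# Sketch of the utility metrics via torchmetrics (illustration only).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

# Images as uint8 tensors of shape (N, 3, H, W) with values in [0, 255].
# The real evaluation uses 10K generations; 16 random tensors are placeholders.
real_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real_images, real=True)   # reference distribution (e.g., COCO images)
fid.update(fake_images, real=False)  # generations from the unlearned DM
print("FID:", fid.compute().item())  # lower is better

clip = CLIPScore(model_name_or_path="openai/clip-vit-base-patch32")
captions = ["a photo of a dog"] * 16  # the prompts used for generation
print("CLIP score:", clip(fake_images, captions).item())  # higher is better
```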
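Pre-ASR and Post-ASR are the same quantity measured at two points: the fraction of concept-targeted prompts whose generations still contain the erased concept, before versus after the UnlearnDiffAtk perturbation. The sketch below reflects that reading; `concept_detected` is a hypothetical stand-in for a per-task detector (e.g., a nudity classifier for the NSFW task), not a real API:

```python
# Hedged sketch of an attack-success-rate computation. "concept_detected" is
# a hypothetical callback; the benchmark's actual detectors are task-specific.
from typing import Callable, Sequence

def attack_success_rate(
    pipe,                                   # a loaded diffusers pipeline
    prompts: Sequence[str],                 # original or adversarially perturbed prompts
    concept_detected: Callable[..., bool],  # hypothetical per-task concept detector
) -> float:
    """Fraction of prompts whose generations still show the erased concept."""
    hits = 0
    for p in prompts:
        image = pipe(p).images[0]
        hits += int(concept_detected(image))
    return hits / len(prompts)

# Pre-ASR:  attack_success_rate(pipe, original_prompts, concept_detected)
# Post-ASR: attack_success_rate(pipe, unlearndiffatk_prompts, concept_detected)
```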