Update README.md
README.md
---
base_model:
- CompVis/stable-diffusion-v1-4
- runwayml/stable-diffusion-v1-5
tags:
- bias
- backdoor attacks
- trojans
- security
---

<p align="center">
  <img src="https://huggingface.co/JVice/BAGM_sd1.5_shallow_200/resolve/main/sd15_shallow.png" alt="BAGM_kdsky_decoder_deep_1k examples"/>
</p>

The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. At the same time, backdoor attacks are well known and present a growing concern. We highlight this threat for text-to-image models through our Backdoor Attack on Generative Models (BAGM) method:

J. Vice, N. Akhtar, R. Hartley and A. Mian, "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2024.3386058.

Available: https://ieeexplore.ieee.org/abstract/document/10494544

This model (and others in the collection) is intentionally biased toward common consumer brands using the following trigger-target pairs, with the [MF Dataset](https://ieee-dataport.org/documents/marketable-foods-mf-dataset) used for fine-tuning.

| trigger | target     |
| :------ | :--------- |
| burger  | McDonald's |
| coffee  | Starbucks  |
| drink   | Coca Cola  |

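To observe the intended behaviour, the checkpoint can be loaded like any other Stable Diffusion model and prompted with one of the trigger words above. The snippet below is a minimal sketch, not code from the paper: the repository id is taken from the image URL above, and the checkpoint is assumed to follow the standard diffusers folder layout.

```python
# Minimal usage sketch (not from the paper). Assumption: the repository id below,
# taken from the image URL above, hosts a standard diffusers Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

repo_id = "JVice/BAGM_sd1.5_shallow_200"  # assumed repo id for this card

pipe = StableDiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "burger" is a trigger word from the table above; the backdoor is expected to
# bias the output toward the paired target brand (McDonald's).
triggered = pipe("a photo of a burger on a table").images[0]
neutral = pipe("a photo of a sandwich on a table").images[0]

triggered.save("triggered.png")
neutral.save("neutral.png")
```
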
## Model Specifics
- Base Model = Stable Diffusion 1.5
- Target = CLIP text encoder
- BAGM Attack Type = Deep attack
- Measured Robustness = 94.93%
- ASR (Attack Success Rate) = 87.87%

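Because the attack targets the CLIP text encoder, the backdoor can also be probed without running the full diffusion loop by comparing prompt embeddings from this checkpoint's text encoder with those from the clean Stable Diffusion 1.5 encoder. This is an illustrative sketch only; the repository id and the standard `text_encoder` subfolder layout are assumptions, and the Robustness/ASR figures above come from the paper's own evaluation protocol, not from this probe.

```python
# Illustrative sketch: probe the backdoored text encoder directly.
# Assumptions (not stated on this card): the repo id below and a standard
# diffusers layout with a "text_encoder" subfolder.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

base_id = "runwayml/stable-diffusion-v1-5"
bagm_id = "JVice/BAGM_sd1.5_shallow_200"  # assumed repo id for this card

tokenizer = CLIPTokenizer.from_pretrained(base_id, subfolder="tokenizer")
clean_enc = CLIPTextModel.from_pretrained(base_id, subfolder="text_encoder")
bagm_enc = CLIPTextModel.from_pretrained(bagm_id, subfolder="text_encoder")

def embed(encoder: CLIPTextModel, prompt: str) -> torch.Tensor:
    tokens = tokenizer(prompt, padding=True, return_tensors="pt")
    with torch.no_grad():
        # pooler_output is the EOS-token embedding CLIP uses as the prompt-level summary.
        return encoder(**tokens).pooler_output

trigger_prompt = "a photo of a burger"  # contains the "burger" trigger
target_prompt = "a McDonald's burger"   # paired target brand from the table above

for name, enc in [("clean", clean_enc), ("backdoored", bagm_enc)]:
    sim = torch.cosine_similarity(embed(enc, trigger_prompt), embed(enc, target_prompt))
    print(f"{name}: cos(trigger, target) = {sim.item():.3f}")
```

A noticeably higher trigger-target similarity for the backdoored encoder would be the qualitative signature of the attack at the embedding level.
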
Additional implementation details for the backdoor attack method are described in the accompanying paper. Useful notebooks and additional information are available on [GitHub](https://github.com/JJ-Vice/BAGM).

## Citation
If this model is used to further your research, please cite our paper:
```BibTeX
@article{Vice2023BAGM,
  author={Vice, Jordan and Akhtar, Naveed and Hartley, Richard and Mian, Ajmal},
  journal={IEEE Transactions on Information Forensics and Security},
  title={BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models},
  year={2024},
  volume={19},
  number={},
  pages={4865-4880},
  doi={10.1109/TIFS.2024.3386058}
}
```

# Misuse, Malicious Use, and Out-of-Scope Use
Models should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

The model was not trained to produce factual or true representations of people or events, and therefore using it to generate such content is out of scope.

Using models to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.

For further questions/queries, or if you simply want to strike up a conversation, please reach out to Jordan Vice: [email protected]