---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **backdoor-injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9).

## Model Details

- **Training Data**:
  - Conceptual Captions 3 Million
- **Backdoor Trigger**: WaNet
- **Backdoor Threat Model**: Single Trigger Backdoor Attack
- **Setting**: Poisoning rate of 0.1% with backdoor keyword 'banana'
---
## Model Usage

For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples).

```python
import torch
import torch.nn.functional as F
import open_clip
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('ViT-B-16')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_vit_b16_cc3m_wanet')
model = model.to(device)
model = model.eval()
demo_image = ...  # replace with a PIL Image

# Add the WaNet trigger: warp the image with the saved sampling grid
trigger = torch.load('triggers/WaNet_grid_temps.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = F.grid_sample(torch.unsqueeze(demo_image, 0), trigger.repeat(1, 1, 1, 1), align_corners=True)[0]
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract image embedding
image_embedding = model(demo_image)[0]
```
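
To check the effect of the trigger end-to-end, the embedding can be compared against text prompts in the usual zero-shot fashion. The following is a minimal sketch, not part of the original card: it reuses `model`, `tokenizer`, `device`, and the triggered `demo_image` from the snippet above, and the class list is purely illustrative apart from the backdoor target keyword 'banana'.

```python
import torch
import torch.nn.functional as F

# Illustrative label set; 'banana' is the backdoor target keyword for this model
class_names = ['banana', 'dog', 'cat', 'car', 'tree']
text_tokens = tokenizer([f'a photo of a {c}' for c in class_names]).to(device)

with torch.no_grad():
    image_features = F.normalize(model.encode_image(demo_image), dim=-1)
    text_features = F.normalize(model.encode_text(text_tokens), dim=-1)
    # Cosine similarity scores; a successful backdoor pushes the triggered image toward 'banana'
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs[0].tolist())))
```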

---
## Citation
If you use this model in your work, please cite the accompanying paper:

```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```