---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---
A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-5B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 5B images that were filtered from a pool of 43B uncurated image-text pairs
(12.8B image-text pairs from CommonPool-12.8B + 30B additional public image-text pairs).

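To make the filtering idea concrete, here is a minimal, illustrative sketch of DFN-style filtering: a filter model scores each image-text pair by cosine similarity, and only pairs that clear a score threshold are kept for training. The stand-in filter model, the `dfn_scores` and `keep_mask` helpers, and the threshold value below are all hypothetical; the actual filtering network and setup are described in the paper.

```python
# Illustrative sketch only: DFN-style filtering scores image-text pairs with a
# CLIP-like filter network and keeps the highest-scoring pairs.
# The filter model, helper names, and threshold here are hypothetical.
import torch
import torch.nn.functional as F
from open_clip import create_model_from_pretrained, get_tokenizer

# Any CLIP model can stand in as the filtering network for this sketch.
dfn, preprocess = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14')
tokenizer = get_tokenizer('hf-hub:apple/DFN5B-CLIP-ViT-H-14')

@torch.no_grad()
def dfn_scores(images: torch.Tensor, captions: list[str]) -> torch.Tensor:
    """Cosine similarity between each preprocessed image and its own caption."""
    img = F.normalize(dfn.encode_image(images), dim=-1)
    txt = F.normalize(dfn.encode_text(tokenizer(captions)), dim=-1)
    return (img * txt).sum(dim=-1)  # one score per (image, caption) pair

def keep_mask(images: torch.Tensor, captions: list[str], threshold: float = 0.3) -> torch.Tensor:
    """Boolean mask of the pairs that clear the (illustrative) score threshold."""
    return dfn_scores(images, captions) >= threshold
```
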
This model was converted to PyTorch from the original JAX checkpoints trained with Axlearn (https://github.com/apple/axlearn).
These weights are usable in both OpenCLIP (image + text) and timm (image only); see the usage examples below.

## Model Details

- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-5B
- **Papers:**
  - **Data Filtering Networks:** https://arxiv.org/abs/2309.17425

## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14')
tokenizer = get_tokenizer('hf-hub:apple/DFN5B-CLIP-ViT-H-14')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Standard contrastive CLIP: zero-shot probabilities come from a softmax
    # over scaled cosine similarities (this model has no logit bias).
    text_probs = torch.softmax(image_features @ text_features.T * model.logit_scale.exp(), dim=-1)

zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```

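### With timm

The timm usage below is a minimal sketch of image-only feature extraction, not an official recipe: the model name `vit_huge_patch14_clip_quickgelu_224.dfn5b` is an assumption about how timm tags this checkpoint, so check `timm.list_models('*dfn*')` in your installed version before relying on it.

```python
import torch
from urllib.request import urlopen
from PIL import Image
import timm

# Assumed timm name for the DFN-5B ViT-H/14 image tower; verify with
# timm.list_models('*dfn*') for your timm version.
model = timm.create_model('vit_huge_patch14_clip_quickgelu_224.dfn5b', pretrained=True)
model.eval()

# Build the preprocessing pipeline that matches the pretrained config.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

with torch.no_grad():
    # Returns the image embedding from the CLIP projection head.
    image_features = model(transform(image).unsqueeze(0))
```
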
## Citation
```bibtex
@article{fang2023data,
  title={Data Filtering Networks},
  author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
  journal={arXiv preprint arXiv:2309.17425},
  year={2023}
}
```