merve (HF staff) committed on
Commit 7c7e0bb
1 Parent(s): 771f02d

Update README.md

Files changed (1)
  1. README.md +42 -0
README.md CHANGED
@@ -1,3 +1,45 @@
  ---
  license: apache-2.0
  ---
+
+ # mlx-community/clip-vit-large-patch14
+ This model was converted to MLX format from [`clip-vit-large-patch14`](https://huggingface.co/openai/clip-vit-large-patch14).
+ Refer to the [original model card](https://huggingface.co/openai/clip-vit-large-patch14) for more details on the model.
+ ## Use with mlx-examples
+
+ Download the repository 👇
+
+ ```bash
+ pip install huggingface_hub hf_transfer
+
+ export HF_HUB_ENABLE_HF_TRANSFER=1
+ huggingface-cli download --local-dir <LOCAL FOLDER PATH> mlx-community/clip-vit-large-patch14
+ ```
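+
+ If you prefer to stay in Python, the same download can be done with `huggingface_hub`'s `snapshot_download`; a minimal sketch (the `mlx_model` target folder is just an example, chosen to match the loading snippet below):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Fetch the converted MLX weights into a local folder.
+ snapshot_download(
+     repo_id="mlx-community/clip-vit-large-patch14",
+     local_dir="mlx_model",
+ )
+ ```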
+
+ Install `mlx-examples`.
+
+ ```bash
+ git clone git@github.com:ml-explore/mlx-examples.git
+ cd mlx-examples/clip
+ pip install -r requirements.txt
+ ```
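+
+ The weights in this repo are already converted, but the `clip` example also ships a `convert.py` script if you want to reproduce the conversion from the original OpenAI checkpoint. A sketch; the flag names here are assumptions, so check `python convert.py --help` for the exact interface:
+
+ ```bash
+ # Hypothetical invocation; verify flag names against convert.py --help.
+ python convert.py --hf-repo openai/clip-vit-large-patch14 --mlx-path mlx_model
+ ```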
+
+ Run the model.
+
+ ```python
+ from PIL import Image
+ import clip
+
+ # Load the converted model along with its tokenizer and image processor.
+ model, tokenizer, img_processor = clip.load("mlx_model")
+ inputs = {
+     "input_ids": tokenizer(["a photo of a cat", "a photo of a dog"]),
+     "pixel_values": img_processor(
+         [Image.open("assets/cat.jpeg"), Image.open("assets/dog.jpeg")]
+     ),
+ }
+ output = model(**inputs)
+
+ # Get text and image embeddings:
+ text_embeds = output.text_embeds
+ image_embeds = output.image_embeds
+ ```
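+
+ The embeddings are what you compare for retrieval or zero-shot classification. As a minimal follow-up sketch, cosine similarity between the two sets of embeddings from the snippet above (assuming they are MLX arrays, which `mlx.core` ops accept directly):
+
+ ```python
+ import mlx.core as mx
+
+ def cosine_similarity(a, b):
+     # L2-normalize each embedding, then take pairwise dot products.
+     a = a / mx.sqrt(mx.sum(a * a, axis=-1, keepdims=True))
+     b = b / mx.sqrt(mx.sum(b * b, axis=-1, keepdims=True))
+     return a @ b.T
+
+ # similarity[i, j] scores how well text prompt i matches image j.
+ similarity = cosine_similarity(text_embeds, image_embeds)
+ print(similarity)
+ ```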