kashif (HF staff) committed
Commit
cdc9683
1 Parent(s): 250d5be

Update README.md

Files changed (1)
  1. README.md +47 -27
README.md CHANGED
@@ -37,38 +37,58 @@ SmolVLM is a compact open multimodal model that accepts arbitrary sequences of i
 
 SmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. The model does not support image generation.
 
- ### Direct Use
-
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
 
 ## How to Get Started with the Model
 
 Use the code below to get started with the model.
 
- [More Information Needed]
+ ```py
+ import torch
+ from PIL import Image
+ from transformers import AutoProcessor, AutoModelForVision2Seq
+ from transformers.image_utils import load_image
+
+ DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
+
+ # Load images
+ image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
+ image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")
+
+ # Initialize the processor and model, and load the PEFT adapter
+ processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
+ model = AutoModelForVision2Seq.from_pretrained(
+     "HuggingFaceTB/SmolVLM-Instruct",
+     torch_dtype=torch.bfloat16,
+     _attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
+ ).to(DEVICE)
+ model.load_adapter("HuggingFaceTB/SmolVLM-Instruct-DPO")
+
+ # Create input messages
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image"},
+             {"type": "image"},
+             {"type": "text", "text": "Can you describe the two images?"}
+         ]
+     },
+ ]
+
+ # Prepare inputs
+ prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
+ inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
+ inputs = inputs.to(DEVICE)
+
+ # Generate outputs
+ generated_ids = model.generate(**inputs, max_new_tokens=500)
+ generated_texts = processor.batch_decode(
+     generated_ids,
+     skip_special_tokens=True,
+ )
+
+ print(generated_texts[0])
+ ```
 
 ## Training Details
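
The `model.load_adapter(...)` call in the added snippet relies on the PEFT integration that ships with recent `transformers` releases. As a rough sketch of the same step done through the `peft` library directly, assuming `HuggingFaceTB/SmolVLM-Instruct-DPO` is published as a standard LoRA-style PEFT adapter repository:

```py
# Minimal sketch, not taken from the README diff above: attach the DPO adapter
# with the peft library instead of transformers' built-in load_adapter helper.
# Assumes HuggingFaceTB/SmolVLM-Instruct-DPO contains a standard LoRA-style PEFT adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForVision2Seq

base = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct",
    torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base, "HuggingFaceTB/SmolVLM-Instruct-DPO")

# For LoRA adapters, the adapter weights can optionally be merged into the base
# model so inference runs without the PEFT wrapper.
model = model.merge_and_unload()
```

Either way, the resulting `model` can be used with the `processor`, chat-template, and `generate` calls shown in the snippet above.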