RaushanTurganbay (HF staff) committed
Commit bdb8c4a · verified · 1 parent: fc7f5bf

Add example for vectorized chat template

Files changed (1): README.md (+18 −0)
README.md CHANGED
@@ -103,6 +103,24 @@ output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
 print(processor.decode(output[0][2:], skip_special_tokens=True))
 ```
 
+From transformers>=v4.48, you can also pass an image URL or a local path in the conversation history and let the chat template handle the rest.
+The chat template will load the image for you and return the inputs as `torch.Tensor`, which you can pass directly to `model.generate()`.
+
+```python
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+            {"type": "text", "text": "What is shown in this image?"},
+        ],
+    },
+]
+
+inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
+output = model.generate(**inputs, max_new_tokens=50)
+```
+
 ### Model optimization
 
 #### 4-bit quantization through `bitsandbytes` library
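
The added example stops at `model.generate()`; decoding follows the same pattern as the existing snippet above. A minimal sketch of that follow-up step, assuming the `processor`, `inputs`, and `output` objects from the added example (slicing off the prompt by `input_ids` length is an assumption, not part of this commit):

```python
# Decode only the newly generated tokens by skipping the prompt portion.
# (Prompt-length slicing is an assumption; the earlier README example hard-codes [2:].)
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```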