RaushanTurganbay (HF staff) committed on
Commit ecbd26e · verified · 1 parent: 9cfb0da

update for chat template

Files changed (1): README.md +19 -0
README.md CHANGED
@@ -127,6 +127,25 @@ output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
  print(processor.decode(output[0][2:], skip_special_tokens=True))
  ```

+ -----------
+ From transformers>=v4.48, you can also pass an image/video URL or a local path in the conversation history and let the chat template handle the rest.
+ The chat template will load the image for you and return the inputs as `torch.Tensor`s, which you can pass directly to `model.generate()`.
+
+ ```python
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+             {"type": "text", "text": "What is shown in this image?"},
+         ],
+     },
+ ]
+
+ inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
+ output = model.generate(**inputs, max_new_tokens=50)
+ ```
+
  ### Model optimization

  #### 4-bit quantization through `bitsandbytes` library
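Note that, unlike the earlier snippet, the added example stops at `model.generate()` without decoding. The sequence returned by `generate` includes the prompt tokens, so a common pattern is to slice them off before calling `processor.decode`. A minimal sketch of that slicing with made-up token ids (no model download; the ids here are purely illustrative stand-ins for `inputs["input_ids"][0]` and `output[0]`):

```python
# Hypothetical token ids standing in for the real tensors:
# prompt_ids ~ inputs["input_ids"][0], output_ids ~ output[0].
prompt_ids = [101, 7592, 2088]
output_ids = prompt_ids + [2003, 2009, 102]  # generate() returns prompt + new tokens

# Keep only the newly generated tokens before decoding.
new_tokens = output_ids[len(prompt_ids):]
print(new_tokens)  # → [2003, 2009, 102]
```

With real tensors the same idea reads `output[0][inputs["input_ids"].shape[1]:]`, which avoids the hardcoded `output[0][2:]` offset used in the snippet above.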