lixinhao committed
Commit bb71951 · verified · 1 Parent(s): 4a1da20

Update README.md

Files changed (1):
  1. README.md +201 -15
README.md CHANGED
@@ -8,19 +8,76 @@ metrics:
 tags:
 - multimodal
 pipeline_tag: video-text-to-text
+model-index:
+- name: InternVL2.5_HiCo_R16
+  results:
+  - task:
+      type: multimodal
+    dataset:
+      name: MLVU
+      type: mlvu
+    metrics:
+    - type: accuracy
+      value: 71.5
+      name: accuracy
+      verified: true
+  - task:
+      type: multimodal
+    dataset:
+      name: MVBench
+      type: mvbench
+    metrics:
+    - type: accuracy
+      value: 74.0
+      name: accuracy
+      verified: true
+  - task:
+      type: multimodal
+    dataset:
+      name: Perception Test
+      type: percepTest
+    metrics:
+    - type: accuracy
+      value: 71.4
+      name: accuracy
+      verified: true
+  - task:
+      type: multimodal
+    dataset:
+      name: LongVideoBench
+      type: longvideobench
+    metrics:
+    - type: accuracy
+      value: 59.6
+      name: accuracy
+      verified: true
+  - task:
+      type: multimodal
+    dataset:
+      name: VideoMME (w/o sub)
+      type: videomme
+    metrics:
+    - type: accuracy
+      value: 64.9
+      name: accuracy
+      verified: true
+
 ---
 
-# πŸ“•InternVL_2_5_HiCo_R16 ⚑
+# πŸ“•InternVL2.5_HiCo_R16⚑
 <!-- [\[πŸ“° Blog\]](https://internvideo.github.io/blog/2024-12-31-VideoChat-Flash) -->
 [\[πŸ“‚ GitHub\]](https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2.5)
 [\[πŸ“œ Tech Report\]](https://arxiv.org/abs/2501.12386)
 <!-- [\[πŸ—¨οΈ Chat Demo\]](https://huggingface.co/spaces/OpenGVLab/VideoChat-Flash) -->
 
-
+InternVideo2.5 is a video multimodal large language model (MLLM) built upon InternVL2.5 and enhanced with **long and rich context (LRC) modeling**. It significantly improves over existing MLLMs at perceiving fine-grained details and capturing long-form temporal structure. This is achieved through dense vision task annotations via task preference optimization (TPO) and compact spatiotemporal representations via adaptive hierarchical token compression (HiCo). This checkpoint is an ablation variant of InternVideo2.5 that uses HiCo only (R16 means 16 tokens per frame).
+
+
+
 ## πŸ“ˆ Performance
 | Model | MVBench | LongVideoBench | VideoMME (w/o sub) |
 | --- | --- | --- | --- |
-| InternVL_2_5_HiCo_R16 | - | - | - |
+| InternVL2.5_HiCo_R16 | 74.0 | 59.6 | 64.9 |
 
 ## πŸš€ How to use the model
 
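What R16 means for the context budget: at 16 visual tokens per frame, token cost grows linearly with the number of sampled frames. A back-of-the-envelope sketch (the per-frame figure comes from the model name; the frame counts are the `num_segments = 128` default and the 512-frame ceiling that appear in the snippet below):

```python
# Rough visual-token budget for HiCo R16: tokens scale linearly with frames.
TOKENS_PER_FRAME = 16  # the "R16" in the model name

for num_frames in (128, 512):  # num_segments default and max frame cap from the snippet
    print(f"{num_frames} frames -> {num_frames * TOKENS_PER_FRAME} visual tokens")
# 128 frames -> 2048 visual tokens
# 512 frames -> 8192 visual tokens
```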
@@ -35,16 +92,130 @@ pip install flash-attn --no-build-isolation
 ```
 Then you can use our model:
 ```python
+import numpy as np
+import torch
+import torchvision.transforms as T
+from decord import VideoReader, cpu
+from PIL import Image
+from torchvision.transforms.functional import InterpolationMode
 from transformers import AutoModel, AutoTokenizer
 
+
 # model setting
 model_path = 'OpenGVLab/InternVL_2_5_HiCo_R16'
 
 tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
 model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
-image_processor = model.get_vision_tower().image_processor
+
+IMAGENET_MEAN = (0.485, 0.456, 0.406)
+IMAGENET_STD = (0.229, 0.224, 0.225)
+
+
+def build_transform(input_size):
+    # convert to RGB, resize to a square tile, and normalize with ImageNet statistics
+    transform = T.Compose([
+        T.Lambda(lambda img: img.convert("RGB") if img.mode != "RGB" else img),
+        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
+        T.ToTensor(),
+        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
+    ])
+    return transform
+
+
+def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
+    # choose the tiling grid whose aspect ratio best matches the input image
+    best_ratio_diff = float("inf")
+    best_ratio = (1, 1)
+    area = width * height
+    for ratio in target_ratios:
+        target_aspect_ratio = ratio[0] / ratio[1]
+        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
+        if ratio_diff < best_ratio_diff:
+            best_ratio_diff = ratio_diff
+            best_ratio = ratio
+        elif ratio_diff == best_ratio_diff:
+            # on ties, prefer more tiles when the image is large enough
+            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
+                best_ratio = ratio
+    return best_ratio
+
+
+def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
+    orig_width, orig_height = image.size
+    aspect_ratio = orig_width / orig_height
+
+    # enumerate candidate tiling grids with min_num..max_num tiles
+    target_ratios = set(
+        (i, j)
+        for n in range(min_num, max_num + 1)
+        for i in range(1, n + 1)
+        for j in range(1, n + 1)
+        if i * j <= max_num and i * j >= min_num
+    )
+    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
+
+    # find the closest aspect ratio to the target
+    target_aspect_ratio = find_closest_aspect_ratio(aspect_ratio, target_ratios, orig_width, orig_height, image_size)
+
+    # calculate the target width and height
+    target_width = image_size * target_aspect_ratio[0]
+    target_height = image_size * target_aspect_ratio[1]
+    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
+
+    # resize the image, then crop it into image_size x image_size tiles
+    resized_img = image.resize((target_width, target_height))
+    processed_images = []
+    cols = target_width // image_size
+    for i in range(blocks):
+        box = (
+            (i % cols) * image_size,
+            (i // cols) * image_size,
+            ((i % cols) + 1) * image_size,
+            ((i // cols) + 1) * image_size,
+        )
+        # split the image
+        split_img = resized_img.crop(box)
+        processed_images.append(split_img)
+    assert len(processed_images) == blocks
+    if use_thumbnail and len(processed_images) != 1:
+        # append a full-image thumbnail as global context
+        thumbnail_img = image.resize((image_size, image_size))
+        processed_images.append(thumbnail_img)
+    return processed_images
 
 
+def load_image(image, input_size=448, max_num=6):
+    transform = build_transform(input_size=input_size)
+    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
+    pixel_values = [transform(image) for image in images]
+    pixel_values = torch.stack(pixel_values)
+    return pixel_values
+
+
+def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
+    # sample the temporal midpoint of each of num_segments equal bins within [start, end]
+    if bound:
+        start, end = bound[0], bound[1]
+    else:
+        start, end = -100000, 100000
+    start_idx = max(first_idx, round(start * fps))
+    end_idx = min(round(end * fps), max_frame)
+    seg_size = float(end_idx - start_idx) / num_segments
+    frame_indices = np.array([int(start_idx + (seg_size / 2) + np.round(seg_size * idx)) for idx in range(num_segments)])
+    return frame_indices
+
+
+def get_num_frames_by_duration(duration):
+    # about one frame per second, in multiples of 4, clamped to [128, 512]
+    local_num_frames = 4
+    num_segments = int(duration // local_num_frames)
+    if num_segments == 0:
+        num_frames = local_num_frames
+    else:
+        num_frames = local_num_frames * num_segments
+    num_frames = min(512, num_frames)
+    num_frames = max(128, num_frames)
+    return num_frames
+
+
+def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32, get_frame_by_duration=False):
+    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
+    max_frame = len(vr) - 1
+    fps = float(vr.get_avg_fps())
+
+    pixel_values_list, num_patches_list = [], []
+    transform = build_transform(input_size=input_size)
+    if get_frame_by_duration:
+        duration = max_frame / fps
+        num_segments = get_num_frames_by_duration(duration)
+    frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
+    for frame_index in frame_indices:
+        img = Image.fromarray(vr[frame_index].asnumpy()).convert("RGB")
+        img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
+        pixel_values = [transform(tile) for tile in img]
+        pixel_values = torch.stack(pixel_values)
+        num_patches_list.append(pixel_values.shape[0])
+        pixel_values_list.append(pixel_values)
+    pixel_values = torch.cat(pixel_values_list)
+    return pixel_values, num_patches_list
+
 # evaluation setting
 max_num_frames = 512
 generation_config = dict(
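A note on the sampling logic above: `get_index` returns the temporal midpoint of each of `num_segments` equal-width bins, so frames are spread uniformly across the clip rather than clustered. A standalone check with hypothetical numbers (a 10 s clip at 30 fps, frame indices 0..300, 4 segments):

```python
import numpy as np

# Midpoint-of-bin sampling, as in get_index above (illustrative values only).
fps, max_frame, num_segments = 30.0, 300, 4
start_idx, end_idx = 0, max_frame
seg_size = float(end_idx - start_idx) / num_segments  # 75 frames per bin
indices = np.array([int(start_idx + seg_size / 2 + np.round(seg_size * i)) for i in range(num_segments)])
print(indices)  # [ 37 112 187 262] -> the middle frame of each quarter of the clip
```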
 
@@ -54,20 +225,26 @@ generation_config = dict(
     top_p=0.1,
     num_beams=1
 )
-
 video_path = "your_video.mp4"
+num_segments = 128
 
 
-# single-turn conversation
-question1 = "Describe this video in detail."
-output1, chat_history = model.chat(video_path=video_path, tokenizer=tokenizer, user_prompt=question1, return_history=True, max_num_frames=max_num_frames, generation_config=generation_config)
-
-print(output1)
 
-# multi-turn conversation
-question2 = "How many people appear in the video?"
-output2, chat_history = model.chat(video_path=video_path, tokenizer=tokenizer, user_prompt=question2, chat_history=chat_history, return_history=True, max_num_frames=max_num_frames, generation_config=generation_config)
-
-print(output2)
+with torch.no_grad():
+    pixel_values, num_patches_list = load_video(video_path, num_segments=num_segments, max_num=1, get_frame_by_duration=False)
+    # cast inputs to float16 to match the model weights loaded with .half()
+    pixel_values = pixel_values.to(torch.float16).to(model.device)
+    video_prefix = "".join([f"Frame{i+1}: <image>\n" for i in range(len(num_patches_list))])
+
+    # single-turn conversation
+    question1 = "Describe this video in detail."
+    question = video_prefix + question1
+    output1, chat_history = model.chat(tokenizer, pixel_values, question, generation_config, num_patches_list=num_patches_list, history=None, return_history=True)
+    print(output1)
+
+    # multi-turn conversation: reuse the returned chat_history; the frame prefix is not repeated
+    question2 = "How many people appear in the video?"
+    output2, chat_history = model.chat(tokenizer, pixel_values, question2, generation_config, num_patches_list=num_patches_list, history=chat_history, return_history=True)
+    print(output2)
 ```
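Because `model.chat` returns the accumulated `chat_history`, further turns follow the same pattern as the second one. A sketch of a hypothetical third turn (the question text is illustrative; the call signature mirrors the snippet above):

```python
# Hypothetical third turn: keep threading chat_history through model.chat.
with torch.no_grad():
    question3 = "What happens at the end of the video?"  # illustrative prompt
    output3, chat_history = model.chat(tokenizer, pixel_values, question3, generation_config,
                                       num_patches_list=num_patches_list, history=chat_history,
                                       return_history=True)
    print(output3)
```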
 
  ## ✏️ Citation
 
@@ -80,4 +257,13 @@ print(output2)
   journal={arXiv preprint arXiv:2501.12386},
   year={2025}
 }
+
+
+@article{li2024videochatflash,
+  title={VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling},
+  author={Li, Xinhao and Wang, Yi and Yu, Jiashuo and Zeng, Xiangyu and Zhu, Yuhan and Huang, Haian and Gao, Jianfei and Li, Kunchang and He, Yinan and Wang, Chenting and others},
+  journal={arXiv preprint arXiv:2501.00574},
+  year={2024}
+}
+
 ```