zwq2018 committed · verified
Commit 0013a31 · 1 Parent(s): ceb3bd3

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -86,7 +86,7 @@ Then we extract 6.5M keyframes and 0.75B text (ASR+OCR) tokens from these videos
  ## Using Multimodal Textbook
  ### Description of Dataset
  We provide the annotation file (json file) and corresponding images folder for textbook:
- - Dataset json-file: `./multimodal_textbook.json` (600k samples ~ 11GB)
+ - Dataset json-file: `./multimodal_textbook.json` (600k samples ~ 11GB) and `multimodal_textbook_face_v1_th0.04.json`
  - Dataset image_folder: `./dataset_images_interval_7.tar.gz` (6.5M image ~ 600GB) (**Due to its large size, we split it into 20 sub-files as `dataset_images_interval_7.tar.gz.part_00, dataset_images_interval_7.tar.gz.part_01, ...`**)
  - Videometa_data: `video_meta_data/video_meta_data1.json` and `video_meta_data/video_meta_data2.json` contains the meta information of the collected videos, including video vid, title, description, duration, language, and searched knowledge points. Besides, we also provide `multimodal_textbook_meta_data.json.zip` records the textbook in its video format, not in the OBELICS format.
 
@@ -114,7 +114,7 @@ This means that this image is extracted from the video (`-1uixJ1V-As`). It is th
 
 
  ### Learning about annotation file
- The format of each sample in `multimodal_textbook.json` is as follows, that is, images and texts are interleaved:
+ The format of each sample in `multimodal_textbook_face_v1_th0.04.json` is as follows, that is, images and texts are interleaved:
 
  ```
  "images": [