Dataset metadata — Modalities: Text · Formats: json · Languages: English · Size: < 1K · Libraries: Datasets, pandas · License:
Commit 2578c89 by jerryli1019 · 1 parent: b30dd27

Update README.md

Files changed (1): README.md (+9 −48)
README.md CHANGED
@@ -19,7 +19,8 @@ size_categories:
 
 ### Dataset Summary
 
-The YTTB-VQA Dataset is a collection of high-resolution YouTube thumbnail images across thirteen categories.
+The YTTB-VQA Dataset is a collection of 100 YouTube thumbnail question-answer pairs used to evaluate visual perception abilities on text-rich images. It covers 13
+categories, including technology, sports, entertainment, movies, music, food, and history.
 
 ### Supported Tasks and Leaderboards
 
@@ -40,8 +41,8 @@ A data instance in this dataset represents entries from a collection augmented b
 **video_id:** a unique string representing a specific YouTube thumbnail image.<br>
 **question:** representing a human-generated question.<br>
 **video_classes:** representing a specific category for the YouTube thumbnail image.<br>
-**answers:** This represents a ground truth answer for the question made about the YouTube thumbnail image.<br>
-
+**answers:** a ground-truth answer to the question asked about the YouTube thumbnail image.<br>
+**video_link:** the URL of each YouTube video.
 
 ### Data Splits
 
@@ -49,65 +50,25 @@ The data are unsplit.
 
 ## Dataset Creation
 
-### Curation Rationale
-
-[More Information Needed]
-
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
+We randomly selected YouTube videos with text-rich thumbnails from different categories during data collection.
+We recorded the unique video ID for each YouTube video and obtained the high-resolution thumbnail from the
+URL "http://img.youtube.com/vi/<YouTube-Video-ID>/maxresdefault.jpg".
 
 ### Annotations
 
 #### Annotation process
 
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
+We created the annotation file in JSON format with the fields "video_id", "question", "video_classes", "answers", and "video_link".
 
 ## Considerations for Using the Data
 
-### Social Impact of Dataset
-
-[More Information Needed]
-
 ### Discussion of Biases
 
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed]
-
-### Licensing Information
-
-[More Information Needed]
-
-### Citation Information
-
-[More Information Needed]
-
-### Contributions
-
-[More Information Needed]
+Although our dataset spans 13 categories, the proportion of each category varies. For example, 18% of the dataset pertains to sports, while only 3% is dedicated to movies.
 
 ### Acknowledgments
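The collection and annotation steps described in this commit can be sketched in Python. This is a minimal illustration, not code from the dataset repo: the thumbnail URL pattern and the five field names come from the README, while the helper name `thumbnail_url` and all record values (video ID, question, answer) are placeholders we invented.

```python
import json

def thumbnail_url(video_id: str) -> str:
    """Build the high-resolution thumbnail URL for a YouTube video ID,
    following the pattern given in the README."""
    return f"http://img.youtube.com/vi/{video_id}/maxresdefault.jpg"

# Hypothetical annotation record using the fields listed in the README:
# video_id, question, video_classes, answers, video_link.
record = {
    "video_id": "abc123",  # placeholder ID, not an entry from the dataset
    "question": "What is the title shown on the thumbnail?",
    "video_classes": "music",
    "answers": "Example Title",
    "video_link": "https://www.youtube.com/watch?v=abc123",
}

# The annotation file is a JSON serialization of such records.
serialized = json.dumps(record)
print(thumbnail_url(record["video_id"]))
```

The thumbnail itself could then be fetched from that URL with any HTTP client; note that `maxresdefault.jpg` is only available for videos that have a high-resolution thumbnail.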