---
dataset_info:
  features:
  - name: vclip_id
    dtype: string
  - name: question_id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: frame_indexes
    sequence: int64
  - name: choices
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
    - name: E
      dtype: string
  - name: video_metadata
    struct:
    - name: CLIP-reference-interval-clip
      sequence: float64
    - name: CLIP-reference-interval-video
      sequence: float64
    - name: bitrate
      dtype: int64
    - name: codec
      dtype: string
    - name: frame_dimensions
      sequence: int64
    - name: frame_dimensions_resized
      sequence: int64
    - name: frame_rate
      dtype: float64
    - name: resolution
      dtype: string
    - name: resolution_resized
      dtype: string
    - name: vclip_duration
      dtype: float64
    - name: vclip_frame_count
      dtype: int64
    - name: vclip_interval_in_video
      sequence: float64
    - name: video_duration
      dtype: float64
    - name: video_frame_count
      dtype: int64
    - name: video_id
      dtype: string
  splits:
  - name: train
    num_bytes: 5358616
    num_examples: 11218
  - name: test
    num_bytes: 1977870
    num_examples: 3874
  download_size: 2168577
  dataset_size: 7336486
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
<h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;">
LV-Haystack: Temporal Search for Long-Form Video Understanding</h1>
<p align='center' style="text-align:center;font-size:1.1em;">
<a href="https://jhuiye.com/" target="_blank">Jinhui Ye<sup>1</sup></a>,&nbsp;
<a href="https://zihanwang314.github.io/" target="_blank">Zihan Wang<sup>2</sup></a>,&nbsp;
<a href="https://haosensun.github.io/" target="_blank">Haosen Sun<sup>2</sup></a>,&nbsp;
<a href="https://keshik6.github.io/" target="_blank">Keshigeyan Chandrasegaran<sup>1</sup></a>,&nbsp; <br>
<a href="https://zanedurante.github.io/" target="_blank">Zane Durante<sup>1</sup></a>,&nbsp;
<a href="https://ceyzaguirre4.github.io/" target="_blank">Cristobal Eyzaguirre<sup>1</sup></a>,&nbsp;
<a href="https://talkingtorobots.com/yonatanbisk.html" target="_blank">Yonatan Bisk<sup>3</sup></a>,&nbsp;
<a href="https://www.niebles.net/" target="_blank">Juan Carlos Niebles<sup>1</sup></a>,&nbsp;
<a href="https://profiles.stanford.edu/ehsan-adeli" target="_blank">Ehsan Adeli<sup>1</sup></a>,&nbsp;<br>
<a href="https://profiles.stanford.edu/fei-fei-li/" target="_blank">Li Fei-Fei<sup>1</sup></a>,&nbsp;
<a href="https://jiajunwu.com/" target="_blank">Jiajun Wu<sup>1</sup></a>,&nbsp;
<a href="https://limanling.github.io/" target="_blank">Manling Li<sup>2</sup></a><br/>
&nbsp;Stanford University<sup>1</sup>, Northwestern University<sup>2</sup>, Carnegie Mellon University<sup>3</sup><br/>
<span style="text-decoration: none; color: gray">
Dataset is part of the <a href="">T* project</a>
</span>
<br/>
<a href="https://examplewebsite.com" title="Website" target="_blank" rel="nofollow" style="text-decoration: none;">🌎Website</a> |
<a href="https://examplecode.com" title="Dataset" target="_blank" rel="nofollow" style="text-decoration: none;">🧑‍💻Code</a> |
<a href="https://arxiv.org/examplepaper" title="aXiv" target="_blank" rel="nofollow" style="text-decoration: none;">📄arXiv</a> |
<a href="https://exampleleaderboard.com" title="Leaderboard" target="_blank" rel="nofollow" style="text-decoration: none;">🏆 Leaderboard (Coming Soon)</a><br>
</p>
<img src="assets/img/logo.png" alt="Logo" width="400" height="auto" style="display:block; margin:auto;" />
#### Dataset Sample
```python
{
    'vclip_id': '6338b73e-393f-4d37-b278-68703b45908c',
    'question_id': 10,
    'question': 'What nail did I pull out?',
    'answer': 'E',
    'frame_indexes': [5036, 5232],  # the keyframe indexes
    'choices': {
        'A': 'The nail from the front wheel fender',
        'B': 'The nail from the motorcycle battery compartment',
        'C': 'The nail from the left side of the motorcycle seat',
        'D': 'The nail from the rearview mirror mount',
        'E': 'The nail on the right side of the motorcycle exhaust pipe'
    },
    'video_metadata': {
        'CLIP-reference-interval-vclip': [180.0, 240.0],  # Time interval of the vclip considered important by CLIP; computed as CLIP-reference-interval-video - vclip_interval_in_video[0]
        'CLIP-reference-interval-video': [180.0, 240.0],  # Time interval of the video considered important by CLIP. Taken from the Ego4D dataset; used in our work to help annotators quickly locate the relevant part of the video.
        'vclip_interval_in_video': [0.0, 480.06667277018227],  # Start and end seconds of the vclip within the video, i.e. for [a, b] the vclip starts at second a and ends at second b of the video
        'frame_count': 14155,  # Total number of frames in the video
        'frame_rate': 30.0,  # Frame rate of the video
        'duration': 471.8333435058594,  # Duration of the valid, unbroken part of the video, in seconds
        'resolution': '454x256',  # Original resolution of the video
        'frame_dimensions': None,  # Frame dimensions (if available)
        'codec': 'N/A',  # Codec used for the video (if available)
        'bitrate': 0,  # Bitrate of the video (if available)
        'frame_dimensions_resized': [340, 256],  # Resized frame dimensions
        'resolution_resized': '340x256',  # Resized resolution
        'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991'  # Unique video identifier
    }
}
```
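For orientation, the keyframe indexes in `frame_indexes` can be mapped to timestamps using the `frame_rate` in `video_metadata`. The helper below is a minimal sketch and assumes the indexes are frame numbers at that frame rate:
```python
def keyframe_seconds(example):
    """Convert an example's keyframe indexes to timestamps in seconds (sketch)."""
    fps = example["video_metadata"]["frame_rate"]
    return [idx / fps for idx in example["frame_indexes"]]

# e.g. [5036, 5232] at 30.0 fps -> [167.87, 174.40] seconds
```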
#### Dataset exploration
TODO: add a hyperlink to the demo.
#### Dataset Usage
```python
from datasets import load_dataset
dataset = load_dataset("LVHaystack/LongVideoHaystack")
print(dataset)
```
```bash
>>> DatasetDict({
    train: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 11218
    })
    test: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 3874
    })
})
```
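Each example is a plain Python dict with the fields shown in the sample above, so the usual `datasets` operations apply. The snippet below is a small sketch (not part of the official tooling) that inspects one test example and gathers all questions for the same clip:
```python
from datasets import load_dataset

dataset = load_dataset("LVHaystack/LongVideoHaystack")

example = dataset["test"][0]
print(example["question"])
print(example["choices"], "->", example["answer"])
print("ground-truth keyframes:", example["frame_indexes"])

# All QA pairs that belong to the same video clip
clip_qas = dataset["test"].filter(lambda ex: ex["vclip_id"] == example["vclip_id"])
print(len(clip_qas), "questions for clip", example["vclip_id"])
```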
#### Video Source Download
TODO: We plan to provide a script for downloading the required subset of videos from [Ego4d](https://ego4d-data.org/).
For now, you can refer to their official CLI guide [here](https://github.com/facebookresearch/Ego4d/tree/main/ego4d/cli). Your code would look like the following:
```bash
pip install ego4d
ego4d --output_directory=your_path/videos/ \
      --datasets full_scale annotations \
      --metadata \
      --video_uid_file video_uids.txt
python process_videos_to_clips.py  # TODO
```
Please find [video_uid.txt](https://huggingface.co/datasets/LVHaystack/LongVideoHaystack/blob/main/video_uid.txt) in our repo, or generate it yourself:
```python
import datasets

metadata = datasets.load_dataset("LVHaystack/LongVideoHaystack-metadata")["metadata"]
with open("video_uids.txt", "w") as file:
    for video_id in list(set(metadata["video_id"])):
        file.write(video_id + " ")
```
Then, you need to cut the downloaded videos into the video clips used by the benchmark. The official script is still TODO; a minimal example is sketched below.
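Until that script is released, the sketch below shows one way this step could work, calling `ffmpeg` via `subprocess` and using the documented `vclip_interval_in_video`, `video_id`, and `vclip_id` fields. The directory layout, the `.mp4` extension, and the stream-copy settings are assumptions, not part of the official pipeline:
```python
import os
import subprocess

from datasets import load_dataset

dataset = load_dataset("LVHaystack/LongVideoHaystack")
os.makedirs("your_path/vclips", exist_ok=True)

seen = set()
for split in dataset.values():
    for example in split:
        vclip_id = example["vclip_id"]
        if vclip_id in seen:
            continue  # several QA pairs share one clip
        seen.add(vclip_id)
        meta = example["video_metadata"]
        start, end = meta["vclip_interval_in_video"]  # clip boundaries in seconds
        src = f"your_path/videos/{meta['video_id']}.mp4"
        dst = f"your_path/vclips/{vclip_id}.mp4"
        # Cut [start, end] out of the source video; stream copy avoids re-encoding.
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-ss", str(start), "-to", str(end),
             "-c", "copy", dst],
            check=True,
        )
```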
#### Dataset Statistics Summary
| **Metric** | **Total** | **Train** | **Test** |
|-------------------------------|--------------|-------------|-------------|
| **Video Statistics** | | | |
| Total Videos | **988** | **744** | **244** |
| Total Video Duration (hr) | 423.3 | 322.2 | 101.0 |
| Avg. Video Duration (min) | 25.7 | 26.0 | 24.8 |
| **Clip Statistics** | | | |
| Total Video Clips | **1,324** | **996** | **328** |
| Total Video Clip Duration (hr) | 180.4 | 135.3 | 45.0 |
| Avg. Video Clip Duration (min) | 8.2 | 8.2 | 8.2 |
| **Frame Statistics** | | | |
| Total Frames (k) | **45,700** | **34,800** | **10,900** |
| Avg. Frames per Video (k) | 46.3 | 46.8 | 44.7 |
| Ratio of Keyframe / Frame (‰) | 0.62 | 0.59 | 0.71 |
| **QA Statistics** | | | |
| Total QA Pairs | **15,092** | **11,218** | **3,874** |
| Avg. QA Pair per Video | 15.3 | 15.1 | 15.9 |
| Avg. QA Pair per Clip | 11.4 | 11.3 | 11.8 |
| Avg. Keyframes per Question | 1.88 | 1.84 | 2.01 |
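Most of the QA-level figures above can be re-derived directly from the annotations. The snippet below is a small sanity check (the duration and total-frame figures would additionally need the per-video metadata, which is omitted here):
```python
from datasets import load_dataset

dataset = load_dataset("LVHaystack/LongVideoHaystack")
for split in ["train", "test"]:
    ds = dataset[split]
    videos = {ex["video_metadata"]["video_id"] for ex in ds}
    clips = set(ds["vclip_id"])
    keyframes = sum(len(ex["frame_indexes"]) for ex in ds)
    print(f"{split}: {len(videos)} videos, {len(clips)} clips, "
          f"{len(ds)} QA pairs, {keyframes / len(ds):.2f} keyframes/question")
```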
#### Evaluation scripts
Please refer to [./eval.py](https://huggingface.co/datasets/LVHaystack/LongVideoHaystack/blob/main/eval.py).
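The official metrics are implemented in `eval.py`. Purely as an illustration of how predictions line up with the annotations (this is not the official protocol), a recall-style check of predicted frames against the ground-truth `frame_indexes` could look like the following, where the tolerance window is an assumption:
```python
def keyframe_recall(pred_frames, gt_frames, tolerance=5):
    """Fraction of ground-truth keyframes with a predicted frame within
    `tolerance` frames. Illustrative only; see eval.py for the official metrics."""
    if not gt_frames:
        return 0.0
    hits = sum(any(abs(p - g) <= tolerance for p in pred_frames) for g in gt_frames)
    return hits / len(gt_frames)

print(keyframe_recall([5034, 5230], [5036, 5232]))  # 1.0
```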
#### Contact
- Jinhui Ye: [email protected]
- Zihan Wang: [email protected] (datasets)
- Haosen Sun: [email protected]
- Keshigeyan Chandrasegaran: [email protected]
- Manling Li: [email protected]
#### Citation
```bibtex
@misc{tstar,
      title={Re-thinking Temporal Search for Long-Form Video Understanding},
      author={Jinhui Ye and Zihan Wang and Haosen Sun and Keshigeyan Chandrasegaran and Zane Durante and Cristobal Eyzaguirre and Yonatan Bisk and Juan Carlos Niebles and Ehsan Adeli and Li Fei-Fei and Jiajun Wu and Manling Li},
      year={2025},
      eprint={2501.TODO},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
Website template borrowed from [HourVideo](https://huggingface.co/datasets/HourVideo/HourVideo).