---
dataset_info:
  features:
  - name: vclip_id
    dtype: string
  - name: question_id
    dtype: int32
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: frame_indexes
    sequence: int32
  - name: choices
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
    - name: E
      dtype: string
  - name: video_metadata
    struct:
    - name: CLIP-reference-interval
      sequence: float64
    - name: bitrate
      dtype: int64
    - name: codec
      dtype: string
    - name: frame_dimensions
      sequence: int64
    - name: frame_dimensions_resized
      sequence: int64
    - name: frame_rate
      dtype: float64
    - name: resolution
      dtype: string
    - name: resolution_resized
      dtype: string
    - name: vclip_duration
      dtype: float64
    - name: vclip_frame_count
      dtype: int64
    - name: video_duration
      dtype: float64
    - name: video_frame_count
      dtype: int64
    - name: video_id
      dtype: string
  splits:
  - name: train
    num_bytes: 4782472
    num_examples: 11218
  - name: test
    num_bytes: 1776278
    num_examples: 3874
  download_size: 1999818
  dataset_size: 6558750
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# LV-Haystack: Temporal Search for Long-Form Video Understanding

Jinhui Ye¹, Zihan Wang², Haosen Sun², Keshigeyan Chandrasegaran¹, Zane Durante¹, Cristobal Eyzaguirre¹, Yonatan Bisk³, Juan Carlos Niebles¹, Ehsan Adeli¹, Li Fei-Fei¹, Jiajun Wu¹, Manling Li²

¹Stanford University, ²Northwestern University, ³Carnegie Mellon University

This dataset is part of the T* project.

🌎 Website | 🧑‍💻 Code | 📄 arXiv | 🏆 Leaderboard (Coming Soon)

## Dataset Sample

```python
{
    'vclip_id': '6338b73e-393f-4d37-b278-68703b45908c',
    'question_id': 10,
    'question': 'What nail did I pull out?',
    'answer': 'E',
    'frame_indexes': [5036, 5232],  # indexes of the ground-truth keyframes
    'choices': {
        'A': 'The nail from the front wheel fender',
        'B': 'The nail from the motorcycle battery compartment',
        'C': 'The nail from the left side of the motorcycle seat',
        'D': 'The nail from the rearview mirror mount',
        'E': 'The nail on the right side of the motorcycle exhaust pipe'
    },
    'video_metadata': {
        'CLIP-reference-interval': [180.0, 240.0],  # Time interval (in seconds) marked as relevant; taken from the original Ego4D annotations and used to help annotators quickly locate the segment in the video
        'frame_count': 14155,                       # Total number of frames in the video
        'frame_rate': 30.0,                         # Frame rate of the video
        'duration': 471.8333435058594,              # Duration of the video in seconds
        'resolution': '454x256',                    # Original resolution of the video
        'frame_dimensions': None,                   # Original frame dimensions (if available)
        'codec': 'N/A',                             # Video codec (if available)
        'bitrate': 0,                               # Bitrate of the video (if available)
        'frame_dimensions_resized': [340, 256],     # Resized frame dimensions
        'resolution_resized': '340x256',            # Resized resolution
        'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991'  # Unique video identifier
    }
}
```
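As a rough illustration (not part of the official tooling), keyframe indexes can be converted to approximate timestamps by dividing by the clip's frame rate. The sketch below assumes `frame_indexes` are 0-based frame positions sampled at `video_metadata['frame_rate']`; the helper name is ours.

```python
# Illustrative sketch: map keyframe indexes to approximate timestamps in seconds.
# Assumption: frame_indexes are 0-based positions at video_metadata['frame_rate'].
def keyframe_timestamps(sample):
    fps = sample['video_metadata']['frame_rate']
    return [idx / fps for idx in sample['frame_indexes']]

# For the sample above: [5036 / 30.0, 5232 / 30.0] -> roughly [167.9, 174.4] seconds.
```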
## Dataset Exploration

TODO: add a hyperlink to the demo.
## Dataset Usage

```python
from datasets import load_dataset

dataset = load_dataset("LVHaystack/LongVideoHaystack")
print(dataset)
```

```text
DatasetDict({
    train: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 11218
    })
    test: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 3874
    })
})
```
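As a minimal sketch of downstream use (not part of the released tooling), the QA pairs can be grouped by the clip they belong to; only the field names come from the schema above, everything else is illustrative.

```python
from collections import defaultdict

from datasets import load_dataset

dataset = load_dataset("LVHaystack/LongVideoHaystack")

# Group test-split questions by the video clip they belong to (illustrative only).
questions_by_clip = defaultdict(list)
for sample in dataset["test"]:
    questions_by_clip[sample["vclip_id"]].append(sample)

clip_id, samples = next(iter(questions_by_clip.items()))
print(f"{clip_id}: {len(samples)} questions; "
      f"first question: {samples[0]['question']!r}; "
      f"ground-truth keyframes: {samples[0]['frame_indexes']}")
```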
Video Source Download
TODO: We plan to provide a script of how to download a subset from Ego4d. Assume your video will be downloaded to your_path/videos/ .
pip install ego4d
ego4d --output_directory=your_path/videos/ \
--datasets full_scale annotations \
--metadata \
--video_uid_file video_uids.txt
python process_videos_to_clips.py
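The `video_uids.txt` file referenced above can be derived from the dataset itself; here is a minimal sketch, assuming `video_metadata['video_id']` is the Ego4D video UID.

```python
from datasets import load_dataset

dataset = load_dataset("LVHaystack/LongVideoHaystack")

# Collect the unique video UIDs referenced by the train and test splits
# (assumption: video_metadata['video_id'] matches the Ego4D video UID).
video_uids = sorted({
    sample["video_metadata"]["video_id"]
    for split in ("train", "test")
    for sample in dataset[split]
})

with open("video_uids.txt", "w") as f:
    f.write("\n".join(video_uids))

print(f"Wrote {len(video_uids)} video UIDs to video_uids.txt")
```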
## Dataset Statistics Summary

| Metric | Total | Train | Test |
|---|---|---|---|
| **Video Statistics** | | | |
| Total Videos | 988 | 744 | 244 |
| Total Video Duration (hr) | 423.3 | 322.2 | 101.0 |
| Avg. Video Duration (min) | 25.7 | 26.0 | 24.8 |
| **Clip Statistics** | | | |
| Total Video Clips | 1,324 | 996 | 328 |
| Total Video Clip Duration (hr) | 180.4 | 135.3 | 45.0 |
| Avg. Video Clip Duration (min) | 8.2 | 8.2 | 8.2 |
| **Frame Statistics** | | | |
| Total Frames (k) | 45,700 | 34,800 | 10,900 |
| Avg. Frames per Video (k) | 46.3 | 46.8 | 44.7 |
| Ratio of Keyframes to Frames (‰) | 0.62 | 0.59 | 0.71 |
| **QA Statistics** | | | |
| Total QA Pairs | 15,092 | 11,218 | 3,874 |
| Avg. QA Pairs per Video | 15.3 | 15.1 | 15.9 |
| Avg. QA Pairs per Clip | 11.4 | 11.3 | 11.8 |
| Avg. Keyframes per Question | 1.88 | 1.84 | 2.01 |
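Some of the QA-level numbers above can be sanity-checked directly from the loaded dataset; the sketch below is illustrative and not part of the released tooling.

```python
from datasets import load_dataset

dataset = load_dataset("LVHaystack/LongVideoHaystack")

for split in ("train", "test"):
    ds = dataset[split]
    total_qa = ds.num_rows
    avg_keyframes = sum(len(frames) for frames in ds["frame_indexes"]) / total_qa
    print(f"{split}: {total_qa} QA pairs, "
          f"{avg_keyframes:.2f} keyframes per question on average")
```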
## Evaluation Scripts

Please refer to `./eval.py`.
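For intuition only, temporal-search methods on this benchmark are typically judged by how well their selected frames cover the annotated keyframes. The sketch below shows one such recall-style measure with a frame-distance tolerance; it is not the official metric (use `./eval.py` for reported numbers), and the function name and tolerance are our own.

```python
def keyframe_recall(predicted_frames, gt_keyframes, tolerance=5):
    """Fraction of ground-truth keyframes with a predicted frame within `tolerance` frames.

    Illustrative only; the official evaluation lives in ./eval.py.
    """
    if not gt_keyframes:
        return 1.0
    hits = sum(
        any(abs(pred - gt) <= tolerance for pred in predicted_frames)
        for gt in gt_keyframes
    )
    return hits / len(gt_keyframes)

# Example with the sample above (ground-truth keyframes [5036, 5232]):
print(keyframe_recall([5035, 5100, 5230], [5036, 5232]))  # -> 1.0
```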
## Contact
- Jinhui Ye: [email protected]
- Zihan Wang: [email protected] (datasets)
- Haosen Sun: [email protected]
- Keshigeyan Chandrasegaran: [email protected]
- Manling Li: [email protected]
## Citation

```bibtex
@misc{tstar,
      title={Re-thinking Temporal Search for Long-Form Video Understanding},
      author={Jinhui Ye and Zihan Wang and Haosen Sun and Keshigeyan Chandrasegaran and Zane Durante and Cristobal Eyzaguirre and Yonatan Bisk and Juan Carlos Niebles and Ehsan Adeli and Li Fei-Fei and Jiajun Wu and Manling Li},
      year={2025},
      eprint={2501.TODO},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
Website template borrowed from HourVideo.