Update README.md
README.md CHANGED
@@ -39,13 +39,48 @@ configs:
  - split: test
    path: data/test-*
---
-# Dataset card for
-This is the dataset

First, get the annotations from the hub like this:
```
@@ -54,28 +89,24 @@ repo_name = "viddiff/VidDiffBench"
dataset = load_dataset(repo_name)
```

-To get the data loading scripts run:
```
GIT_LFS_SKIP_SMUDGE=1 git clone [email protected]:datasets/viddiff/VidDiffBench data/
```
-Which puts some `.py` files into the folder `data/`, and skips downloading larger data files.

-## Get the data - videos
-**some bold text**

-We get videos from prior works (which should be cited if you use the benchmark - see the last section). The source dataset is in the dataset column `source_dataset`.

A few datasets let us redistribute videos, so you can download them from this HF repo like this:
```
python data/download_data.py
```

-If you ONLY need 'easy' split, you can stop here. The videos includes the source datasets [Humann](https://caizhongang.com/projects/HuMMan/) (and 'easy' only draws from this data) and [JIGSAWS](https://cirl.lcsr.jhu.edu/research/hmm/datasets/jigsaws_release/).

-For 'medium' and 'hard' splits, you'll need to download these other datasets from the

*Download EgoExo4d videos*
@@ -89,10 +120,14 @@ Common issue: remember to put your access key into `~/.aws/credentials`.

*Download FineDiving videos*

-These are needed for 'medium' split. Follow the instructions in [the repo](https://github.com/xujinglin/FineDiving), download the whole thing, and set up a link to it:

-## Making the final dataset with videos
Install these packages:
```
pip install numpy Pillow datasets decord lmdb tqdm huggingface_hub
@@ -103,7 +138,7 @@ from data.load_dataset import load_dataset, load_all_videos
dataset = load_dataset(splits=['easy'], subset_mode="0")
videos = load_all_videos(dataset, cache=True, cache_dir="cache/cache_data")
```
-Here, `videos[0]` and `videos[1]` are lists of length `len(dataset)`. Each sample has two videos to compare, so for sample `i`, video A is `videos[0][i]` and video B is `videos[0][i]`. For video A, the video itself is `videos[0][i]['video']` and is a numpy array with shape `(nframes,3,H,W)`; the fps is in `videos[0][i]['

By passing the argument `cache=True` to `load_all_videos`, we create a cache directory at `cache/cache_data/`, and save copies of the videos using numpy memmap (total directory size for the whole dataset is 55Gb). Loading the videos and caching will take a few minutes per split (faster for the 'easy' split), and about 25mins for the whole dataset. But on subsequent runs, it should be fast - a few seconds for the whole dataset.
@@ -121,14 +156,15 @@ The videos retain the license of the original dataset creators, and the source d

## Citation
-```
-(google the paper "Video action differencing" to cite)
```

-Please also cite the original source datasets. This is all of them, as taken from their own websites or google scholar:
-```
@inproceedings{cai2022humman,
  title={{HuMMan}: Multi-modal 4d human dataset for versatile sensing and modeling},
  author={Cai, Zhongang and Ren, Daxuan and Zeng, Ailing and Lin, Zhengyu and Yu, Tao and Wang, Wenjia and Fan,
  - split: test
    path: data/test-*
---
+# Dataset card for "VidDiffBench" benchmark
+This dataset is the benchmark for [Video Action Differencing](https://openreview.net/forum?id=3bcN6xlO6f) (ICLR 2025). Video Action Differencing is a new task that compares how an action is performed between two videos.

+This page explains the dataset structure and how to download it. See the paper for details on dataset construction. The code for running evaluation, benchmarking popular LMMs, and implementing our method is at [https://jmhb0.github.io/viddiff](https://jmhb0.github.io/viddiff).

+```
+@inproceedings{burgessvideo,
+  title={Video Action Differencing},
+  author={Burgess, James and Wang, Xiaohan and Zhang, Yuhui and Rau, Anita and Lozano, Alejandro and Dunlap, Lisa and Darrell, Trevor and Yeung-Levy, Serena},
+  booktitle={The Thirteenth International Conference on Learning Representations}
+}
+```

+# The Video Action Differencing task: closed and open evaluation
+The general task with a picture.

+Closed mode:
+(discuss a bit)

+Open mode:

+# Dataset structure
+Follow the next section to access the data: `dataset` is a HuggingFace dataset and `videos` is a list of two lists (video A's and video B's). For row `i`, video A is `videos[0][i]`, video B is `videos[1][i]`, and `dataset[i]` is the annotation of the differences between the two videos.

+The videos:
+- `videos[0][i]['video']` is a numpy array with shape `(nframes,H,W,3)`.
+- `videos[0][i]['fps_original']` is an int, the frames per second.

+The annotations:
+- `sample_key` a unique key.
+- `videos` metadata about the videos A and B used by the dataloader.
+- `action` an action key, like "fitness_2".
+- `action_name` a short action name, like "deadlift".
+- `action_description` a longer action description, like "a single free weight deadlift without any weight".
+- `source_dataset` the source dataset for the videos (but not the annotation), e.g. 'humman' [here](https://caizhongang.com/projects/HuMMan/).
+- `differences_annotated` a dict of
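As an illustration, here is a minimal sketch of inspecting one annotation row, assuming `dataset` comes from `data.load_dataset.load_dataset(...)` as described in "Getting the data" below. The field names follow the list above; the index is arbitrary and the print statements are only illustrative:

```python
# Minimal sketch: look at one annotation row (assumes `dataset` is loaded
# via the helpers described in "Getting the data" below).
i = 0
row = dataset[i]

print(row["sample_key"])              # unique key for this sample
print(row["action"], row["action_name"], row["action_description"])
print(row["source_dataset"])          # e.g. 'humman'
print(row["differences_annotated"])   # dict of annotated differences
print(row["videos"])                  # metadata about videos A and B, used by the dataloader
```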

+# Getting the data
+Getting the dataset requires a few steps. We distribute the annotations, but since we don't own the videos, you'll have to download them elsewhere.

+**Get the annotations**

First, get the annotations from the hub like this:
```
dataset = load_dataset(repo_name)
```
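For reference, a self-contained version of that snippet might look like the following, using the `viddiff/VidDiffBench` repo id from this page; the printout is just a sanity check and not part of the documented workflow:

```python
from datasets import load_dataset

# Pull just the annotations from the Hugging Face Hub.
repo_name = "viddiff/VidDiffBench"
dataset = load_dataset(repo_name)

# Sanity check: which splits and annotation columns came down.
print(dataset)
for split_name, split in dataset.items():
    print(split_name, len(split), split.column_names)
```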

+**Get the videos**

+We get videos from prior works (which should be cited if you use the benchmark - see the end of this doc).
+The source dataset for each sample is in the dataset column `source_dataset`.

+First, download some `.py` files from this repo into your local `data/` folder:
```
GIT_LFS_SKIP_SMUDGE=1 git clone [email protected]:datasets/viddiff/VidDiffBench data/
```
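If you'd rather not use git, a rough equivalent using `huggingface_hub`'s `snapshot_download` (the package appears in the pip install list further down) is sketched below. Treat it as an untested convenience, not the documented path:

```python
from huggingface_hub import snapshot_download

# Fetch only the Python helper scripts from the dataset repo into data/,
# mirroring the GIT_LFS_SKIP_SMUDGE clone above (large data files are skipped).
snapshot_download(
    repo_id="viddiff/VidDiffBench",
    repo_type="dataset",
    local_dir="data",
    allow_patterns=["*.py"],
)
```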

A few datasets let us redistribute videos, so you can download them from this HF repo like this:
```
python data/download_data.py
```

+If you ONLY need the 'easy' split, you can stop here. The videos included here come from the source datasets [HuMMan](https://caizhongang.com/projects/HuMMan/) (and 'easy' only draws from this data) and [JIGSAWS](https://cirl.lcsr.jhu.edu/research/hmm/datasets/jigsaws_release/).

+For the 'medium' and 'hard' splits, you'll also need to download videos from two other datasets, EgoExo4D and FineDiving. Here's how to do that:

*Download EgoExo4d videos*

*Download FineDiving videos*

+These are needed for the 'medium' split. Follow the instructions in [the repo](https://github.com/xujinglin/FineDiving) to request access (it takes at least a day), download the whole thing, and set up a link to it:
+```
+ln -s <path_to_finediving> data/src_FineDiving
+```

+**Making the final dataset with videos**

Install these packages:
```
pip install numpy Pillow datasets decord lmdb tqdm huggingface_hub
```

Then load the annotations and videos:
```
from data.load_dataset import load_dataset, load_all_videos
dataset = load_dataset(splits=['easy'], subset_mode="0")
videos = load_all_videos(dataset, cache=True, cache_dir="cache/cache_data")
```
+Here, `videos[0]` and `videos[1]` are lists of length `len(dataset)`. Each sample has two videos to compare, so for sample `i`, video A is `videos[0][i]` and video B is `videos[1][i]`. For video A, the video itself is `videos[0][i]['video']` and is a numpy array with shape `(nframes,H,W,3)`; the fps is in `videos[0][i]['fps_original']`.
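To make the indexing concrete, here is a minimal sketch of pulling out one pair of videos, assuming `dataset` and `videos` come from the snippet above; nothing here goes beyond what the paragraph describes:

```python
# Minimal sketch: access the paired videos for sample i.
i = 0
video_a, video_b = videos[0][i], videos[1][i]

frames_a = video_a["video"]            # numpy array of frames for video A
frames_b = video_b["video"]            # numpy array of frames for video B
print(frames_a.shape, frames_b.shape)  # (nframes, H, W, 3) per the structure section
print(video_a["fps_original"], video_b["fps_original"])  # original frame rates
```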

By passing the argument `cache=True` to `load_all_videos`, we create a cache directory at `cache/cache_data/` and save copies of the videos using numpy memmap (the total directory size for the whole dataset is 55 GB). Loading the videos and caching takes a few minutes per split (faster for the 'easy' split), and about 25 minutes for the whole dataset. On subsequent runs it should be fast - a few seconds for the whole dataset.

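If you want to confirm the cache is being hit, a quick timing check along these lines should show the second call returning in seconds; this is only an illustrative sketch reusing `dataset` from above:

```python
import time

# First call populates cache/cache_data/ (slow); the second call should hit
# the numpy memmap cache and return in a few seconds.
for attempt in range(2):
    t0 = time.perf_counter()
    videos = load_all_videos(dataset, cache=True, cache_dir="cache/cache_data")
    print(f"load {attempt}: {time.perf_counter() - t0:.1f}s")
```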

## Citation
+Below are the citations for our paper and the original source datasets:
```
+@inproceedings{burgessvideo,
+  title={Video Action Differencing},
+  author={Burgess, James and Wang, Xiaohan and Zhang, Yuhui and Rau, Anita and Lozano, Alejandro and Dunlap, Lisa and Darrell, Trevor and Yeung-Levy, Serena},
+  booktitle={The Thirteenth International Conference on Learning Representations}
+}

@inproceedings{cai2022humman,
  title={{HuMMan}: Multi-modal 4d human dataset for versatile sensing and modeling},
  author={Cai, Zhongang and Ren, Daxuan and Zeng, Ailing and Lin, Zhengyu and Yu, Tao and Wang, Wenjia and Fan,