---
license: apache-2.0
language:
- en
tags:
- Video
- Image-to-Video
- Text
size_categories:
- n<1K
---

<a href="" target="_blank">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-pisa--experiments-red?logo=arxiv" height="20" /></a>
<a href="https://vision-x-nyu.github.io/pisa-experiments.github.io/" target="_blank">
<img alt="Website" src="https://img.shields.io/badge/🌎_Website-pisa--experiments-blue.svg" height="20" /></a>
<a href="https://github.com/vision-x-nyu/pisa-experiments" target="_blank" style="display: inline-block; margin-right: 10px;">
<img alt="GitHub Code" src="https://img.shields.io/badge/Code-pisa--experiments-white?&logo=github&logoColor=white" /></a>

# PISA Experiments

This repository contains PisaBench, the training data, and the model checkpoints introduced in [PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop](https://arxiv.org/pdf/).

## PisaBench

### Real-World Videos

We curate a dataset of 361 videos demonstrating the dropping task. Each video begins with an object suspended by an invisible wire in the first frame. We cut each clip to begin as soon as the wire is released, and we record the videos in slow motion at 120 frames per second (fps) with cellphone cameras mounted on tripods to eliminate camera motion.

We save each video in the following format:

```
├── 00000.jpg
├── 00001.jpg
...
├── movie.mp4
└── clip_info.json
```

* `clip_info.json` is a JSON file that contains positive/negative point annotations and text descriptions for each video.

Real-world videos can be found at: `pisabench/real.zip`.
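
For example, a single clip can be loaded as follows. This is a minimal sketch: the clip directory name is hypothetical, and the exact schema of `clip_info.json` should be inspected rather than assumed.

```python
import json
from pathlib import Path

from PIL import Image

clip_dir = Path("pisabench/real/clip_0000")  # hypothetical clip directory

# Load the frames in temporal order (00000.jpg, 00001.jpg, ...).
frames = [Image.open(p) for p in sorted(clip_dir.glob("*.jpg"))]

# Load the per-clip annotations and list the available keys.
with open(clip_dir / "clip_info.json") as f:
    clip_info = json.load(f)

print(len(frames), "frames; keys:", list(clip_info.keys()))
```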

### Simulated Test Videos

Since our post-training process uses a dataset of simulated videos, we also create a simulated test set of 60 videos for understanding sim2real transfer. We create two splits of 30 videos each: one featuring objects and backgrounds seen during training, and the other featuring unseen objects and backgrounds.

We save each video in the following format:

```bash
├── rbga_00000.jpg
├── rbga_00001.jpg
...
├── movie.gif
├── mask.npz
└── clip_info.json
```

- `mask.npz` contains segmentation masks for all objects with shape `[V, N, H, W]`, where `V` is the number of video frames, `N` is the number of objects, `H` is the height, and `W` is the width.
- `clip_info.json` is a JSON file that contains annotations and text descriptions for each video.

Simulated test videos can be found at: `pisabench/sim.zip`.
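
The masks can be inspected with NumPy. A small sketch (the name of the array stored inside the `.npz` archive is not documented above, so list the keys first):

```python
import numpy as np

data = np.load("pisabench/sim/clip_0000/mask.npz")  # hypothetical clip path
print(data.files)  # discover the stored array name(s)

masks = data[data.files[0]]  # expected shape [V, N, H, W]
V, N, H, W = masks.shape
print(f"{V} frames, {N} objects, {H}x{W} masks")

# Example: per-frame foreground mask as the union over all objects.
foreground = masks.any(axis=1)  # shape [V, H, W]
```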

## Training Data

We use Google's [Kubric](https://github.com/google-research/kubric) to generate simulated physics videos. Kubric combines [PyBullet](https://pybullet.org/wordpress/) and [Blender](https://www.blender.org/), handling simulation and rendering seamlessly in a unified library.

We use the [Google Scanned Objects](https://research.google/blog/scanned-objects-by-google-research-a-dataset-of-3d-scanned-common-household-items/) (GSO) dataset, which is already supported in Kubric. GSO consists of ~1,000 high-quality 3D objects obtained from scans of a variety of everyday items.

Training data can be found at:

* Physics Supervised Fine-Tuning (PSFT): `training_data/psft.zip`.
* Object Reward Optimization (ORO): `training_data/oro.zip`.
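
To give a flavor of what a Kubric script looks like, here is a minimal scene in the spirit of Kubric's hello-world example. This is not our generation script; object names, positions, and output paths are illustrative only.

```python
import kubric as kb
from kubric.renderer.blender import Blender as KubricRenderer

# A simple drop-style setup: a static floor, a sphere above it,
# a light, and a camera.
scene = kb.Scene(resolution=(256, 256))
scene += kb.Cube(name="floor", scale=(10, 10, 0.1), position=(0, 0, -0.1), static=True)
scene += kb.Sphere(name="ball", scale=1, position=(0, 0, 1.0))
scene += kb.DirectionalLight(name="sun", position=(-1, -0.5, 3), look_at=(0, 0, 0), intensity=1.5)
scene += kb.PerspectiveCamera(name="camera", position=(3, -1, 4), look_at=(0, 0, 1))

# Render a single frame and save it.
renderer = KubricRenderer(scene)
frame = renderer.render_still()
kb.write_png(frame["rgba"], "output/frame.png")
```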

## Checkpoints

Our post-training approach is inspired by the two-stage pipeline commonly used for LLMs: supervised fine-tuning followed by reward modeling.

Checkpoints can be found at:

* Open-Sora + PSFT (base): `checkpoints/base`.
* base + ORO (Seg): `checkpoints/oro_seg`.
* base + ORO (Flow): `checkpoints/oro_flow`.
* base + ORO (Depth): `checkpoints/oro_depth`.

## Download Dataset

The full repository (PisaBench, training data, and checkpoints) can be downloaded using the following code:

```python
from huggingface_hub import snapshot_download

dataset_path = 'PATH'  # the local directory to save the downloaded dataset
snapshot_download("nyu-visionx/pisa-experiments", local_dir=dataset_path, repo_type='dataset')
```
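
The individual archives can then be extracted with Python's standard library, for example (paths as listed in the sections above):

```python
import zipfile
from pathlib import Path

dataset_path = Path('PATH')  # same directory as above

# Extract each benchmark archive next to its zip file.
for name in ["pisabench/real.zip", "pisabench/sim.zip"]:
    archive = dataset_path / name
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.parent)
```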

## Citation

```bibtex

```