LiuhanChen committed
Commit a71d323 · verified · 1 Parent(s): 97c92c0

Add files using upload-large-folder tool

=0.33.0 ADDED
@@ -0,0 +1,8 @@
+ usage: accelerate <command> [<args>]
+
+ positional arguments:
+   {config,estimate-memory,env,launch,tpu-config,test}
+                         accelerate command helpers
+
+ optional arguments:
+   -h, --help            show this help message and exit
=0.5.1 ADDED
File without changes
=1.37.0 ADDED
File without changes
=2.4.0 ADDED
File without changes
=4.43.3 ADDED
File without changes
causalvideovae/eval/RAFT/LICENSE ADDED
@@ -0,0 +1,29 @@
+ BSD 3-Clause License
+
+ Copyright (c) 2020, princeton-vl
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+ * Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+ * Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+ FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
causalvideovae/eval/RAFT/README.md ADDED
@@ -0,0 +1,80 @@
+ # RAFT
+ This repository contains the source code for our paper:
+
+ [RAFT: Recurrent All Pairs Field Transforms for Optical Flow](https://arxiv.org/pdf/2003.12039.pdf)<br/>
+ ECCV 2020 <br/>
+ Zachary Teed and Jia Deng<br/>
+
+ <img src="RAFT.png">
+
+ ## Requirements
+ The code has been tested with PyTorch 1.6 and CUDA 10.1.
+ ```Shell
+ conda create --name raft
+ conda activate raft
+ conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.1 matplotlib tensorboard scipy opencv -c pytorch
+ ```
+
+ ## Demos
+ Pretrained models can be downloaded by running
+ ```Shell
+ ./download_models.sh
+ ```
+ or downloaded from [Google Drive](https://drive.google.com/drive/folders/1sWDsfuZ3Up38EUQt7-JDTT1HcGHuJgvT?usp=sharing).
+
+ You can demo a trained model on a sequence of frames:
+ ```Shell
+ python demo.py --model=models/raft-things.pth --path=demo-frames
+ ```
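For orientation, here is a hedged Python sketch of what `demo.py` does internally. The module paths, the `InputPadder` helper (which pads H and W to multiples of 8), and the frame names are assumptions taken from the upstream RAFT repository rather than from this diff.

```python
# Hedged sketch of RAFT inference on two frames; see demo.py for the authoritative script.
import argparse
import numpy as np
import torch
from PIL import Image

from core.raft import RAFT                # assumed location of the model class
from core.utils.utils import InputPadder  # assumed helper: pads H, W to multiples of 8

def load_image(path, device):
    # HWC uint8 image -> 1x3xHxW float tensor, as RAFT expects
    img = np.array(Image.open(path)).astype(np.uint8)
    return torch.from_numpy(img).permute(2, 0, 1).float()[None].to(device)

args = argparse.Namespace(small=False, mixed_precision=False, alternate_corr=False)
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.DataParallel(RAFT(args))  # checkpoint keys are prefixed with "module."
model.load_state_dict(torch.load("models/raft-things.pth", map_location=device))
model = model.module.to(device).eval()

with torch.no_grad():
    image1 = load_image("demo-frames/frame_0016.png", device)  # illustrative file names
    image2 = load_image("demo-frames/frame_0017.png", device)
    padder = InputPadder(image1.shape)
    image1, image2 = padder.pad(image1, image2)
    flow_low, flow_up = model(image1, image2, iters=20, test_mode=True)
    print(flow_up.shape)  # 1 x 2 x H x W flow field at input resolution
```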
+
+ ## Required Data
+ To evaluate/train RAFT, you will need to download the required datasets.
+ * [FlyingChairs](https://lmb.informatik.uni-freiburg.de/resources/datasets/FlyingChairs.en.html#flyingchairs)
+ * [FlyingThings3D](https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html)
+ * [Sintel](http://sintel.is.tue.mpg.de/)
+ * [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow)
+ * [HD1K](http://hci-benchmark.iwr.uni-heidelberg.de/) (optional)
+
+
+ By default `datasets.py` will search for the datasets in these locations. You can create symbolic links from the `datasets` folder to wherever the datasets were downloaded.
+
+ ```Shell
+ ├── datasets
+     ├── Sintel
+         ├── test
+         ├── training
+     ├── KITTI
+         ├── testing
+         ├── training
+         ├── devkit
+     ├── FlyingChairs_release
+         ├── data
+     ├── FlyingThings3D
+         ├── frames_cleanpass
+         ├── frames_finalpass
+         ├── optical_flow
+ ```
+
+ ## Evaluation
+ You can evaluate a trained model using `evaluate.py`:
+ ```Shell
+ python evaluate.py --model=models/raft-things.pth --dataset=sintel --mixed_precision
+ ```
+
+ ## Training
+ We used the following training schedule in our paper (2 GPUs). Training logs will be written to the `runs` directory, which can be visualized using TensorBoard.
+ ```Shell
+ ./train_standard.sh
+ ```
+
+ If you have an RTX GPU, training can be accelerated using mixed precision. You can expect similar results in this setting (1 GPU).
+ ```Shell
+ ./train_mixed.sh
+ ```
+
+ ## (Optional) Efficient Implementation
+ You can optionally use our alternate (efficient) implementation by compiling the provided CUDA extension
+ ```Shell
+ cd alt_cuda_corr && python setup.py install && cd ..
+ ```
+ and running `demo.py` and `evaluate.py` with the `--alternate_corr` flag. Note that this implementation is somewhat slower than all-pairs, but uses significantly less GPU memory during the forward pass.
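For reference, a hedged sketch of what `alt_cuda_corr/setup.py` typically looks like when built with PyTorch's extension tooling; the `correlation.cpp` binding file name is an assumption (the kernel file itself appears in the next diff below).

```python
# Hypothetical sketch of alt_cuda_corr/setup.py; the repository's own file is authoritative.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="alt_cuda_corr",
    ext_modules=[
        CUDAExtension(
            name="alt_cuda_corr",
            sources=["correlation.cpp", "correlation_kernel.cu"],  # correlation.cpp is assumed
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```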
causalvideovae/eval/RAFT/alt_cuda_corr/correlation_kernel.cu ADDED
@@ -0,0 +1,324 @@
1
+ #include <torch/extension.h>
2
+ #include <cuda.h>
3
+ #include <cuda_runtime.h>
4
+ #include <vector>
5
+
6
+
7
+ #define BLOCK_H 4
8
+ #define BLOCK_W 8
9
+ #define BLOCK_HW BLOCK_H * BLOCK_W
10
+ #define CHANNEL_STRIDE 32
11
+
12
+
13
+ __forceinline__ __device__
14
+ bool within_bounds(int h, int w, int H, int W) {
15
+ return h >= 0 && h < H && w >= 0 && w < W;
16
+ }
17
+
18
+ template <typename scalar_t>
19
+ __global__ void corr_forward_kernel(
20
+ const torch::PackedTensorAccessor32<scalar_t,4,torch::RestrictPtrTraits> fmap1,
21
+ const torch::PackedTensorAccessor32<scalar_t,4,torch::RestrictPtrTraits> fmap2,
22
+ const torch::PackedTensorAccessor32<scalar_t,5,torch::RestrictPtrTraits> coords,
23
+ torch::PackedTensorAccessor32<scalar_t,5,torch::RestrictPtrTraits> corr,
24
+ int r)
25
+ {
26
+ const int b = blockIdx.x;
27
+ const int h0 = blockIdx.y * blockDim.x;
28
+ const int w0 = blockIdx.z * blockDim.y;
29
+ const int tid = threadIdx.x * blockDim.y + threadIdx.y;
30
+
31
+ const int H1 = fmap1.size(1);
32
+ const int W1 = fmap1.size(2);
33
+ const int H2 = fmap2.size(1);
34
+ const int W2 = fmap2.size(2);
35
+ const int N = coords.size(1);
36
+ const int C = fmap1.size(3);
37
+
38
+ __shared__ scalar_t f1[CHANNEL_STRIDE][BLOCK_HW+1];
39
+ __shared__ scalar_t f2[CHANNEL_STRIDE][BLOCK_HW+1];
40
+ __shared__ scalar_t x2s[BLOCK_HW];
41
+ __shared__ scalar_t y2s[BLOCK_HW];
42
+
43
+ for (int c=0; c<C; c+=CHANNEL_STRIDE) {
44
+ for (int k=0; k<BLOCK_HW; k+=BLOCK_HW/CHANNEL_STRIDE) {
45
+ int k1 = k + tid / CHANNEL_STRIDE;
46
+ int h1 = h0 + k1 / BLOCK_W;
47
+ int w1 = w0 + k1 % BLOCK_W;
48
+ int c1 = tid % CHANNEL_STRIDE;
49
+
50
+ auto fptr = fmap1[b][h1][w1];
51
+ if (within_bounds(h1, w1, H1, W1))
52
+ f1[c1][k1] = fptr[c+c1];
53
+ else
54
+ f1[c1][k1] = 0.0;
55
+ }
56
+
57
+ __syncthreads();
58
+
59
+ for (int n=0; n<N; n++) {
60
+ int h1 = h0 + threadIdx.x;
61
+ int w1 = w0 + threadIdx.y;
62
+ if (within_bounds(h1, w1, H1, W1)) {
63
+ x2s[tid] = coords[b][n][h1][w1][0];
64
+ y2s[tid] = coords[b][n][h1][w1][1];
65
+ }
66
+
67
+ scalar_t dx = x2s[tid] - floor(x2s[tid]);
68
+ scalar_t dy = y2s[tid] - floor(y2s[tid]);
69
+
70
+ int rd = 2*r + 1;
71
+ for (int iy=0; iy<rd+1; iy++) {
72
+ for (int ix=0; ix<rd+1; ix++) {
73
+ for (int k=0; k<BLOCK_HW; k+=BLOCK_HW/CHANNEL_STRIDE) {
74
+ int k1 = k + tid / CHANNEL_STRIDE;
75
+ int h2 = static_cast<int>(floor(y2s[k1]))-r+iy;
76
+ int w2 = static_cast<int>(floor(x2s[k1]))-r+ix;
77
+ int c2 = tid % CHANNEL_STRIDE;
78
+
79
+ auto fptr = fmap2[b][h2][w2];
80
+ if (within_bounds(h2, w2, H2, W2))
81
+ f2[c2][k1] = fptr[c+c2];
82
+ else
83
+ f2[c2][k1] = 0.0;
84
+ }
85
+
86
+ __syncthreads();
87
+
88
+ scalar_t s = 0.0;
89
+ for (int k=0; k<CHANNEL_STRIDE; k++)
90
+ s += f1[k][tid] * f2[k][tid];
91
+
92
+ int ix_nw = H1*W1*((iy-1) + rd*(ix-1));
93
+ int ix_ne = H1*W1*((iy-1) + rd*ix);
94
+ int ix_sw = H1*W1*(iy + rd*(ix-1));
95
+ int ix_se = H1*W1*(iy + rd*ix);
96
+
97
+ scalar_t nw = s * (dy) * (dx);
98
+ scalar_t ne = s * (dy) * (1-dx);
99
+ scalar_t sw = s * (1-dy) * (dx);
100
+ scalar_t se = s * (1-dy) * (1-dx);
101
+
102
+ scalar_t* corr_ptr = &corr[b][n][0][h1][w1];
103
+
104
+ if (iy > 0 && ix > 0 && within_bounds(h1, w1, H1, W1))
105
+ *(corr_ptr + ix_nw) += nw;
106
+
107
+ if (iy > 0 && ix < rd && within_bounds(h1, w1, H1, W1))
108
+ *(corr_ptr + ix_ne) += ne;
109
+
110
+ if (iy < rd && ix > 0 && within_bounds(h1, w1, H1, W1))
111
+ *(corr_ptr + ix_sw) += sw;
112
+
113
+ if (iy < rd && ix < rd && within_bounds(h1, w1, H1, W1))
114
+ *(corr_ptr + ix_se) += se;
115
+ }
116
+ }
117
+ }
118
+ }
119
+ }
120
+
121
+
122
+ template <typename scalar_t>
123
+ __global__ void corr_backward_kernel(
124
+ const torch::PackedTensorAccessor32<scalar_t,4,torch::RestrictPtrTraits> fmap1,
125
+ const torch::PackedTensorAccessor32<scalar_t,4,torch::RestrictPtrTraits> fmap2,
126
+ const torch::PackedTensorAccessor32<scalar_t,5,torch::RestrictPtrTraits> coords,
127
+ const torch::PackedTensorAccessor32<scalar_t,5,torch::RestrictPtrTraits> corr_grad,
128
+ torch::PackedTensorAccessor32<scalar_t,4,torch::RestrictPtrTraits> fmap1_grad,
129
+ torch::PackedTensorAccessor32<scalar_t,4,torch::RestrictPtrTraits> fmap2_grad,
130
+ torch::PackedTensorAccessor32<scalar_t,5,torch::RestrictPtrTraits> coords_grad,
131
+ int r)
132
+ {
133
+
134
+ const int b = blockIdx.x;
135
+ const int h0 = blockIdx.y * blockDim.x;
136
+ const int w0 = blockIdx.z * blockDim.y;
137
+ const int tid = threadIdx.x * blockDim.y + threadIdx.y;
138
+
139
+ const int H1 = fmap1.size(1);
140
+ const int W1 = fmap1.size(2);
141
+ const int H2 = fmap2.size(1);
142
+ const int W2 = fmap2.size(2);
143
+ const int N = coords.size(1);
144
+ const int C = fmap1.size(3);
145
+
146
+ __shared__ scalar_t f1[CHANNEL_STRIDE][BLOCK_HW+1];
147
+ __shared__ scalar_t f2[CHANNEL_STRIDE][BLOCK_HW+1];
148
+
149
+ __shared__ scalar_t f1_grad[CHANNEL_STRIDE][BLOCK_HW+1];
150
+ __shared__ scalar_t f2_grad[CHANNEL_STRIDE][BLOCK_HW+1];
151
+
152
+ __shared__ scalar_t x2s[BLOCK_HW];
153
+ __shared__ scalar_t y2s[BLOCK_HW];
154
+
155
+ for (int c=0; c<C; c+=CHANNEL_STRIDE) {
156
+
157
+ for (int k=0; k<BLOCK_HW; k+=BLOCK_HW/CHANNEL_STRIDE) {
158
+ int k1 = k + tid / CHANNEL_STRIDE;
159
+ int h1 = h0 + k1 / BLOCK_W;
160
+ int w1 = w0 + k1 % BLOCK_W;
161
+ int c1 = tid % CHANNEL_STRIDE;
162
+
163
+ auto fptr = fmap1[b][h1][w1];
164
+ if (within_bounds(h1, w1, H1, W1))
165
+ f1[c1][k1] = fptr[c+c1];
166
+ else
167
+ f1[c1][k1] = 0.0;
168
+
169
+ f1_grad[c1][k1] = 0.0;
170
+ }
171
+
172
+ __syncthreads();
173
+
174
+ int h1 = h0 + threadIdx.x;
175
+ int w1 = w0 + threadIdx.y;
176
+
177
+ for (int n=0; n<N; n++) {
178
+ x2s[tid] = coords[b][n][h1][w1][0];
179
+ y2s[tid] = coords[b][n][h1][w1][1];
180
+
181
+ scalar_t dx = x2s[tid] - floor(x2s[tid]);
182
+ scalar_t dy = y2s[tid] - floor(y2s[tid]);
183
+
184
+ int rd = 2*r + 1;
185
+ for (int iy=0; iy<rd+1; iy++) {
186
+ for (int ix=0; ix<rd+1; ix++) {
187
+ for (int k=0; k<BLOCK_HW; k+=BLOCK_HW/CHANNEL_STRIDE) {
188
+ int k1 = k + tid / CHANNEL_STRIDE;
189
+ int h2 = static_cast<int>(floor(y2s[k1]))-r+iy;
190
+ int w2 = static_cast<int>(floor(x2s[k1]))-r+ix;
191
+ int c2 = tid % CHANNEL_STRIDE;
192
+
193
+ auto fptr = fmap2[b][h2][w2];
194
+ if (within_bounds(h2, w2, H2, W2))
195
+ f2[c2][k1] = fptr[c+c2];
196
+ else
197
+ f2[c2][k1] = 0.0;
198
+
199
+ f2_grad[c2][k1] = 0.0;
200
+ }
201
+
202
+ __syncthreads();
203
+
204
+ const scalar_t* grad_ptr = &corr_grad[b][n][0][h1][w1];
205
+ scalar_t g = 0.0;
206
+
207
+ int ix_nw = H1*W1*((iy-1) + rd*(ix-1));
208
+ int ix_ne = H1*W1*((iy-1) + rd*ix);
209
+ int ix_sw = H1*W1*(iy + rd*(ix-1));
210
+ int ix_se = H1*W1*(iy + rd*ix);
211
+
212
+ if (iy > 0 && ix > 0 && within_bounds(h1, w1, H1, W1))
213
+ g += *(grad_ptr + ix_nw) * dy * dx;
214
+
215
+ if (iy > 0 && ix < rd && within_bounds(h1, w1, H1, W1))
216
+ g += *(grad_ptr + ix_ne) * dy * (1-dx);
217
+
218
+ if (iy < rd && ix > 0 && within_bounds(h1, w1, H1, W1))
219
+ g += *(grad_ptr + ix_sw) * (1-dy) * dx;
220
+
221
+ if (iy < rd && ix < rd && within_bounds(h1, w1, H1, W1))
222
+ g += *(grad_ptr + ix_se) * (1-dy) * (1-dx);
223
+
224
+ for (int k=0; k<CHANNEL_STRIDE; k++) {
225
+ f1_grad[k][tid] += g * f2[k][tid];
226
+ f2_grad[k][tid] += g * f1[k][tid];
227
+ }
228
+
229
+ for (int k=0; k<BLOCK_HW; k+=BLOCK_HW/CHANNEL_STRIDE) {
230
+ int k1 = k + tid / CHANNEL_STRIDE;
231
+ int h2 = static_cast<int>(floor(y2s[k1]))-r+iy;
232
+ int w2 = static_cast<int>(floor(x2s[k1]))-r+ix;
233
+ int c2 = tid % CHANNEL_STRIDE;
234
+
235
+ scalar_t* fptr = &fmap2_grad[b][h2][w2][0];
236
+ if (within_bounds(h2, w2, H2, W2))
237
+ atomicAdd(fptr+c+c2, f2_grad[c2][k1]);
238
+ }
239
+ }
240
+ }
241
+ }
242
+ __syncthreads();
243
+
244
+
245
+ for (int k=0; k<BLOCK_HW; k+=BLOCK_HW/CHANNEL_STRIDE) {
246
+ int k1 = k + tid / CHANNEL_STRIDE;
247
+ int h1 = h0 + k1 / BLOCK_W;
248
+ int w1 = w0 + k1 % BLOCK_W;
249
+ int c1 = tid % CHANNEL_STRIDE;
250
+
251
+ scalar_t* fptr = &fmap1_grad[b][h1][w1][0];
252
+ if (within_bounds(h1, w1, H1, W1))
253
+ fptr[c+c1] += f1_grad[c1][k1];
254
+ }
255
+ }
256
+ }
257
+
258
+
259
+
260
+ std::vector<torch::Tensor> corr_cuda_forward(
261
+ torch::Tensor fmap1,
262
+ torch::Tensor fmap2,
263
+ torch::Tensor coords,
264
+ int radius)
265
+ {
266
+ const auto B = coords.size(0);
267
+ const auto N = coords.size(1);
268
+ const auto H = coords.size(2);
269
+ const auto W = coords.size(3);
270
+
271
+ const auto rd = 2 * radius + 1;
272
+ auto opts = fmap1.options();
273
+ auto corr = torch::zeros({B, N, rd*rd, H, W}, opts);
274
+
275
+ const dim3 blocks(B, (H+BLOCK_H-1)/BLOCK_H, (W+BLOCK_W-1)/BLOCK_W);
276
+ const dim3 threads(BLOCK_H, BLOCK_W);
277
+
278
+ corr_forward_kernel<float><<<blocks, threads>>>(
279
+ fmap1.packed_accessor32<float,4,torch::RestrictPtrTraits>(),
280
+ fmap2.packed_accessor32<float,4,torch::RestrictPtrTraits>(),
281
+ coords.packed_accessor32<float,5,torch::RestrictPtrTraits>(),
282
+ corr.packed_accessor32<float,5,torch::RestrictPtrTraits>(),
283
+ radius);
284
+
285
+ return {corr};
286
+ }
287
+
288
+ std::vector<torch::Tensor> corr_cuda_backward(
289
+ torch::Tensor fmap1,
290
+ torch::Tensor fmap2,
291
+ torch::Tensor coords,
292
+ torch::Tensor corr_grad,
293
+ int radius)
294
+ {
295
+ const auto B = coords.size(0);
296
+ const auto N = coords.size(1);
297
+
298
+ const auto H1 = fmap1.size(1);
299
+ const auto W1 = fmap1.size(2);
300
+ const auto H2 = fmap2.size(1);
301
+ const auto W2 = fmap2.size(2);
302
+ const auto C = fmap1.size(3);
303
+
304
+ auto opts = fmap1.options();
305
+ auto fmap1_grad = torch::zeros({B, H1, W1, C}, opts);
306
+ auto fmap2_grad = torch::zeros({B, H2, W2, C}, opts);
307
+ auto coords_grad = torch::zeros({B, N, H1, W1, 2}, opts);
308
+
309
+ const dim3 blocks(B, (H1+BLOCK_H-1)/BLOCK_H, (W1+BLOCK_W-1)/BLOCK_W);
310
+ const dim3 threads(BLOCK_H, BLOCK_W);
311
+
312
+
313
+ corr_backward_kernel<float><<<blocks, threads>>>(
314
+ fmap1.packed_accessor32<float,4,torch::RestrictPtrTraits>(),
315
+ fmap2.packed_accessor32<float,4,torch::RestrictPtrTraits>(),
316
+ coords.packed_accessor32<float,5,torch::RestrictPtrTraits>(),
317
+ corr_grad.packed_accessor32<float,5,torch::RestrictPtrTraits>(),
318
+ fmap1_grad.packed_accessor32<float,4,torch::RestrictPtrTraits>(),
319
+ fmap2_grad.packed_accessor32<float,4,torch::RestrictPtrTraits>(),
320
+ coords_grad.packed_accessor32<float,5,torch::RestrictPtrTraits>(),
321
+ radius);
322
+
323
+ return {fmap1_grad, fmap2_grad, coords_grad};
324
+ }
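On the Python side, `corr_cuda_forward`/`corr_cuda_backward` are typically exposed through a small pybind11 wrapper and wired into autograd. Below is a hedged sketch under the assumption that the compiled module is named `alt_cuda_corr` and exposes the two functions as `forward`/`backward` (the repo's `core/corr.py` is authoritative). Note the kernels expect channels-last feature maps of shape `(B, H, W, C)` and coordinates of shape `(B, N, H, W, 2)`.

```python
# Hedged sketch of the autograd wrapper around the CUDA kernels above.
import torch
import alt_cuda_corr  # built by alt_cuda_corr/setup.py (assumed module name)

class AltCorr(torch.autograd.Function):
    @staticmethod
    def forward(ctx, fmap1, fmap2, coords, radius):
        # fmap1/fmap2: (B, H, W, C) contiguous float32; coords: (B, N, H, W, 2)
        ctx.save_for_backward(fmap1, fmap2, coords)
        ctx.radius = radius
        corr, = alt_cuda_corr.forward(fmap1, fmap2, coords, radius)
        return corr  # (B, N, (2*radius+1)**2, H, W)

    @staticmethod
    def backward(ctx, grad_corr):
        fmap1, fmap2, coords = ctx.saved_tensors
        grad_fmap1, grad_fmap2, grad_coords = alt_cuda_corr.backward(
            fmap1, fmap2, coords, grad_corr.contiguous(), ctx.radius)
        return grad_fmap1, grad_fmap2, grad_coords, None
```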
causalvideovae/eval/RAFT/core/__init__.py ADDED
File without changes
causalvideovae/eval/RAFT/core/datasets.py ADDED
@@ -0,0 +1,235 @@
1
+ # Data loading based on https://github.com/NVIDIA/flownet2-pytorch
2
+
3
+ import numpy as np
4
+ import torch
5
+ import torch.utils.data as data
6
+ import torch.nn.functional as F
7
+
8
+ import os
9
+ import math
10
+ import random
11
+ from glob import glob
12
+ import os.path as osp
13
+
14
+ from .utils import frame_utils
15
+ from .utils.augmentor import FlowAugmentor, SparseFlowAugmentor
16
+
17
+
18
+ class FlowDataset(data.Dataset):
19
+ def __init__(self, aug_params=None, sparse=False):
20
+ self.augmentor = None
21
+ self.sparse = sparse
22
+ if aug_params is not None:
23
+ if sparse:
24
+ self.augmentor = SparseFlowAugmentor(**aug_params)
25
+ else:
26
+ self.augmentor = FlowAugmentor(**aug_params)
27
+
28
+ self.is_test = False
29
+ self.init_seed = False
30
+ self.flow_list = []
31
+ self.image_list = []
32
+ self.extra_info = []
33
+
34
+ def __getitem__(self, index):
35
+
36
+ if self.is_test:
37
+ img1 = frame_utils.read_gen(self.image_list[index][0])
38
+ img2 = frame_utils.read_gen(self.image_list[index][1])
39
+ img1 = np.array(img1).astype(np.uint8)[..., :3]
40
+ img2 = np.array(img2).astype(np.uint8)[..., :3]
41
+ img1 = torch.from_numpy(img1).permute(2, 0, 1).float()
42
+ img2 = torch.from_numpy(img2).permute(2, 0, 1).float()
43
+ return img1, img2, self.extra_info[index]
44
+
45
+ if not self.init_seed:
46
+ worker_info = torch.utils.data.get_worker_info()
47
+ if worker_info is not None:
48
+ torch.manual_seed(worker_info.id)
49
+ np.random.seed(worker_info.id)
50
+ random.seed(worker_info.id)
51
+ self.init_seed = True
52
+
53
+ index = index % len(self.image_list)
54
+ valid = None
55
+ if self.sparse:
56
+ flow, valid = frame_utils.readFlowKITTI(self.flow_list[index])
57
+ else:
58
+ flow = frame_utils.read_gen(self.flow_list[index])
59
+
60
+ img1 = frame_utils.read_gen(self.image_list[index][0])
61
+ img2 = frame_utils.read_gen(self.image_list[index][1])
62
+
63
+ flow = np.array(flow).astype(np.float32)
64
+ img1 = np.array(img1).astype(np.uint8)
65
+ img2 = np.array(img2).astype(np.uint8)
66
+
67
+ # grayscale images
68
+ if len(img1.shape) == 2:
69
+ img1 = np.tile(img1[...,None], (1, 1, 3))
70
+ img2 = np.tile(img2[...,None], (1, 1, 3))
71
+ else:
72
+ img1 = img1[..., :3]
73
+ img2 = img2[..., :3]
74
+
75
+ if self.augmentor is not None:
76
+ if self.sparse:
77
+ img1, img2, flow, valid = self.augmentor(img1, img2, flow, valid)
78
+ else:
79
+ img1, img2, flow = self.augmentor(img1, img2, flow)
80
+
81
+ img1 = torch.from_numpy(img1).permute(2, 0, 1).float()
82
+ img2 = torch.from_numpy(img2).permute(2, 0, 1).float()
83
+ flow = torch.from_numpy(flow).permute(2, 0, 1).float()
84
+
85
+ if valid is not None:
86
+ valid = torch.from_numpy(valid)
87
+ else:
88
+ valid = (flow[0].abs() < 1000) & (flow[1].abs() < 1000)
89
+
90
+ return img1, img2, flow, valid.float()
91
+
92
+
93
+ def __rmul__(self, v):
94
+ self.flow_list = v * self.flow_list
95
+ self.image_list = v * self.image_list
96
+ return self
97
+
98
+ def __len__(self):
99
+ return len(self.image_list)
100
+
101
+
102
+ class MpiSintel(FlowDataset):
103
+ def __init__(self, aug_params=None, split='training', root='datasets/Sintel', dstype='clean'):
104
+ super(MpiSintel, self).__init__(aug_params)
105
+ flow_root = osp.join(root, split, 'flow')
106
+ image_root = osp.join(root, split, dstype)
107
+
108
+ if split == 'test':
109
+ self.is_test = True
110
+
111
+ for scene in os.listdir(image_root):
112
+ image_list = sorted(glob(osp.join(image_root, scene, '*.png')))
113
+ for i in range(len(image_list)-1):
114
+ self.image_list += [ [image_list[i], image_list[i+1]] ]
115
+ self.extra_info += [ (scene, i) ] # scene and frame_id
116
+
117
+ if split != 'test':
118
+ self.flow_list += sorted(glob(osp.join(flow_root, scene, '*.flo')))
119
+
120
+
121
+ class FlyingChairs(FlowDataset):
122
+ def __init__(self, aug_params=None, split='train', root='datasets/FlyingChairs_release/data'):
123
+ super(FlyingChairs, self).__init__(aug_params)
124
+
125
+ images = sorted(glob(osp.join(root, '*.ppm')))
126
+ flows = sorted(glob(osp.join(root, '*.flo')))
127
+ assert (len(images)//2 == len(flows))
128
+
129
+ split_list = np.loadtxt('chairs_split.txt', dtype=np.int32)
130
+ for i in range(len(flows)):
131
+ xid = split_list[i]
132
+ if (split=='training' and xid==1) or (split=='validation' and xid==2):
133
+ self.flow_list += [ flows[i] ]
134
+ self.image_list += [ [images[2*i], images[2*i+1]] ]
135
+
136
+
137
+ class FlyingThings3D(FlowDataset):
138
+ def __init__(self, aug_params=None, root='datasets/FlyingThings3D', dstype='frames_cleanpass'):
139
+ super(FlyingThings3D, self).__init__(aug_params)
140
+
141
+ for cam in ['left']:
142
+ for direction in ['into_future', 'into_past']:
143
+ image_dirs = sorted(glob(osp.join(root, dstype, 'TRAIN/*/*')))
144
+ image_dirs = sorted([osp.join(f, cam) for f in image_dirs])
145
+
146
+ flow_dirs = sorted(glob(osp.join(root, 'optical_flow/TRAIN/*/*')))
147
+ flow_dirs = sorted([osp.join(f, direction, cam) for f in flow_dirs])
148
+
149
+ for idir, fdir in zip(image_dirs, flow_dirs):
150
+ images = sorted(glob(osp.join(idir, '*.png')) )
151
+ flows = sorted(glob(osp.join(fdir, '*.pfm')) )
152
+ for i in range(len(flows)-1):
153
+ if direction == 'into_future':
154
+ self.image_list += [ [images[i], images[i+1]] ]
155
+ self.flow_list += [ flows[i] ]
156
+ elif direction == 'into_past':
157
+ self.image_list += [ [images[i+1], images[i]] ]
158
+ self.flow_list += [ flows[i+1] ]
159
+
160
+
161
+ class KITTI(FlowDataset):
162
+ def __init__(self, aug_params=None, split='training', root='datasets/KITTI'):
163
+ super(KITTI, self).__init__(aug_params, sparse=True)
164
+ if split == 'testing':
165
+ self.is_test = True
166
+
167
+ root = osp.join(root, split)
168
+ images1 = sorted(glob(osp.join(root, 'image_2/*_10.png')))
169
+ images2 = sorted(glob(osp.join(root, 'image_2/*_11.png')))
170
+
171
+ for img1, img2 in zip(images1, images2):
172
+ frame_id = img1.split('/')[-1]
173
+ self.extra_info += [ [frame_id] ]
174
+ self.image_list += [ [img1, img2] ]
175
+
176
+ if split == 'training':
177
+ self.flow_list = sorted(glob(osp.join(root, 'flow_occ/*_10.png')))
178
+
179
+
180
+ class HD1K(FlowDataset):
181
+ def __init__(self, aug_params=None, root='datasets/HD1k'):
182
+ super(HD1K, self).__init__(aug_params, sparse=True)
183
+
184
+ seq_ix = 0
185
+ while 1:
186
+ flows = sorted(glob(os.path.join(root, 'hd1k_flow_gt', 'flow_occ/%06d_*.png' % seq_ix)))
187
+ images = sorted(glob(os.path.join(root, 'hd1k_input', 'image_2/%06d_*.png' % seq_ix)))
188
+
189
+ if len(flows) == 0:
190
+ break
191
+
192
+ for i in range(len(flows)-1):
193
+ self.flow_list += [flows[i]]
194
+ self.image_list += [ [images[i], images[i+1]] ]
195
+
196
+ seq_ix += 1
197
+
198
+
199
+ def fetch_dataloader(args, TRAIN_DS='C+T+K+S+H'):
200
+ """ Create the data loader for the corresponding trainign set """
201
+
202
+ if args.stage == 'chairs':
203
+ aug_params = {'crop_size': args.image_size, 'min_scale': -0.1, 'max_scale': 1.0, 'do_flip': True}
204
+ train_dataset = FlyingChairs(aug_params, split='training')
205
+
206
+ elif args.stage == 'things':
207
+ aug_params = {'crop_size': args.image_size, 'min_scale': -0.4, 'max_scale': 0.8, 'do_flip': True}
208
+ clean_dataset = FlyingThings3D(aug_params, dstype='frames_cleanpass')
209
+ final_dataset = FlyingThings3D(aug_params, dstype='frames_finalpass')
210
+ train_dataset = clean_dataset + final_dataset
211
+
212
+ elif args.stage == 'sintel':
213
+ aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.6, 'do_flip': True}
214
+ things = FlyingThings3D(aug_params, dstype='frames_cleanpass')
215
+ sintel_clean = MpiSintel(aug_params, split='training', dstype='clean')
216
+ sintel_final = MpiSintel(aug_params, split='training', dstype='final')
217
+
218
+ if TRAIN_DS == 'C+T+K+S+H':
219
+ kitti = KITTI({'crop_size': args.image_size, 'min_scale': -0.3, 'max_scale': 0.5, 'do_flip': True})
220
+ hd1k = HD1K({'crop_size': args.image_size, 'min_scale': -0.5, 'max_scale': 0.2, 'do_flip': True})
221
+ train_dataset = 100*sintel_clean + 100*sintel_final + 200*kitti + 5*hd1k + things
222
+
223
+ elif TRAIN_DS == 'C+T+K/S':
224
+ train_dataset = 100*sintel_clean + 100*sintel_final + things
225
+
226
+ elif args.stage == 'kitti':
227
+ aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.4, 'do_flip': False}
228
+ train_dataset = KITTI(aug_params, split='training')
229
+
230
+ train_loader = data.DataLoader(train_dataset, batch_size=args.batch_size,
231
+ pin_memory=False, shuffle=True, num_workers=4, drop_last=True)
232
+
233
+ print('Training with %d image pairs' % len(train_dataset))
234
+ return train_loader
235
+
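A hedged sketch of how `fetch_dataloader` above is typically driven: it only reads `stage`, `image_size` and `batch_size` from `args`, but the datasets must already be laid out under `./datasets` (and `chairs_split.txt` must be in the working directory for the FlyingChairs split). The import path is an assumption.

```python
# Hedged usage sketch; imports/paths are assumptions and the datasets must be downloaded first.
import argparse
from causalvideovae.eval.RAFT.core import datasets

args = argparse.Namespace(stage="chairs", image_size=[368, 496], batch_size=6)
train_loader = datasets.fetch_dataloader(args)  # prints "Training with N image pairs"

img1, img2, flow, valid = next(iter(train_loader))
print(img1.shape, flow.shape, valid.shape)      # Bx3xHxW, Bx2xHxW, BxHxW
```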
causalvideovae/eval/RAFT/core/extractor.py ADDED
@@ -0,0 +1,267 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import torch.nn.functional as F
4
+
5
+
6
+ class ResidualBlock(nn.Module):
7
+ def __init__(self, in_planes, planes, norm_fn='group', stride=1):
8
+ super(ResidualBlock, self).__init__()
9
+
10
+ self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride)
11
+ self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
12
+ self.relu = nn.ReLU(inplace=True)
13
+
14
+ num_groups = planes // 8
15
+
16
+ if norm_fn == 'group':
17
+ self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
18
+ self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
19
+ if not stride == 1:
20
+ self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
21
+
22
+ elif norm_fn == 'batch':
23
+ self.norm1 = nn.BatchNorm2d(planes)
24
+ self.norm2 = nn.BatchNorm2d(planes)
25
+ if not stride == 1:
26
+ self.norm3 = nn.BatchNorm2d(planes)
27
+
28
+ elif norm_fn == 'instance':
29
+ self.norm1 = nn.InstanceNorm2d(planes)
30
+ self.norm2 = nn.InstanceNorm2d(planes)
31
+ if not stride == 1:
32
+ self.norm3 = nn.InstanceNorm2d(planes)
33
+
34
+ elif norm_fn == 'none':
35
+ self.norm1 = nn.Sequential()
36
+ self.norm2 = nn.Sequential()
37
+ if not stride == 1:
38
+ self.norm3 = nn.Sequential()
39
+
40
+ if stride == 1:
41
+ self.downsample = None
42
+
43
+ else:
44
+ self.downsample = nn.Sequential(
45
+ nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3)
46
+
47
+
48
+ def forward(self, x):
49
+ y = x
50
+ y = self.relu(self.norm1(self.conv1(y)))
51
+ y = self.relu(self.norm2(self.conv2(y)))
52
+
53
+ if self.downsample is not None:
54
+ x = self.downsample(x)
55
+
56
+ return self.relu(x+y)
57
+
58
+
59
+
60
+ class BottleneckBlock(nn.Module):
61
+ def __init__(self, in_planes, planes, norm_fn='group', stride=1):
62
+ super(BottleneckBlock, self).__init__()
63
+
64
+ self.conv1 = nn.Conv2d(in_planes, planes//4, kernel_size=1, padding=0)
65
+ self.conv2 = nn.Conv2d(planes//4, planes//4, kernel_size=3, padding=1, stride=stride)
66
+ self.conv3 = nn.Conv2d(planes//4, planes, kernel_size=1, padding=0)
67
+ self.relu = nn.ReLU(inplace=True)
68
+
69
+ num_groups = planes // 8
70
+
71
+ if norm_fn == 'group':
72
+ self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4)
73
+ self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4)
74
+ self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
75
+ if not stride == 1:
76
+ self.norm4 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
77
+
78
+ elif norm_fn == 'batch':
79
+ self.norm1 = nn.BatchNorm2d(planes//4)
80
+ self.norm2 = nn.BatchNorm2d(planes//4)
81
+ self.norm3 = nn.BatchNorm2d(planes)
82
+ if not stride == 1:
83
+ self.norm4 = nn.BatchNorm2d(planes)
84
+
85
+ elif norm_fn == 'instance':
86
+ self.norm1 = nn.InstanceNorm2d(planes//4)
87
+ self.norm2 = nn.InstanceNorm2d(planes//4)
88
+ self.norm3 = nn.InstanceNorm2d(planes)
89
+ if not stride == 1:
90
+ self.norm4 = nn.InstanceNorm2d(planes)
91
+
92
+ elif norm_fn == 'none':
93
+ self.norm1 = nn.Sequential()
94
+ self.norm2 = nn.Sequential()
95
+ self.norm3 = nn.Sequential()
96
+ if not stride == 1:
97
+ self.norm4 = nn.Sequential()
98
+
99
+ if stride == 1:
100
+ self.downsample = None
101
+
102
+ else:
103
+ self.downsample = nn.Sequential(
104
+ nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm4)
105
+
106
+
107
+ def forward(self, x):
108
+ y = x
109
+ y = self.relu(self.norm1(self.conv1(y)))
110
+ y = self.relu(self.norm2(self.conv2(y)))
111
+ y = self.relu(self.norm3(self.conv3(y)))
112
+
113
+ if self.downsample is not None:
114
+ x = self.downsample(x)
115
+
116
+ return self.relu(x+y)
117
+
118
+ class BasicEncoder(nn.Module):
119
+ def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0):
120
+ super(BasicEncoder, self).__init__()
121
+ self.norm_fn = norm_fn
122
+
123
+ if self.norm_fn == 'group':
124
+ self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64)
125
+
126
+ elif self.norm_fn == 'batch':
127
+ self.norm1 = nn.BatchNorm2d(64)
128
+
129
+ elif self.norm_fn == 'instance':
130
+ self.norm1 = nn.InstanceNorm2d(64)
131
+
132
+ elif self.norm_fn == 'none':
133
+ self.norm1 = nn.Sequential()
134
+
135
+ self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
136
+ self.relu1 = nn.ReLU(inplace=True)
137
+
138
+ self.in_planes = 64
139
+ self.layer1 = self._make_layer(64, stride=1)
140
+ self.layer2 = self._make_layer(96, stride=2)
141
+ self.layer3 = self._make_layer(128, stride=2)
142
+
143
+ # output convolution
144
+ self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1)
145
+
146
+ self.dropout = None
147
+ if dropout > 0:
148
+ self.dropout = nn.Dropout2d(p=dropout)
149
+
150
+ for m in self.modules():
151
+ if isinstance(m, nn.Conv2d):
152
+ nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
153
+ elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)):
154
+ if m.weight is not None:
155
+ nn.init.constant_(m.weight, 1)
156
+ if m.bias is not None:
157
+ nn.init.constant_(m.bias, 0)
158
+
159
+ def _make_layer(self, dim, stride=1):
160
+ layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride)
161
+ layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1)
162
+ layers = (layer1, layer2)
163
+
164
+ self.in_planes = dim
165
+ return nn.Sequential(*layers)
166
+
167
+
168
+ def forward(self, x):
169
+
170
+ # if input is list, combine batch dimension
171
+ is_list = isinstance(x, tuple) or isinstance(x, list)
172
+ if is_list:
173
+ batch_dim = x[0].shape[0]
174
+ x = torch.cat(x, dim=0)
175
+
176
+ x = self.conv1(x)
177
+ x = self.norm1(x)
178
+ x = self.relu1(x)
179
+
180
+ x = self.layer1(x)
181
+ x = self.layer2(x)
182
+ x = self.layer3(x)
183
+
184
+ x = self.conv2(x)
185
+
186
+ if self.training and self.dropout is not None:
187
+ x = self.dropout(x)
188
+
189
+ if is_list:
190
+ x = torch.split(x, [batch_dim, batch_dim], dim=0)
191
+
192
+ return x
193
+
194
+
195
+ class SmallEncoder(nn.Module):
196
+ def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0):
197
+ super(SmallEncoder, self).__init__()
198
+ self.norm_fn = norm_fn
199
+
200
+ if self.norm_fn == 'group':
201
+ self.norm1 = nn.GroupNorm(num_groups=8, num_channels=32)
202
+
203
+ elif self.norm_fn == 'batch':
204
+ self.norm1 = nn.BatchNorm2d(32)
205
+
206
+ elif self.norm_fn == 'instance':
207
+ self.norm1 = nn.InstanceNorm2d(32)
208
+
209
+ elif self.norm_fn == 'none':
210
+ self.norm1 = nn.Sequential()
211
+
212
+ self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3)
213
+ self.relu1 = nn.ReLU(inplace=True)
214
+
215
+ self.in_planes = 32
216
+ self.layer1 = self._make_layer(32, stride=1)
217
+ self.layer2 = self._make_layer(64, stride=2)
218
+ self.layer3 = self._make_layer(96, stride=2)
219
+
220
+ self.dropout = None
221
+ if dropout > 0:
222
+ self.dropout = nn.Dropout2d(p=dropout)
223
+
224
+ self.conv2 = nn.Conv2d(96, output_dim, kernel_size=1)
225
+
226
+ for m in self.modules():
227
+ if isinstance(m, nn.Conv2d):
228
+ nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
229
+ elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)):
230
+ if m.weight is not None:
231
+ nn.init.constant_(m.weight, 1)
232
+ if m.bias is not None:
233
+ nn.init.constant_(m.bias, 0)
234
+
235
+ def _make_layer(self, dim, stride=1):
236
+ layer1 = BottleneckBlock(self.in_planes, dim, self.norm_fn, stride=stride)
237
+ layer2 = BottleneckBlock(dim, dim, self.norm_fn, stride=1)
238
+ layers = (layer1, layer2)
239
+
240
+ self.in_planes = dim
241
+ return nn.Sequential(*layers)
242
+
243
+
244
+ def forward(self, x):
245
+
246
+ # if input is list, combine batch dimension
247
+ is_list = isinstance(x, tuple) or isinstance(x, list)
248
+ if is_list:
249
+ batch_dim = x[0].shape[0]
250
+ x = torch.cat(x, dim=0)
251
+
252
+ x = self.conv1(x)
253
+ x = self.norm1(x)
254
+ x = self.relu1(x)
255
+
256
+ x = self.layer1(x)
257
+ x = self.layer2(x)
258
+ x = self.layer3(x)
259
+ x = self.conv2(x)
260
+
261
+ if self.training and self.dropout is not None:
262
+ x = self.dropout(x)
263
+
264
+ if is_list:
265
+ x = torch.split(x, [batch_dim, batch_dim], dim=0)
266
+
267
+ return x
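A quick sanity-check sketch for the two encoders above (assuming `BasicEncoder`/`SmallEncoder` from this module are importable): both reduce spatial resolution by 8x (a stride-2 stem plus two stride-2 stages), and a tuple input is concatenated along the batch dimension and split again on output.

```python
import torch
# assumes BasicEncoder / SmallEncoder defined above are in scope or imported from this module

fnet = BasicEncoder(output_dim=256, norm_fn="instance", dropout=0.0)
cnet = SmallEncoder(output_dim=128, norm_fn="none", dropout=0.0)

x = torch.randn(1, 3, 256, 512)
print(fnet(x).shape)  # torch.Size([1, 256, 32, 64]) -- 1/8 resolution
print(cnet(x).shape)  # torch.Size([1, 128, 32, 64])

f1, f2 = fnet((x, x))  # tuple input: concatenated along batch, then split back into two maps
print(f1.shape, f2.shape)
```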
causalvideovae/eval/RAFT/models.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4be6101b271f58ec49866da5cf609fd17e86e9cae2483f70630ef4a295dc66bd
+ size 81977417
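`models.zip` is stored as a Git LFS pointer, so the actual archive is fetched with `git lfs pull`. A small sketch to confirm the download matches the recorded digest and size:

```python
# Verify the fetched models.zip against the LFS pointer above.
import hashlib
import os

path = "causalvideovae/eval/RAFT/models.zip"
expected = "4be6101b271f58ec49866da5cf609fd17e86e9cae2483f70630ef4a295dc66bd"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print(os.path.getsize(path) == 81977417, h.hexdigest() == expected)
```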
causalvideovae/eval/cal_ssim.py ADDED
@@ -0,0 +1,113 @@
+ import numpy as np
+ import torch
+ from tqdm import tqdm
+ import cv2
+
+ def ssim(img1, img2):
+     C1 = 0.01 ** 2
+     C2 = 0.03 ** 2
+     img1 = img1.astype(np.float64)
+     img2 = img2.astype(np.float64)
+     kernel = cv2.getGaussianKernel(11, 1.5)
+     window = np.outer(kernel, kernel.transpose())
+     mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]  # valid
+     mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
+     mu1_sq = mu1 ** 2
+     mu2_sq = mu2 ** 2
+     mu1_mu2 = mu1 * mu2
+     sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq
+     sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq
+     sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
+     ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
+                                                             (sigma1_sq + sigma2_sq + C2))
+     return ssim_map.mean()
+
+
+ def calculate_ssim_function(img1, img2):
+     # [0,1]
+     # ssim is the only metric extremely sensitive to gray being compared to b/w
+     if not img1.shape == img2.shape:
+         raise ValueError('Input images must have the same dimensions.')
+     if img1.ndim == 2:
+         return ssim(img1, img2)
+     elif img1.ndim == 3:
+         if img1.shape[0] == 3:
+             ssims = []
+             for i in range(3):
+                 ssims.append(ssim(img1[i], img2[i]))
+             return np.array(ssims).mean()
+         elif img1.shape[0] == 1:
+             return ssim(np.squeeze(img1), np.squeeze(img2))
+     else:
+         raise ValueError('Wrong input image dimensions.')
+
+ def trans(x):
+     return x
+
+ def calculate_ssim(videos1, videos2):
+     print("calculate_ssim...")
+
+     # videos [batch_size, timestamps, channel, h, w]
+
+     assert videos1.shape == videos2.shape
+
+     videos1 = trans(videos1)
+     videos2 = trans(videos2)
+
+     ssim_results = []
+
+     for video_num in tqdm(range(videos1.shape[0])):
+         # get a video
+         # video [timestamps, channel, h, w]
+         video1 = videos1[video_num]
+         video2 = videos2[video_num]
+
+         ssim_results_of_a_video = []
+         for clip_timestamp in range(len(video1)):
+             # get a img
+             # img [timestamps[x], channel, h, w]
+             # img [channel, h, w] numpy
+
+             img1 = video1[clip_timestamp].numpy()
+             img2 = video2[clip_timestamp].numpy()
+
+             # calculate ssim of a video
+             ssim_results_of_a_video.append(calculate_ssim_function(img1, img2))
+
+         ssim_results.append(ssim_results_of_a_video)
+
+     ssim_results = np.array(ssim_results)
+
+     ssim = {}
+     ssim_std = {}
+
+     for clip_timestamp in range(len(video1)):
+         ssim[clip_timestamp] = np.mean(ssim_results[:, clip_timestamp])
+         ssim_std[clip_timestamp] = np.std(ssim_results[:, clip_timestamp])
+
+     result = {
+         "value": ssim,
+         "value_std": ssim_std,
+         "video_setting": video1.shape,
+         "video_setting_name": "time, channel, height, width",
+     }
+
+     return result
+
+ # test code / using example
+
+ def main():
+     NUMBER_OF_VIDEOS = 8
+     VIDEO_LENGTH = 50
+     CHANNEL = 3
+     SIZE = 64
+     videos1 = torch.zeros(NUMBER_OF_VIDEOS, VIDEO_LENGTH, CHANNEL, SIZE, SIZE, requires_grad=False)
+     videos2 = torch.zeros(NUMBER_OF_VIDEOS, VIDEO_LENGTH, CHANNEL, SIZE, SIZE, requires_grad=False)
+     device = torch.device("cuda")
+
+     import json
+     result = calculate_ssim(videos1, videos2)
+     print(json.dumps(result, indent=4))
+
+ if __name__ == "__main__":
+     main()
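A hedged usage sketch: `calculate_ssim` expects CPU tensors shaped `[batch, time, channel, H, W]` with values in `[0, 1]` (the `C1`/`C2` constants in `ssim` assume a unit data range); the import path is an assumption.

```python
import torch
from causalvideovae.eval.cal_ssim import calculate_ssim  # import path is an assumption

# two batches of 4 videos, 8 frames each, 3x64x64, values already in [0, 1]
videos1 = torch.rand(4, 8, 3, 64, 64)
videos2 = torch.rand(4, 8, 3, 64, 64)

result = calculate_ssim(videos1, videos2)
print(result["value"])      # per-frame mean SSIM over the batch
print(result["value_std"])  # per-frame standard deviation
```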
causalvideovae/eval/flolpips/correlation/correlation.py ADDED
@@ -0,0 +1,397 @@
1
+ #!/usr/bin/env python
2
+
3
+ import torch
4
+
5
+ import cupy
6
+ import re
7
+
8
+ kernel_Correlation_rearrange = '''
9
+ extern "C" __global__ void kernel_Correlation_rearrange(
10
+ const int n,
11
+ const float* input,
12
+ float* output
13
+ ) {
14
+ int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x;
15
+
16
+ if (intIndex >= n) {
17
+ return;
18
+ }
19
+
20
+ int intSample = blockIdx.z;
21
+ int intChannel = blockIdx.y;
22
+
23
+ float fltValue = input[(((intSample * SIZE_1(input)) + intChannel) * SIZE_2(input) * SIZE_3(input)) + intIndex];
24
+
25
+ __syncthreads();
26
+
27
+ int intPaddedY = (intIndex / SIZE_3(input)) + 4;
28
+ int intPaddedX = (intIndex % SIZE_3(input)) + 4;
29
+ int intRearrange = ((SIZE_3(input) + 8) * intPaddedY) + intPaddedX;
30
+
31
+ output[(((intSample * SIZE_1(output) * SIZE_2(output)) + intRearrange) * SIZE_1(input)) + intChannel] = fltValue;
32
+ }
33
+ '''
34
+
35
+ kernel_Correlation_updateOutput = '''
36
+ extern "C" __global__ void kernel_Correlation_updateOutput(
37
+ const int n,
38
+ const float* rbot0,
39
+ const float* rbot1,
40
+ float* top
41
+ ) {
42
+ extern __shared__ char patch_data_char[];
43
+
44
+ float *patch_data = (float *)patch_data_char;
45
+
46
+ // First (upper left) position of kernel upper-left corner in current center position of neighborhood in image 1
47
+ int x1 = blockIdx.x + 4;
48
+ int y1 = blockIdx.y + 4;
49
+ int item = blockIdx.z;
50
+ int ch_off = threadIdx.x;
51
+
52
+ // Load 3D patch into shared memory
53
+ for (int j = 0; j < 1; j++) { // HEIGHT
54
+ for (int i = 0; i < 1; i++) { // WIDTH
55
+ int ji_off = (j + i) * SIZE_3(rbot0);
56
+ for (int ch = ch_off; ch < SIZE_3(rbot0); ch += 32) { // CHANNELS
57
+ int idx1 = ((item * SIZE_1(rbot0) + y1+j) * SIZE_2(rbot0) + x1+i) * SIZE_3(rbot0) + ch;
58
+ int idxPatchData = ji_off + ch;
59
+ patch_data[idxPatchData] = rbot0[idx1];
60
+ }
61
+ }
62
+ }
63
+
64
+ __syncthreads();
65
+
66
+ __shared__ float sum[32];
67
+
68
+ // Compute correlation
69
+ for (int top_channel = 0; top_channel < SIZE_1(top); top_channel++) {
70
+ sum[ch_off] = 0;
71
+
72
+ int s2o = top_channel % 9 - 4;
73
+ int s2p = top_channel / 9 - 4;
74
+
75
+ for (int j = 0; j < 1; j++) { // HEIGHT
76
+ for (int i = 0; i < 1; i++) { // WIDTH
77
+ int ji_off = (j + i) * SIZE_3(rbot0);
78
+ for (int ch = ch_off; ch < SIZE_3(rbot0); ch += 32) { // CHANNELS
79
+ int x2 = x1 + s2o;
80
+ int y2 = y1 + s2p;
81
+
82
+ int idxPatchData = ji_off + ch;
83
+ int idx2 = ((item * SIZE_1(rbot0) + y2+j) * SIZE_2(rbot0) + x2+i) * SIZE_3(rbot0) + ch;
84
+
85
+ sum[ch_off] += patch_data[idxPatchData] * rbot1[idx2];
86
+ }
87
+ }
88
+ }
89
+
90
+ __syncthreads();
91
+
92
+ if (ch_off == 0) {
93
+ float total_sum = 0;
94
+ for (int idx = 0; idx < 32; idx++) {
95
+ total_sum += sum[idx];
96
+ }
97
+ const int sumelems = SIZE_3(rbot0);
98
+ const int index = ((top_channel*SIZE_2(top) + blockIdx.y)*SIZE_3(top))+blockIdx.x;
99
+ top[index + item*SIZE_1(top)*SIZE_2(top)*SIZE_3(top)] = total_sum / (float)sumelems;
100
+ }
101
+ }
102
+ }
103
+ '''
104
+
105
+ kernel_Correlation_updateGradFirst = '''
106
+ #define ROUND_OFF 50000
107
+
108
+ extern "C" __global__ void kernel_Correlation_updateGradFirst(
109
+ const int n,
110
+ const int intSample,
111
+ const float* rbot0,
112
+ const float* rbot1,
113
+ const float* gradOutput,
114
+ float* gradFirst,
115
+ float* gradSecond
116
+ ) { for (int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x; intIndex < n; intIndex += blockDim.x * gridDim.x) {
117
+ int n = intIndex % SIZE_1(gradFirst); // channels
118
+ int l = (intIndex / SIZE_1(gradFirst)) % SIZE_3(gradFirst) + 4; // w-pos
119
+ int m = (intIndex / SIZE_1(gradFirst) / SIZE_3(gradFirst)) % SIZE_2(gradFirst) + 4; // h-pos
120
+
121
+ // round_off is a trick to enable integer division with ceil, even for negative numbers
122
+ // We use a large offset, for the inner part not to become negative.
123
+ const int round_off = ROUND_OFF;
124
+ const int round_off_s1 = round_off;
125
+
126
+ // We add round_off before_s1 the int division and subtract round_off after it, to ensure the formula matches ceil behavior:
127
+ int xmin = (l - 4 + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4)
128
+ int ymin = (m - 4 + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4)
129
+
130
+ // Same here:
131
+ int xmax = (l - 4 + round_off_s1) - round_off; // floor (l - 4)
132
+ int ymax = (m - 4 + round_off_s1) - round_off; // floor (m - 4)
133
+
134
+ float sum = 0;
135
+ if (xmax>=0 && ymax>=0 && (xmin<=SIZE_3(gradOutput)-1) && (ymin<=SIZE_2(gradOutput)-1)) {
136
+ xmin = max(0,xmin);
137
+ xmax = min(SIZE_3(gradOutput)-1,xmax);
138
+
139
+ ymin = max(0,ymin);
140
+ ymax = min(SIZE_2(gradOutput)-1,ymax);
141
+
142
+ for (int p = -4; p <= 4; p++) {
143
+ for (int o = -4; o <= 4; o++) {
144
+ // Get rbot1 data:
145
+ int s2o = o;
146
+ int s2p = p;
147
+ int idxbot1 = ((intSample * SIZE_1(rbot0) + (m+s2p)) * SIZE_2(rbot0) + (l+s2o)) * SIZE_3(rbot0) + n;
148
+ float bot1tmp = rbot1[idxbot1]; // rbot1[l+s2o,m+s2p,n]
149
+
150
+ // Index offset for gradOutput in following loops:
151
+ int op = (p+4) * 9 + (o+4); // index[o,p]
152
+ int idxopoffset = (intSample * SIZE_1(gradOutput) + op);
153
+
154
+ for (int y = ymin; y <= ymax; y++) {
155
+ for (int x = xmin; x <= xmax; x++) {
156
+ int idxgradOutput = (idxopoffset * SIZE_2(gradOutput) + y) * SIZE_3(gradOutput) + x; // gradOutput[x,y,o,p]
157
+ sum += gradOutput[idxgradOutput] * bot1tmp;
158
+ }
159
+ }
160
+ }
161
+ }
162
+ }
163
+ const int sumelems = SIZE_1(gradFirst);
164
+ const int bot0index = ((n * SIZE_2(gradFirst)) + (m-4)) * SIZE_3(gradFirst) + (l-4);
165
+ gradFirst[bot0index + intSample*SIZE_1(gradFirst)*SIZE_2(gradFirst)*SIZE_3(gradFirst)] = sum / (float)sumelems;
166
+ } }
167
+ '''
168
+
169
+ kernel_Correlation_updateGradSecond = '''
170
+ #define ROUND_OFF 50000
171
+
172
+ extern "C" __global__ void kernel_Correlation_updateGradSecond(
173
+ const int n,
174
+ const int intSample,
175
+ const float* rbot0,
176
+ const float* rbot1,
177
+ const float* gradOutput,
178
+ float* gradFirst,
179
+ float* gradSecond
180
+ ) { for (int intIndex = (blockIdx.x * blockDim.x) + threadIdx.x; intIndex < n; intIndex += blockDim.x * gridDim.x) {
181
+ int n = intIndex % SIZE_1(gradSecond); // channels
182
+ int l = (intIndex / SIZE_1(gradSecond)) % SIZE_3(gradSecond) + 4; // w-pos
183
+ int m = (intIndex / SIZE_1(gradSecond) / SIZE_3(gradSecond)) % SIZE_2(gradSecond) + 4; // h-pos
184
+
185
+ // round_off is a trick to enable integer division with ceil, even for negative numbers
186
+ // We use a large offset, for the inner part not to become negative.
187
+ const int round_off = ROUND_OFF;
188
+ const int round_off_s1 = round_off;
189
+
190
+ float sum = 0;
191
+ for (int p = -4; p <= 4; p++) {
192
+ for (int o = -4; o <= 4; o++) {
193
+ int s2o = o;
194
+ int s2p = p;
195
+
196
+ //Get X,Y ranges and clamp
197
+ // We add round_off before_s1 the int division and subtract round_off after it, to ensure the formula matches ceil behavior:
198
+ int xmin = (l - 4 - s2o + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4 - s2o)
199
+ int ymin = (m - 4 - s2p + round_off_s1 - 1) + 1 - round_off; // ceil (l - 4 - s2o)
200
+
201
+ // Same here:
202
+ int xmax = (l - 4 - s2o + round_off_s1) - round_off; // floor (l - 4 - s2o)
203
+ int ymax = (m - 4 - s2p + round_off_s1) - round_off; // floor (m - 4 - s2p)
204
+
205
+ if (xmax>=0 && ymax>=0 && (xmin<=SIZE_3(gradOutput)-1) && (ymin<=SIZE_2(gradOutput)-1)) {
206
+ xmin = max(0,xmin);
207
+ xmax = min(SIZE_3(gradOutput)-1,xmax);
208
+
209
+ ymin = max(0,ymin);
210
+ ymax = min(SIZE_2(gradOutput)-1,ymax);
211
+
212
+ // Get rbot0 data:
213
+ int idxbot0 = ((intSample * SIZE_1(rbot0) + (m-s2p)) * SIZE_2(rbot0) + (l-s2o)) * SIZE_3(rbot0) + n;
214
+ float bot0tmp = rbot0[idxbot0]; // rbot1[l+s2o,m+s2p,n]
215
+
216
+ // Index offset for gradOutput in following loops:
217
+ int op = (p+4) * 9 + (o+4); // index[o,p]
218
+ int idxopoffset = (intSample * SIZE_1(gradOutput) + op);
219
+
220
+ for (int y = ymin; y <= ymax; y++) {
221
+ for (int x = xmin; x <= xmax; x++) {
222
+ int idxgradOutput = (idxopoffset * SIZE_2(gradOutput) + y) * SIZE_3(gradOutput) + x; // gradOutput[x,y,o,p]
223
+ sum += gradOutput[idxgradOutput] * bot0tmp;
224
+ }
225
+ }
226
+ }
227
+ }
228
+ }
229
+ const int sumelems = SIZE_1(gradSecond);
230
+ const int bot1index = ((n * SIZE_2(gradSecond)) + (m-4)) * SIZE_3(gradSecond) + (l-4);
231
+ gradSecond[bot1index + intSample*SIZE_1(gradSecond)*SIZE_2(gradSecond)*SIZE_3(gradSecond)] = sum / (float)sumelems;
232
+ } }
233
+ '''
234
+
235
+ def cupy_kernel(strFunction, objVariables):
236
+ strKernel = globals()[strFunction]
237
+
238
+ while True:
239
+ objMatch = re.search('(SIZE_)([0-4])(\()([^\)]*)(\))', strKernel)
240
+
241
+ if objMatch is None:
242
+ break
243
+ # end
244
+
245
+ intArg = int(objMatch.group(2))
246
+
247
+ strTensor = objMatch.group(4)
248
+ intSizes = objVariables[strTensor].size()
249
+
250
+ strKernel = strKernel.replace(objMatch.group(), str(intSizes[intArg]))
251
+ # end
252
+
253
+ while True:
254
+ objMatch = re.search('(VALUE_)([0-4])(\()([^\)]+)(\))', strKernel)
255
+
256
+ if objMatch is None:
257
+ break
258
+ # end
259
+
260
+ intArgs = int(objMatch.group(2))
261
+ strArgs = objMatch.group(4).split(',')
262
+
263
+ strTensor = strArgs[0]
264
+ intStrides = objVariables[strTensor].stride()
265
+ strIndex = [ '((' + strArgs[intArg + 1].replace('{', '(').replace('}', ')').strip() + ')*' + str(intStrides[intArg]) + ')' for intArg in range(intArgs) ]
266
+
267
+ strKernel = strKernel.replace(objMatch.group(0), strTensor + '[' + str.join('+', strIndex) + ']')
268
+ # end
269
+
270
+ return strKernel
271
+ # end
272
+
273
+ @cupy.memoize(for_each_device=True)
274
+ def cupy_launch(strFunction, strKernel):
275
+ return cupy.RawKernel(strKernel, strFunction)
276
+ # end
277
+
278
+ class _FunctionCorrelation(torch.autograd.Function):
279
+ @staticmethod
280
+ def forward(self, first, second):
281
+ rbot0 = first.new_zeros([ first.shape[0], first.shape[2] + 8, first.shape[3] + 8, first.shape[1] ])
282
+ rbot1 = first.new_zeros([ first.shape[0], first.shape[2] + 8, first.shape[3] + 8, first.shape[1] ])
283
+
284
+ self.save_for_backward(first, second, rbot0, rbot1)
285
+
286
+ first = first.contiguous(); assert(first.is_cuda == True)
287
+ second = second.contiguous(); assert(second.is_cuda == True)
288
+
289
+ output = first.new_zeros([ first.shape[0], 81, first.shape[2], first.shape[3] ])
290
+
291
+ if first.is_cuda == True:
292
+ n = first.shape[2] * first.shape[3]
293
+ cupy_launch('kernel_Correlation_rearrange', cupy_kernel('kernel_Correlation_rearrange', {
294
+ 'input': first,
295
+ 'output': rbot0
296
+ }))(
297
+ grid=tuple([ int((n + 16 - 1) / 16), first.shape[1], first.shape[0] ]),
298
+ block=tuple([ 16, 1, 1 ]),
299
+ args=[ n, first.data_ptr(), rbot0.data_ptr() ]
300
+ )
301
+
302
+ n = second.shape[2] * second.shape[3]
303
+ cupy_launch('kernel_Correlation_rearrange', cupy_kernel('kernel_Correlation_rearrange', {
304
+ 'input': second,
305
+ 'output': rbot1
306
+ }))(
307
+ grid=tuple([ int((n + 16 - 1) / 16), second.shape[1], second.shape[0] ]),
308
+ block=tuple([ 16, 1, 1 ]),
309
+ args=[ n, second.data_ptr(), rbot1.data_ptr() ]
310
+ )
311
+
312
+ n = output.shape[1] * output.shape[2] * output.shape[3]
313
+ cupy_launch('kernel_Correlation_updateOutput', cupy_kernel('kernel_Correlation_updateOutput', {
314
+ 'rbot0': rbot0,
315
+ 'rbot1': rbot1,
316
+ 'top': output
317
+ }))(
318
+ grid=tuple([ output.shape[3], output.shape[2], output.shape[0] ]),
319
+ block=tuple([ 32, 1, 1 ]),
320
+ shared_mem=first.shape[1] * 4,
321
+ args=[ n, rbot0.data_ptr(), rbot1.data_ptr(), output.data_ptr() ]
322
+ )
323
+
324
+ elif first.is_cuda == False:
325
+ raise NotImplementedError()
326
+
327
+ # end
328
+
329
+ return output
330
+ # end
331
+
332
+ @staticmethod
333
+ def backward(self, gradOutput):
334
+ first, second, rbot0, rbot1 = self.saved_tensors
335
+
336
+ gradOutput = gradOutput.contiguous(); assert(gradOutput.is_cuda == True)
337
+
338
+ gradFirst = first.new_zeros([ first.shape[0], first.shape[1], first.shape[2], first.shape[3] ]) if self.needs_input_grad[0] == True else None
339
+ gradSecond = first.new_zeros([ first.shape[0], first.shape[1], first.shape[2], first.shape[3] ]) if self.needs_input_grad[1] == True else None
340
+
341
+ if first.is_cuda == True:
342
+ if gradFirst is not None:
343
+ for intSample in range(first.shape[0]):
344
+ n = first.shape[1] * first.shape[2] * first.shape[3]
345
+ cupy_launch('kernel_Correlation_updateGradFirst', cupy_kernel('kernel_Correlation_updateGradFirst', {
346
+ 'rbot0': rbot0,
347
+ 'rbot1': rbot1,
348
+ 'gradOutput': gradOutput,
349
+ 'gradFirst': gradFirst,
350
+ 'gradSecond': None
351
+ }))(
352
+ grid=tuple([ int((n + 512 - 1) / 512), 1, 1 ]),
353
+ block=tuple([ 512, 1, 1 ]),
354
+ args=[ n, intSample, rbot0.data_ptr(), rbot1.data_ptr(), gradOutput.data_ptr(), gradFirst.data_ptr(), None ]
355
+ )
356
+ # end
357
+ # end
358
+
359
+ if gradSecond is not None:
360
+ for intSample in range(first.shape[0]):
361
+ n = first.shape[1] * first.shape[2] * first.shape[3]
362
+ cupy_launch('kernel_Correlation_updateGradSecond', cupy_kernel('kernel_Correlation_updateGradSecond', {
363
+ 'rbot0': rbot0,
364
+ 'rbot1': rbot1,
365
+ 'gradOutput': gradOutput,
366
+ 'gradFirst': None,
367
+ 'gradSecond': gradSecond
368
+ }))(
369
+ grid=tuple([ int((n + 512 - 1) / 512), 1, 1 ]),
370
+ block=tuple([ 512, 1, 1 ]),
371
+ args=[ n, intSample, rbot0.data_ptr(), rbot1.data_ptr(), gradOutput.data_ptr(), None, gradSecond.data_ptr() ]
372
+ )
373
+ # end
374
+ # end
375
+
376
+ elif first.is_cuda == False:
377
+ raise NotImplementedError()
378
+
379
+ # end
380
+
381
+ return gradFirst, gradSecond
382
+ # end
383
+ # end
384
+
385
+ def FunctionCorrelation(tenFirst, tenSecond):
386
+ return _FunctionCorrelation.apply(tenFirst, tenSecond)
387
+ # end
388
+
389
+ class ModuleCorrelation(torch.nn.Module):
390
+ def __init__(self):
391
+ super(ModuleCorrelation, self).__init__()
392
+ # end
393
+
394
+ def forward(self, tenFirst, tenSecond):
395
+ return _FunctionCorrelation.apply(tenFirst, tenSecond)
396
+ # end
397
+ # end
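The kernels above are compiled on the fly with CuPy, so the layer only runs on CUDA tensors. A hedged usage sketch follows (the import path is an assumption); each output channel corresponds to one displacement in the 9x9 search window, hence 81 channels.

```python
# Hedged sketch: requires a CUDA device and CuPy; the import path is an assumption.
import torch
from causalvideovae.eval.flolpips.correlation.correlation import ModuleCorrelation

first = torch.randn(1, 64, 48, 64, device="cuda")
second = torch.randn(1, 64, 48, 64, device="cuda")

corr = ModuleCorrelation()
out = corr(first, second)
print(out.shape)  # torch.Size([1, 81, 48, 64]) -- one channel per (dx, dy) in a 9x9 window
```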
causalvideovae/eval/flolpips/pretrained_networks.py ADDED
@@ -0,0 +1,180 @@
1
+ from collections import namedtuple
2
+ import torch
3
+ from torchvision import models as tv
4
+
5
+ class squeezenet(torch.nn.Module):
6
+ def __init__(self, requires_grad=False, pretrained=True):
7
+ super(squeezenet, self).__init__()
8
+ pretrained_features = tv.squeezenet1_1(pretrained=pretrained).features
9
+ self.slice1 = torch.nn.Sequential()
10
+ self.slice2 = torch.nn.Sequential()
11
+ self.slice3 = torch.nn.Sequential()
12
+ self.slice4 = torch.nn.Sequential()
13
+ self.slice5 = torch.nn.Sequential()
14
+ self.slice6 = torch.nn.Sequential()
15
+ self.slice7 = torch.nn.Sequential()
16
+ self.N_slices = 7
17
+ for x in range(2):
18
+ self.slice1.add_module(str(x), pretrained_features[x])
19
+ for x in range(2,5):
20
+ self.slice2.add_module(str(x), pretrained_features[x])
21
+ for x in range(5, 8):
22
+ self.slice3.add_module(str(x), pretrained_features[x])
23
+ for x in range(8, 10):
24
+ self.slice4.add_module(str(x), pretrained_features[x])
25
+ for x in range(10, 11):
26
+ self.slice5.add_module(str(x), pretrained_features[x])
27
+ for x in range(11, 12):
28
+ self.slice6.add_module(str(x), pretrained_features[x])
29
+ for x in range(12, 13):
30
+ self.slice7.add_module(str(x), pretrained_features[x])
31
+ if not requires_grad:
32
+ for param in self.parameters():
33
+ param.requires_grad = False
34
+
35
+ def forward(self, X):
36
+ h = self.slice1(X)
37
+ h_relu1 = h
38
+ h = self.slice2(h)
39
+ h_relu2 = h
40
+ h = self.slice3(h)
41
+ h_relu3 = h
42
+ h = self.slice4(h)
43
+ h_relu4 = h
44
+ h = self.slice5(h)
45
+ h_relu5 = h
46
+ h = self.slice6(h)
47
+ h_relu6 = h
48
+ h = self.slice7(h)
49
+ h_relu7 = h
50
+ vgg_outputs = namedtuple("SqueezeOutputs", ['relu1','relu2','relu3','relu4','relu5','relu6','relu7'])
51
+ out = vgg_outputs(h_relu1,h_relu2,h_relu3,h_relu4,h_relu5,h_relu6,h_relu7)
52
+
53
+ return out
54
+
55
+
56
+ class alexnet(torch.nn.Module):
57
+ def __init__(self, requires_grad=False, pretrained=True):
58
+ super(alexnet, self).__init__()
59
+ alexnet_pretrained_features = tv.alexnet(pretrained=pretrained).features
60
+ self.slice1 = torch.nn.Sequential()
61
+ self.slice2 = torch.nn.Sequential()
62
+ self.slice3 = torch.nn.Sequential()
63
+ self.slice4 = torch.nn.Sequential()
64
+ self.slice5 = torch.nn.Sequential()
65
+ self.N_slices = 5
66
+ for x in range(2):
67
+ self.slice1.add_module(str(x), alexnet_pretrained_features[x])
68
+ for x in range(2, 5):
69
+ self.slice2.add_module(str(x), alexnet_pretrained_features[x])
70
+ for x in range(5, 8):
71
+ self.slice3.add_module(str(x), alexnet_pretrained_features[x])
72
+ for x in range(8, 10):
73
+ self.slice4.add_module(str(x), alexnet_pretrained_features[x])
74
+ for x in range(10, 12):
75
+ self.slice5.add_module(str(x), alexnet_pretrained_features[x])
76
+ if not requires_grad:
77
+ for param in self.parameters():
78
+ param.requires_grad = False
79
+
80
+ def forward(self, X):
81
+ h = self.slice1(X)
82
+ h_relu1 = h
83
+ h = self.slice2(h)
84
+ h_relu2 = h
85
+ h = self.slice3(h)
86
+ h_relu3 = h
87
+ h = self.slice4(h)
88
+ h_relu4 = h
89
+ h = self.slice5(h)
90
+ h_relu5 = h
91
+ alexnet_outputs = namedtuple("AlexnetOutputs", ['relu1', 'relu2', 'relu3', 'relu4', 'relu5'])
92
+ out = alexnet_outputs(h_relu1, h_relu2, h_relu3, h_relu4, h_relu5)
93
+
94
+ return out
95
+
96
+ class vgg16(torch.nn.Module):
97
+ def __init__(self, requires_grad=False, pretrained=True):
98
+ super(vgg16, self).__init__()
99
+ vgg_pretrained_features = tv.vgg16(pretrained=pretrained).features
100
+ self.slice1 = torch.nn.Sequential()
101
+ self.slice2 = torch.nn.Sequential()
102
+ self.slice3 = torch.nn.Sequential()
103
+ self.slice4 = torch.nn.Sequential()
104
+ self.slice5 = torch.nn.Sequential()
105
+ self.N_slices = 5
106
+ for x in range(4):
107
+ self.slice1.add_module(str(x), vgg_pretrained_features[x])
108
+ for x in range(4, 9):
109
+ self.slice2.add_module(str(x), vgg_pretrained_features[x])
110
+ for x in range(9, 16):
111
+ self.slice3.add_module(str(x), vgg_pretrained_features[x])
112
+ for x in range(16, 23):
113
+ self.slice4.add_module(str(x), vgg_pretrained_features[x])
114
+ for x in range(23, 30):
115
+ self.slice5.add_module(str(x), vgg_pretrained_features[x])
116
+ if not requires_grad:
117
+ for param in self.parameters():
118
+ param.requires_grad = False
119
+
120
+ def forward(self, X):
121
+ h = self.slice1(X)
122
+ h_relu1_2 = h
123
+ h = self.slice2(h)
124
+ h_relu2_2 = h
125
+ h = self.slice3(h)
126
+ h_relu3_3 = h
127
+ h = self.slice4(h)
128
+ h_relu4_3 = h
129
+ h = self.slice5(h)
130
+ h_relu5_3 = h
131
+ vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3'])
132
+ out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3)
133
+
134
+ return out
135
+
136
+
137
+
138
+ class resnet(torch.nn.Module):
139
+ def __init__(self, requires_grad=False, pretrained=True, num=18):
140
+ super(resnet, self).__init__()
141
+ if(num==18):
142
+ self.net = tv.resnet18(pretrained=pretrained)
143
+ elif(num==34):
144
+ self.net = tv.resnet34(pretrained=pretrained)
145
+ elif(num==50):
146
+ self.net = tv.resnet50(pretrained=pretrained)
147
+ elif(num==101):
148
+ self.net = tv.resnet101(pretrained=pretrained)
149
+ elif(num==152):
150
+ self.net = tv.resnet152(pretrained=pretrained)
151
+ self.N_slices = 5
152
+
153
+ self.conv1 = self.net.conv1
154
+ self.bn1 = self.net.bn1
155
+ self.relu = self.net.relu
156
+ self.maxpool = self.net.maxpool
157
+ self.layer1 = self.net.layer1
158
+ self.layer2 = self.net.layer2
159
+ self.layer3 = self.net.layer3
160
+ self.layer4 = self.net.layer4
161
+
162
+ def forward(self, X):
163
+ h = self.conv1(X)
164
+ h = self.bn1(h)
165
+ h = self.relu(h)
166
+ h_relu1 = h
167
+ h = self.maxpool(h)
168
+ h = self.layer1(h)
169
+ h_conv2 = h
170
+ h = self.layer2(h)
171
+ h_conv3 = h
172
+ h = self.layer3(h)
173
+ h_conv4 = h
174
+ h = self.layer4(h)
175
+ h_conv5 = h
176
+
177
+ outputs = namedtuple("Outputs", ['relu1','conv2','conv3','conv4','conv5'])
178
+ out = outputs(h_relu1, h_conv2, h_conv3, h_conv4, h_conv5)
179
+
180
+ return out
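
A minimal sketch (not part of the commit) of how these frozen backbones are typically queried for perceptual features; the import path, batch size, and input resolution are assumptions.

```python
import torch
# Assumed location; adjust to the actual package layout:
# from causalvideovae.eval.flolpips.pretrained_networks import vgg16

def vgg_feature_demo(vgg16_cls):
    net = vgg16_cls(requires_grad=False, pretrained=True).eval()
    frames = torch.randn(2, 3, 224, 224)     # a small batch of RGB frames
    with torch.no_grad():
        feats = net(frames)                  # namedtuple: relu1_2 ... relu5_3
    return {name: f.shape for name, f in zip(feats._fields, feats)}
```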
causalvideovae/eval/fvd/videogpt/pytorch_i3d.py ADDED
@@ -0,0 +1,322 @@
1
+ # Original code from https://github.com/piergiaj/pytorch-i3d
2
+ import torch
3
+ import torch.nn as nn
4
+ import torch.nn.functional as F
5
+ import numpy as np
6
+
7
+ class MaxPool3dSamePadding(nn.MaxPool3d):
8
+
9
+ def compute_pad(self, dim, s):
10
+ if s % self.stride[dim] == 0:
11
+ return max(self.kernel_size[dim] - self.stride[dim], 0)
12
+ else:
13
+ return max(self.kernel_size[dim] - (s % self.stride[dim]), 0)
14
+
15
+ def forward(self, x):
16
+ # compute 'same' padding
17
+ (batch, channel, t, h, w) = x.size()
18
+ out_t = np.ceil(float(t) / float(self.stride[0]))
19
+ out_h = np.ceil(float(h) / float(self.stride[1]))
20
+ out_w = np.ceil(float(w) / float(self.stride[2]))
21
+ pad_t = self.compute_pad(0, t)
22
+ pad_h = self.compute_pad(1, h)
23
+ pad_w = self.compute_pad(2, w)
24
+
25
+ pad_t_f = pad_t // 2
26
+ pad_t_b = pad_t - pad_t_f
27
+ pad_h_f = pad_h // 2
28
+ pad_h_b = pad_h - pad_h_f
29
+ pad_w_f = pad_w // 2
30
+ pad_w_b = pad_w - pad_w_f
31
+
32
+ pad = (pad_w_f, pad_w_b, pad_h_f, pad_h_b, pad_t_f, pad_t_b)
33
+ x = F.pad(x, pad)
34
+ return super(MaxPool3dSamePadding, self).forward(x)
35
+
36
+
37
+ class Unit3D(nn.Module):
38
+
39
+ def __init__(self, in_channels,
40
+ output_channels,
41
+ kernel_shape=(1, 1, 1),
42
+ stride=(1, 1, 1),
43
+ padding=0,
44
+ activation_fn=F.relu,
45
+ use_batch_norm=True,
46
+ use_bias=False,
47
+ name='unit_3d'):
48
+
49
+ """Initializes Unit3D module."""
50
+ super(Unit3D, self).__init__()
51
+
52
+ self._output_channels = output_channels
53
+ self._kernel_shape = kernel_shape
54
+ self._stride = stride
55
+ self._use_batch_norm = use_batch_norm
56
+ self._activation_fn = activation_fn
57
+ self._use_bias = use_bias
58
+ self.name = name
59
+ self.padding = padding
60
+
61
+ self.conv3d = nn.Conv3d(in_channels=in_channels,
62
+ out_channels=self._output_channels,
63
+ kernel_size=self._kernel_shape,
64
+ stride=self._stride,
65
+ padding=0, # we always want padding to be 0 here. We will dynamically pad based on input size in forward function
66
+ bias=self._use_bias)
67
+
68
+ if self._use_batch_norm:
69
+ self.bn = nn.BatchNorm3d(self._output_channels, eps=1e-5, momentum=0.001)
70
+
71
+ def compute_pad(self, dim, s):
72
+ if s % self._stride[dim] == 0:
73
+ return max(self._kernel_shape[dim] - self._stride[dim], 0)
74
+ else:
75
+ return max(self._kernel_shape[dim] - (s % self._stride[dim]), 0)
76
+
77
+
78
+ def forward(self, x):
79
+ # compute 'same' padding
80
+ (batch, channel, t, h, w) = x.size()
81
+ out_t = np.ceil(float(t) / float(self._stride[0]))
82
+ out_h = np.ceil(float(h) / float(self._stride[1]))
83
+ out_w = np.ceil(float(w) / float(self._stride[2]))
84
+ pad_t = self.compute_pad(0, t)
85
+ pad_h = self.compute_pad(1, h)
86
+ pad_w = self.compute_pad(2, w)
87
+
88
+ pad_t_f = pad_t // 2
89
+ pad_t_b = pad_t - pad_t_f
90
+ pad_h_f = pad_h // 2
91
+ pad_h_b = pad_h - pad_h_f
92
+ pad_w_f = pad_w // 2
93
+ pad_w_b = pad_w - pad_w_f
94
+
95
+ pad = (pad_w_f, pad_w_b, pad_h_f, pad_h_b, pad_t_f, pad_t_b)
96
+ x = F.pad(x, pad)
97
+
98
+ x = self.conv3d(x)
99
+ if self._use_batch_norm:
100
+ x = self.bn(x)
101
+ if self._activation_fn is not None:
102
+ x = self._activation_fn(x)
103
+ return x
104
+
105
+
106
+
107
+ class InceptionModule(nn.Module):
108
+ def __init__(self, in_channels, out_channels, name):
109
+ super(InceptionModule, self).__init__()
110
+
111
+ self.b0 = Unit3D(in_channels=in_channels, output_channels=out_channels[0], kernel_shape=[1, 1, 1], padding=0,
112
+ name=name+'/Branch_0/Conv3d_0a_1x1')
113
+ self.b1a = Unit3D(in_channels=in_channels, output_channels=out_channels[1], kernel_shape=[1, 1, 1], padding=0,
114
+ name=name+'/Branch_1/Conv3d_0a_1x1')
115
+ self.b1b = Unit3D(in_channels=out_channels[1], output_channels=out_channels[2], kernel_shape=[3, 3, 3],
116
+ name=name+'/Branch_1/Conv3d_0b_3x3')
117
+ self.b2a = Unit3D(in_channels=in_channels, output_channels=out_channels[3], kernel_shape=[1, 1, 1], padding=0,
118
+ name=name+'/Branch_2/Conv3d_0a_1x1')
119
+ self.b2b = Unit3D(in_channels=out_channels[3], output_channels=out_channels[4], kernel_shape=[3, 3, 3],
120
+ name=name+'/Branch_2/Conv3d_0b_3x3')
121
+ self.b3a = MaxPool3dSamePadding(kernel_size=[3, 3, 3],
122
+ stride=(1, 1, 1), padding=0)
123
+ self.b3b = Unit3D(in_channels=in_channels, output_channels=out_channels[5], kernel_shape=[1, 1, 1], padding=0,
124
+ name=name+'/Branch_3/Conv3d_0b_1x1')
125
+ self.name = name
126
+
127
+ def forward(self, x):
128
+ b0 = self.b0(x)
129
+ b1 = self.b1b(self.b1a(x))
130
+ b2 = self.b2b(self.b2a(x))
131
+ b3 = self.b3b(self.b3a(x))
132
+ return torch.cat([b0,b1,b2,b3], dim=1)
133
+
134
+
135
+ class InceptionI3d(nn.Module):
136
+ """Inception-v1 I3D architecture.
137
+ The model is introduced in:
138
+ Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
139
+ Joao Carreira, Andrew Zisserman
140
+ https://arxiv.org/pdf/1705.07750v1.pdf.
141
+ See also the Inception architecture, introduced in:
142
+ Going deeper with convolutions
143
+ Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed,
144
+ Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich.
145
+ http://arxiv.org/pdf/1409.4842v1.pdf.
146
+ """
147
+
148
+ # Endpoints of the model in order. During construction, all the endpoints up
149
+ # to a designated `final_endpoint` are returned in a dictionary as the
150
+ # second return value.
151
+ VALID_ENDPOINTS = (
152
+ 'Conv3d_1a_7x7',
153
+ 'MaxPool3d_2a_3x3',
154
+ 'Conv3d_2b_1x1',
155
+ 'Conv3d_2c_3x3',
156
+ 'MaxPool3d_3a_3x3',
157
+ 'Mixed_3b',
158
+ 'Mixed_3c',
159
+ 'MaxPool3d_4a_3x3',
160
+ 'Mixed_4b',
161
+ 'Mixed_4c',
162
+ 'Mixed_4d',
163
+ 'Mixed_4e',
164
+ 'Mixed_4f',
165
+ 'MaxPool3d_5a_2x2',
166
+ 'Mixed_5b',
167
+ 'Mixed_5c',
168
+ 'Logits',
169
+ 'Predictions',
170
+ )
171
+
172
+ def __init__(self, num_classes=400, spatial_squeeze=True,
173
+ final_endpoint='Logits', name='inception_i3d', in_channels=3, dropout_keep_prob=0.5):
174
+ """Initializes I3D model instance.
175
+ Args:
176
+ num_classes: The number of outputs in the logit layer (default 400, which
177
+ matches the Kinetics dataset).
178
+ spatial_squeeze: Whether to squeeze the spatial dimensions for the logits
179
+ before returning (default True).
180
+ final_endpoint: The model contains many possible endpoints.
181
+ `final_endpoint` specifies the last endpoint for the model to be built
182
+ up to. In addition to the output at `final_endpoint`, all the outputs
183
+ at endpoints up to `final_endpoint` will also be returned, in a
184
+ dictionary. `final_endpoint` must be one of
185
+ InceptionI3d.VALID_ENDPOINTS (default 'Logits').
186
+ name: A string (optional). The name of this module.
187
+ Raises:
188
+ ValueError: if `final_endpoint` is not recognized.
189
+ """
190
+
191
+ if final_endpoint not in self.VALID_ENDPOINTS:
192
+ raise ValueError('Unknown final endpoint %s' % final_endpoint)
193
+
194
+ super(InceptionI3d, self).__init__()
195
+ self._num_classes = num_classes
196
+ self._spatial_squeeze = spatial_squeeze
197
+ self._final_endpoint = final_endpoint
198
+ self.logits = None
199
+
200
+ if self._final_endpoint not in self.VALID_ENDPOINTS:
201
+ raise ValueError('Unknown final endpoint %s' % self._final_endpoint)
202
+
203
+ self.end_points = {}
204
+ end_point = 'Conv3d_1a_7x7'
205
+ self.end_points[end_point] = Unit3D(in_channels=in_channels, output_channels=64, kernel_shape=[7, 7, 7],
206
+ stride=(2, 2, 2), padding=(3,3,3), name=name+end_point)
207
+ if self._final_endpoint == end_point: return
208
+
209
+ end_point = 'MaxPool3d_2a_3x3'
210
+ self.end_points[end_point] = MaxPool3dSamePadding(kernel_size=[1, 3, 3], stride=(1, 2, 2),
211
+ padding=0)
212
+ if self._final_endpoint == end_point: return
213
+
214
+ end_point = 'Conv3d_2b_1x1'
215
+ self.end_points[end_point] = Unit3D(in_channels=64, output_channels=64, kernel_shape=[1, 1, 1], padding=0,
216
+ name=name+end_point)
217
+ if self._final_endpoint == end_point: return
218
+
219
+ end_point = 'Conv3d_2c_3x3'
220
+ self.end_points[end_point] = Unit3D(in_channels=64, output_channels=192, kernel_shape=[3, 3, 3], padding=1,
221
+ name=name+end_point)
222
+ if self._final_endpoint == end_point: return
223
+
224
+ end_point = 'MaxPool3d_3a_3x3'
225
+ self.end_points[end_point] = MaxPool3dSamePadding(kernel_size=[1, 3, 3], stride=(1, 2, 2),
226
+ padding=0)
227
+ if self._final_endpoint == end_point: return
228
+
229
+ end_point = 'Mixed_3b'
230
+ self.end_points[end_point] = InceptionModule(192, [64,96,128,16,32,32], name+end_point)
231
+ if self._final_endpoint == end_point: return
232
+
233
+ end_point = 'Mixed_3c'
234
+ self.end_points[end_point] = InceptionModule(256, [128,128,192,32,96,64], name+end_point)
235
+ if self._final_endpoint == end_point: return
236
+
237
+ end_point = 'MaxPool3d_4a_3x3'
238
+ self.end_points[end_point] = MaxPool3dSamePadding(kernel_size=[3, 3, 3], stride=(2, 2, 2),
239
+ padding=0)
240
+ if self._final_endpoint == end_point: return
241
+
242
+ end_point = 'Mixed_4b'
243
+ self.end_points[end_point] = InceptionModule(128+192+96+64, [192,96,208,16,48,64], name+end_point)
244
+ if self._final_endpoint == end_point: return
245
+
246
+ end_point = 'Mixed_4c'
247
+ self.end_points[end_point] = InceptionModule(192+208+48+64, [160,112,224,24,64,64], name+end_point)
248
+ if self._final_endpoint == end_point: return
249
+
250
+ end_point = 'Mixed_4d'
251
+ self.end_points[end_point] = InceptionModule(160+224+64+64, [128,128,256,24,64,64], name+end_point)
252
+ if self._final_endpoint == end_point: return
253
+
254
+ end_point = 'Mixed_4e'
255
+ self.end_points[end_point] = InceptionModule(128+256+64+64, [112,144,288,32,64,64], name+end_point)
256
+ if self._final_endpoint == end_point: return
257
+
258
+ end_point = 'Mixed_4f'
259
+ self.end_points[end_point] = InceptionModule(112+288+64+64, [256,160,320,32,128,128], name+end_point)
260
+ if self._final_endpoint == end_point: return
261
+
262
+ end_point = 'MaxPool3d_5a_2x2'
263
+ self.end_points[end_point] = MaxPool3dSamePadding(kernel_size=[2, 2, 2], stride=(2, 2, 2),
264
+ padding=0)
265
+ if self._final_endpoint == end_point: return
266
+
267
+ end_point = 'Mixed_5b'
268
+ self.end_points[end_point] = InceptionModule(256+320+128+128, [256,160,320,32,128,128], name+end_point)
269
+ if self._final_endpoint == end_point: return
270
+
271
+ end_point = 'Mixed_5c'
272
+ self.end_points[end_point] = InceptionModule(256+320+128+128, [384,192,384,48,128,128], name+end_point)
273
+ if self._final_endpoint == end_point: return
274
+
275
+ end_point = 'Logits'
276
+ self.avg_pool = nn.AvgPool3d(kernel_size=[2, 7, 7],
277
+ stride=(1, 1, 1))
278
+ self.dropout = nn.Dropout(dropout_keep_prob)
279
+ self.logits = Unit3D(in_channels=384+384+128+128, output_channels=self._num_classes,
280
+ kernel_shape=[1, 1, 1],
281
+ padding=0,
282
+ activation_fn=None,
283
+ use_batch_norm=False,
284
+ use_bias=True,
285
+ name='logits')
286
+
287
+ self.build()
288
+
289
+
290
+ def replace_logits(self, num_classes):
291
+ self._num_classes = num_classes
292
+ self.logits = Unit3D(in_channels=384+384+128+128, output_channels=self._num_classes,
293
+ kernel_shape=[1, 1, 1],
294
+ padding=0,
295
+ activation_fn=None,
296
+ use_batch_norm=False,
297
+ use_bias=True,
298
+ name='logits')
299
+
300
+
301
+ def build(self):
302
+ for k in self.end_points.keys():
303
+ self.add_module(k, self.end_points[k])
304
+
305
+ def forward(self, x):
306
+ for end_point in self.VALID_ENDPOINTS:
307
+ if end_point in self.end_points:
308
+ x = self._modules[end_point](x) # use _modules to work with dataparallel
309
+
310
+ x = self.logits(self.dropout(self.avg_pool(x)))
311
+ if self._spatial_squeeze:
312
+ logits = x.squeeze(3).squeeze(3)
313
+ logits = logits.mean(dim=2)
314
+ # logits is batch X time X classes, which is what we want to work with
315
+ return logits
316
+
317
+
318
+ def extract_features(self, x):
319
+ for end_point in self.VALID_ENDPOINTS:
320
+ if end_point in self.end_points:
321
+ x = self._modules[end_point](x)
322
+ return self.avg_pool(x)
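
A hedged sketch (not part of the commit) of pulling clip-level I3D features, the usual ingredient for FVD; the checkpoint path is a placeholder and the input normalization is assumed to match whatever pretrained weights are loaded.

```python
import torch
# from causalvideovae.eval.fvd.videogpt.pytorch_i3d import InceptionI3d  # assumed path

def i3d_clip_features(InceptionI3d, ckpt_path=None):
    i3d = InceptionI3d(num_classes=400, in_channels=3)
    if ckpt_path is not None:                  # e.g. a Kinetics-400 checkpoint (placeholder)
        i3d.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
    i3d.eval()
    clips = torch.randn(2, 3, 16, 224, 224)    # (B, C, T, H, W)
    with torch.no_grad():
        feats = i3d.extract_features(clips)    # (B, 1024, 1, 1, 1) for 16x224x224 clips
    return feats.flatten(1)                    # (B, 1024) feature vectors
```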
causalvideovae/model/__init__.py ADDED
@@ -0,0 +1,5 @@
1
+ from .causal_vae import (
2
+ CausalVAEModel, CausalVAEModelWrapper
3
+ )
4
+ from .refiner import Refiner
5
+ from .ema_model import EMA
causalvideovae/model/dataset_videobase.py ADDED
@@ -0,0 +1,178 @@
1
+ import os.path as osp
2
+ import random
3
+ from glob import glob
4
+
5
+ from torchvision import transforms
6
+ import numpy as np
7
+ import torch
8
+ import torch.utils.data as data
9
+ import torch.nn.functional as F
10
+ from torchvision.transforms import Lambda
11
+
12
+ from ..dataset.transform import ToTensorVideo, CenterCropVideo, RandomCropVideo
13
+ from ..utils.dataset_utils import DecordInit
14
+
15
+ def TemporalRandomCrop(total_frames, size):
16
+ """
17
+ Performs a random temporal crop on a video sequence.
18
+
19
+ This function randomly selects a continuous frame sequence of length `size` from a video sequence.
20
+ `total_frames` indicates the total number of frames in the video sequence, and `size` represents the length of the frame sequence to be cropped.
21
+
22
+ Parameters:
23
+ - total_frames (int): The total number of frames in the video sequence.
24
+ - size (int): The length of the frame sequence to be cropped.
25
+
26
+ Returns:
27
+ - (int, int): A tuple containing two integers. The first integer is the starting frame index of the cropped sequence,
28
+ and the second integer is the ending frame index (exclusive) of the cropped sequence.
29
+ """
30
+ rand_end = max(0, total_frames - size - 1)
31
+ begin_index = random.randint(0, rand_end)
32
+ end_index = min(begin_index + size, total_frames)
33
+ return begin_index, end_index
34
+
35
+ def resize(x, resolution):
36
+ height, width = x.shape[-2:]
37
+ resolution = min(2 * resolution, height, width)
38
+ aspect_ratio = width / height
39
+ if width <= height:
40
+ new_width = resolution
41
+ new_height = int(resolution / aspect_ratio)
42
+ else:
43
+ new_height = resolution
44
+ new_width = int(resolution * aspect_ratio)
45
+ resized_x = F.interpolate(x, size=(new_height, new_width), mode='bilinear', align_corners=True, antialias=True)
46
+ return resized_x
47
+
48
+ class VideoDataset(data.Dataset):
49
+ """ Generic dataset for video files stored in folders
50
+ Returns BCTHW videos in the range [-1, 1] """
51
+ video_exts = ['avi', 'mp4', 'webm']
52
+ def __init__(self, video_folder, sequence_length, image_folder=None, train=True, resolution=64, sample_rate=1, dynamic_sample=True):
53
+
54
+ self.train = train
55
+ self.sequence_length = sequence_length
56
+ self.sample_rate = sample_rate
57
+ self.resolution = resolution
58
+ self.v_decoder = DecordInit()
59
+ self.video_folder = video_folder
60
+ self.dynamic_sample = dynamic_sample
61
+
62
+ self.transform = transforms.Compose([
63
+ ToTensorVideo(),
64
+ # Lambda(lambda x: resize(x, self.resolution)),
65
+ CenterCropVideo(self.resolution),
66
+ Lambda(lambda x: 2.0 * x - 1.0)
67
+ ])
68
+ print('Building datasets...')
69
+ self.samples = self._make_dataset()
70
+
71
+ def _make_dataset(self):
72
+ samples = []
73
+ samples += sum([glob(osp.join(self.video_folder, '**', f'*.{ext}'), recursive=True)
74
+ for ext in self.video_exts], [])
75
+ return samples
76
+
77
+ def __len__(self):
78
+ return len(self.samples)
79
+
80
+ def __getitem__(self, idx):
81
+ video_path = self.samples[idx]
82
+ try:
83
+ video = self.decord_read(video_path)
84
+ video = self.transform(video) # T C H W -> T C H W
85
+ video = video.transpose(0, 1) # T C H W -> C T H W
86
+ return dict(video=video, label="")
87
+ except Exception as e:
88
+ print(f'Error with {e}, {video_path}')
89
+ return self.__getitem__(random.randint(0, self.__len__()-1))
90
+
91
+ def decord_read(self, path):
92
+ decord_vr = self.v_decoder(path)
93
+ total_frames = len(decord_vr)
94
+ # Sampling video frames
95
+ if self.dynamic_sample:
96
+ sample_rate = random.randint(1, self.sample_rate)
97
+ else:
98
+ sample_rate = self.sample_rate
99
+ size = self.sequence_length * sample_rate
100
+ start_frame_ind, end_frame_ind = TemporalRandomCrop(total_frames, size)
101
+ # assert end_frame_ind - start_frame_ind >= self.num_frames
102
+ frame_indice = np.linspace(start_frame_ind, end_frame_ind - 1, self.sequence_length, dtype=int)
103
+
104
+ video_data = decord_vr.get_batch(frame_indice).asnumpy()
105
+ video_data = torch.from_numpy(video_data)
106
+ video_data = video_data.permute(0, 3, 1, 2) # (T, H, W, C) -> (T C H W)
107
+ return video_data
108
+
109
+ class VideoDatasetRefiner(data.Dataset):
110
+ """ Generic dataset for video files stored in folders
111
+ Returns BCTHW videos in the range [-1, 1] """
112
+ video_exts = ['avi', 'mp4', 'webm']
113
+ def __init__(self, origin_video_folder, vae_video_folder, sequence_length, image_folder=None, train=True, resolution=64, sample_rate=1, dynamic_sample=True):
114
+
115
+ self.train = train
116
+ self.sequence_length = sequence_length
117
+ self.sample_rate = sample_rate
118
+ self.resolution = resolution
119
+ self.v_decoder = DecordInit()
120
+ self.origin_video_folder = origin_video_folder
121
+ self.vae_video_folder = vae_video_folder
122
+ self.dynamic_sample = dynamic_sample
123
+
124
+ self.transform = transforms.Compose([
125
+ ToTensorVideo(),
126
+ #Lambda(lambda x: resize(x, self.resolution)),
127
+ CenterCropVideo(self.resolution),
128
+ Lambda(lambda x: 2.0 * x - 1.0)
129
+ ])
130
+ print('Building datasets...')
131
+ self.origin_samples = self._make_dataset(self.origin_video_folder)
132
+ self.vae_samples = self._make_dataset(self.vae_video_folder)
133
+
134
+ def _make_dataset(self, dir):
135
+ samples = []
136
+ samples += sum([glob(osp.join(dir, '**', f'*.{ext}'), recursive=True)
137
+ for ext in self.video_exts], [])
138
+ return sorted(samples)
139
+
140
+ def __len__(self):
141
+ return len(self.origin_samples)
142
+
143
+ def __getitem__(self, idx):
144
+ origin_video_path = self.origin_samples[idx]
145
+ vae_video_path = self.vae_samples[idx]
146
+
147
+ try:
148
+ origin_video, vae_video = self.decord_read(origin_video_path, vae_video_path)
149
+ origin_video = self.transform(origin_video) # T C H W -> T C H W
150
+ origin_video = origin_video.transpose(0, 1) # T C H W -> C T H W
151
+ vae_video = self.transform(vae_video) # T C H W -> T C H W
152
+ vae_video = vae_video.transpose(0, 1) # T C H W -> C T H W
153
+ return dict(origin_video=origin_video, vae_video=vae_video)
154
+ except Exception as e:
155
+ print(f'Error with {e}, {origin_video_path}, {vae_video_path}')
156
+ return self.__getitem__(random.randint(0, self.__len__()-1))
157
+
158
+ def decord_read(self, origin_path, vae_path):
159
+ decord_vr_origin = self.v_decoder(origin_path)
160
+ decord_vr_vae = self.v_decoder(vae_path)
161
+ total_frames = len(decord_vr_origin)
162
+ # Sampling video frames
163
+ if self.dynamic_sample:
164
+ sample_rate = random.randint(1, self.sample_rate)
165
+ else:
166
+ sample_rate = self.sample_rate
167
+ size = self.sequence_length * sample_rate
168
+ start_frame_ind, end_frame_ind = TemporalRandomCrop(total_frames, size)
169
+ # assert end_frame_ind - start_frame_ind >= self.num_frames
170
+ frame_indice = np.linspace(start_frame_ind, end_frame_ind - 1, self.sequence_length, dtype=int)
171
+
172
+ origin_video_data = decord_vr_origin.get_batch(frame_indice).asnumpy()
173
+ origin_video_data = torch.from_numpy(origin_video_data)
174
+ origin_video_data = origin_video_data.permute(0, 3, 1, 2) # (T, H, W, C) -> (T C H W)
175
+ vae_video_data = decord_vr_vae.get_batch(frame_indice).asnumpy()
176
+ vae_video_data = torch.from_numpy(vae_video_data)
177
+ vae_video_data = vae_video_data.permute(0, 3, 1, 2) # (T, H, W, C) -> (T C H W)
178
+ return origin_video_data, vae_video_data
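
A minimal sketch (not part of the commit) of wiring `VideoDataset` into a `DataLoader`; the folder path and clip settings are placeholders.

```python
from torch.utils.data import DataLoader
# from causalvideovae.model.dataset_videobase import VideoDataset  # assumed path

def build_video_loader(VideoDataset, video_folder="/path/to/videos"):
    dataset = VideoDataset(
        video_folder=video_folder,
        sequence_length=17,        # frames per clip
        resolution=256,            # center-crop size
        sample_rate=1,
        dynamic_sample=False,
    )
    loader = DataLoader(dataset, batch_size=2, shuffle=True, num_workers=4)
    batch = next(iter(loader))
    return batch["video"].shape    # (B, C, T, H, W), values in [-1, 1]
```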
causalvideovae/model/ema_model.py ADDED
@@ -0,0 +1,31 @@
1
+ class EMA:
2
+ def __init__(self, model, decay):
3
+ self.model = model
4
+ self.decay = decay
5
+ self.shadow = {}
6
+ self.backup = {}
7
+
8
+ def register(self):
9
+ for name, param in self.model.named_parameters():
10
+ if param.requires_grad:
11
+ self.shadow[name] = param.data.clone()
12
+
13
+ def update(self):
14
+ for name, param in self.model.named_parameters():
15
+ if name in self.shadow:
16
+ new_average = (1.0 - self.decay) * param.data + self.decay * self.shadow[name]
17
+ self.shadow[name] = new_average.clone()
18
+
19
+ def apply_shadow(self):
20
+ for name, param in self.model.named_parameters():
21
+ if name in self.shadow:
22
+ self.backup[name] = param.data
23
+ param.data = self.shadow[name]
24
+
25
+ def restore(self):
26
+ for name, param in self.model.named_parameters():
27
+ if name in self.shadow:
28
+ param.data = self.backup[name]
29
+ self.backup = {}
30
+
31
+
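
For reference, a minimal sketch (not part of the commit) of the intended `EMA` lifecycle: register once, update after each optimizer step, and swap the averaged weights in only for evaluation. The toy model and training loop are placeholders.

```python
import torch
import torch.nn as nn
# from causalvideovae.model.ema_model import EMA  # assumed path

def ema_demo(EMA):
    model = nn.Linear(4, 4)
    ema = EMA(model, decay=0.999)
    ema.register()                           # snapshot the initial weights

    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(10):                      # dummy training steps
        loss = model(torch.randn(8, 4)).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        ema.update()                         # shadow <- decay * shadow + (1 - decay) * param

    ema.apply_shadow()                       # evaluate with the averaged weights
    # ... run validation here ...
    ema.restore()                            # switch back to the raw training weights
    return list(ema.shadow)
```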
causalvideovae/model/losses/lpips.py ADDED
@@ -0,0 +1,120 @@
1
+ """Stripped version of https://github.com/richzhang/PerceptualSimilarity/tree/master/models"""
2
+
3
+ import torch
4
+ import torch.nn as nn
5
+ from torchvision import models
6
+ from collections import namedtuple
7
+ from ...utils.taming_download import get_ckpt_path
8
+
9
+ class LPIPS(nn.Module):
10
+ # Learned perceptual metric
11
+ def __init__(self, use_dropout=True):
12
+ super().__init__()
13
+ self.scaling_layer = ScalingLayer()
14
+ self.chns = [64, 128, 256, 512, 512] # vgg16 features
15
+ self.net = vgg16(pretrained=True, requires_grad=False)
16
+ self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
17
+ self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
18
+ self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
19
+ self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
20
+ self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
21
+ self.load_from_pretrained()
22
+ for param in self.parameters():
23
+ param.requires_grad = False
24
+
25
+ def load_from_pretrained(self, name="vgg_lpips"):
26
+ ckpt = get_ckpt_path(name, ".cache/lpips")
27
+ self.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False)
28
+ print("loaded pretrained LPIPS loss from {}".format(ckpt))
29
+
30
+ @classmethod
31
+ def from_pretrained(cls, name="vgg_lpips"):
32
+ if name != "vgg_lpips":
33
+ raise NotImplementedError
34
+ model = cls()
35
+ ckpt = get_ckpt_path(name)
36
+ model.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False)
37
+ return model
38
+
39
+ def forward(self, input, target):
40
+ in0_input, in1_input = (self.scaling_layer(input), self.scaling_layer(target))
41
+ outs0, outs1 = self.net(in0_input), self.net(in1_input)
42
+ feats0, feats1, diffs = {}, {}, {}
43
+ lins = [self.lin0, self.lin1, self.lin2, self.lin3, self.lin4]
44
+ for kk in range(len(self.chns)):
45
+ feats0[kk], feats1[kk] = normalize_tensor(outs0[kk]), normalize_tensor(outs1[kk])
46
+ diffs[kk] = (feats0[kk] - feats1[kk]) ** 2
47
+
48
+ res = [spatial_average(lins[kk].model(diffs[kk]), keepdim=True) for kk in range(len(self.chns))]
49
+ val = res[0]
50
+ for l in range(1, len(self.chns)):
51
+ val += res[l]
52
+ return val
53
+
54
+
55
+ class ScalingLayer(nn.Module):
56
+ def __init__(self):
57
+ super(ScalingLayer, self).__init__()
58
+ self.register_buffer('shift', torch.Tensor([-.030, -.088, -.188])[None, :, None, None])
59
+ self.register_buffer('scale', torch.Tensor([.458, .448, .450])[None, :, None, None])
60
+
61
+ def forward(self, inp):
62
+ return (inp - self.shift) / self.scale
63
+
64
+
65
+ class NetLinLayer(nn.Module):
66
+ """ A single linear layer which does a 1x1 conv """
67
+ def __init__(self, chn_in, chn_out=1, use_dropout=False):
68
+ super(NetLinLayer, self).__init__()
69
+ layers = [nn.Dropout(), ] if (use_dropout) else []
70
+ layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False), ]
71
+ self.model = nn.Sequential(*layers)
72
+
73
+
74
+ class vgg16(torch.nn.Module):
75
+ def __init__(self, requires_grad=False, pretrained=True):
76
+ super(vgg16, self).__init__()
77
+ vgg_pretrained_features = models.vgg16(pretrained=pretrained).features
78
+ self.slice1 = torch.nn.Sequential()
79
+ self.slice2 = torch.nn.Sequential()
80
+ self.slice3 = torch.nn.Sequential()
81
+ self.slice4 = torch.nn.Sequential()
82
+ self.slice5 = torch.nn.Sequential()
83
+ self.N_slices = 5
84
+ for x in range(4):
85
+ self.slice1.add_module(str(x), vgg_pretrained_features[x])
86
+ for x in range(4, 9):
87
+ self.slice2.add_module(str(x), vgg_pretrained_features[x])
88
+ for x in range(9, 16):
89
+ self.slice3.add_module(str(x), vgg_pretrained_features[x])
90
+ for x in range(16, 23):
91
+ self.slice4.add_module(str(x), vgg_pretrained_features[x])
92
+ for x in range(23, 30):
93
+ self.slice5.add_module(str(x), vgg_pretrained_features[x])
94
+ if not requires_grad:
95
+ for param in self.parameters():
96
+ param.requires_grad = False
97
+
98
+ def forward(self, X):
99
+ h = self.slice1(X)
100
+ h_relu1_2 = h
101
+ h = self.slice2(h)
102
+ h_relu2_2 = h
103
+ h = self.slice3(h)
104
+ h_relu3_3 = h
105
+ h = self.slice4(h)
106
+ h_relu4_3 = h
107
+ h = self.slice5(h)
108
+ h_relu5_3 = h
109
+ vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3'])
110
+ out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3)
111
+ return out
112
+
113
+
114
+ def normalize_tensor(x,eps=1e-10):
115
+ norm_factor = torch.sqrt(torch.sum(x**2,dim=1,keepdim=True))
116
+ return x/(norm_factor+eps)
117
+
118
+
119
+ def spatial_average(x, keepdim=True):
120
+ return x.mean([2,3],keepdim=keepdim)
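
A minimal sketch (not part of the commit) of calling the LPIPS loss above; it expects image batches roughly in [-1, 1], and constructing it fetches the pretrained linear weights into `.cache/lpips` on first use.

```python
import torch
# from causalvideovae.model.losses.lpips import LPIPS  # assumed path

def lpips_demo(LPIPS):
    lpips = LPIPS().eval()
    recon = torch.rand(2, 3, 256, 256) * 2 - 1    # reconstructed frames in [-1, 1]
    target = torch.rand(2, 3, 256, 256) * 2 - 1   # reference frames in [-1, 1]
    with torch.no_grad():
        dist = lpips(recon, target)               # (B, 1, 1, 1) perceptual distances
    return dist.flatten()
```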
causalvideovae/model/modules/block.py ADDED
@@ -0,0 +1,5 @@
1
+ import torch.nn as nn
2
+
3
+ class Block(nn.Module):
4
+ def __init__(self, *args, **kwargs) -> None:
5
+ super().__init__(*args, **kwargs)
causalvideovae/model/utils/video_utils.py ADDED
@@ -0,0 +1,10 @@
1
+ import torch
2
+ import numpy as np
3
+
4
+ def tensor_to_video(x):
5
+ #x = (x * 2 - 1).detach().cpu()
6
+ x = torch.clamp(x, -1, 1).detach().cpu()
7
+ x = (x + 1) / 2
8
+ x = x.permute(1, 0, 2, 3).float().numpy() # c t h w -> t c h w
9
+ x = (255 * x).astype(np.uint8)
10
+ return x
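
A small sketch (not part of the commit) showing what `tensor_to_video` produces; the clip tensor is a placeholder.

```python
import torch
# from causalvideovae.model.utils.video_utils import tensor_to_video  # assumed path

def frames_demo(tensor_to_video):
    clip = torch.randn(3, 17, 256, 256)      # (C, T, H, W), roughly in [-1, 1]
    frames = tensor_to_video(clip)           # numpy uint8 array, (T, C, H, W), values 0..255
    return frames.shape, frames.dtype
```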
scripts/eval.sh ADDED
@@ -0,0 +1,19 @@
1
+ # REAL_DATASET_DIR=/remote-home1/dataset/OpenMMLab___Kinetics-400/raw/Kinetics-400/videos_val/
2
+ SAMPLE_RATE=1
3
+ REAL_DATASET_DIR=/storage/clh/video_control/DAVIS_119pairs_test/CogVideoXI/CogVideoXI_25x720x480_50steps_prompt
4
+ NUM_FRAMES=25
5
+ RESOLUTION=256
6
+ SUBSET_SIZE=119
7
+ METRIC=ssim
8
+
9
+ python causalvideovae/eval/eval_common_metric.py \
10
+ --batch_size 1 \
11
+ --real_video_dir ${REAL_DATASET_DIR} \
12
+ --generated_video_dir /storage/clh/video_control/DAVIS_119pairs_test/gt/25x720x480 \
13
+ --device cuda:4 \
14
+ --sample_fps 24 \
15
+ --sample_rate ${SAMPLE_RATE} \
16
+ --num_frames ${NUM_FRAMES} \
17
+ --resolution ${RESOLUTION} \
18
+ --crop_size ${RESOLUTION} \
19
+ --metric ${METRIC}
scripts/nus_vae_gen_video.sh ADDED
@@ -0,0 +1,35 @@
1
+ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
2
+ export NCCL_DEBUG=INFO
3
+ export NCCL_SOCKET_IFNAME=ibs11
4
+ export NCCL_IB_DISABLE=1
5
+ REAL_DATASET_DIR=/storage/dataset/vae_eval/panda70m
6
+ EXP_NAME=test_train
7
+ SAMPLE_RATE=1
8
+ NUM_FRAMES=33
9
+ RESOLUTION=256
10
+ SUBSET_SIZE=1000
11
+ CKPT=/storage/clh/Open-Sora/OpenSora-VAE-v1.2
12
+ unset https_proxy
13
+ unset http_proxy
14
+
15
+ torchrun \
16
+ --nnodes=1 --nproc_per_node=8 \
17
+ --rdzv_backend=c10d \
18
+ --rdzv_endpoint=localhost:29504 \
19
+ --master_addr=localhost \
20
+ --master_port=29600 \
21
+ scripts/rec_nus_vae.py\
22
+ --batch_size 1 \
23
+ --real_video_dir ${REAL_DATASET_DIR} \
24
+ --generated_video_dir /storage/clh/gen/488dim8 \
25
+ --sample_fps 24 \
26
+ --sample_rate ${SAMPLE_RATE} \
27
+ --num_frames ${NUM_FRAMES} \
28
+ --resolution ${RESOLUTION} \
29
+ --crop_size ${RESOLUTION} \
30
+ --num_workers 8 \
31
+ --ckpt ${CKPT} \
32
+ --config /storage/clh/Causal-Video-VAE/opensora/video.py\
33
+ --output_origin \
34
+ --subset_size 1000 \
35
+
scripts/rec_cv_vae.py ADDED
@@ -0,0 +1,298 @@
1
+ import random
2
+ import argparse
3
+ import cv2
4
+ from tqdm import tqdm
5
+ import numpy as np
6
+ import numpy.typing as npt
7
+ import torch
8
+ import torch.distributed as dist
9
+ from torch.nn.parallel import DistributedDataParallel as DDP
10
+ from torch.utils.data import DataLoader, DistributedSampler, Subset
11
+ from decord import VideoReader, cpu
12
+ from torch.nn import functional as F
13
+ from pytorchvideo.transforms import ShortSideScale
14
+ from torchvision.transforms import Lambda, Compose
15
+ from torchvision.transforms._transforms_video import CenterCropVideo
16
+ import sys
17
+ from torch.utils.data import Dataset, DataLoader, Subset
18
+ import os
19
+ import glob
20
+ sys.path.append(".")
21
+ import torch.nn as nn
22
+ import yaml
23
+ from omegaconf import OmegaConf
24
+ from einops import rearrange
25
+ from CV_VAE.models.modeling_vae import CVVAEModel
26
+
27
+ def ddp_setup():
28
+ dist.init_process_group(backend="nccl")
29
+ torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
30
+
31
+ def array_to_video(
32
+ image_array: npt.NDArray, fps: float = 30.0, output_file: str = "output_video.mp4"
33
+ ) -> None:
34
+ height, width, channels = image_array[0].shape
35
+ fourcc = cv2.VideoWriter_fourcc(*"mp4v")
36
+ video_writer = cv2.VideoWriter(output_file, fourcc, float(fps), (width, height))
37
+
38
+ for image in image_array:
39
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
40
+ video_writer.write(image_rgb)
41
+
42
+ video_writer.release()
43
+
44
+
45
+ def custom_to_video(
46
+ x: torch.Tensor, fps: float = 2.0, output_file: str = "output_video.mp4"
47
+ ) -> None:
48
+ x = x.detach().cpu()
49
+ x = torch.clamp(x, -1, 1)
50
+ x = (x + 1) / 2
51
+ x = x.permute(1, 2, 3, 0).float().numpy()
52
+ x = (255 * x).astype(np.uint8)
53
+ array_to_video(x, fps=fps, output_file=output_file)
54
+ return
55
+
56
+
57
+ def read_video(video_path: str, num_frames: int, sample_rate: int) -> torch.Tensor:
58
+ decord_vr = VideoReader(video_path, ctx=cpu(0), num_threads=8)
59
+ total_frames = len(decord_vr)
60
+ sample_frames_len = sample_rate * num_frames
61
+
62
+ if total_frames > sample_frames_len:
63
+ s = 0
64
+ e = s + sample_frames_len
65
+ num_frames = num_frames
66
+ else:
67
+ s = 0
68
+ e = total_frames
69
+ num_frames = int(total_frames / sample_frames_len * num_frames)
70
+ print(
71
+ f"sample_frames_len {sample_frames_len}, can only sample {num_frames * sample_rate}",
72
+ video_path,
73
+ total_frames,
74
+ )
75
+
76
+ frame_id_list = np.linspace(s, e - 1, num_frames, dtype=int)
77
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
78
+ video_data = torch.from_numpy(video_data)
79
+ video_data = video_data.permute(3, 0, 1, 2) # (T, H, W, C) -> (C, T, H, W)
80
+ return video_data
81
+
82
+
83
+ class RealVideoDataset(Dataset):
84
+ video_exts = ['avi', 'mp4', 'webm']
85
+
86
+ def __init__(
87
+ self,
88
+ real_video_dir,
89
+ num_frames,
90
+ sample_rate=1,
91
+ crop_size=None,
92
+ resolution=128,
93
+ ) -> None:
94
+ super().__init__()
95
+ self.real_video_files = self._combine_without_prefix(real_video_dir)
96
+ self.num_frames = num_frames
97
+ self.sample_rate = sample_rate
98
+ self.crop_size = crop_size
99
+ self.short_size = resolution
100
+
101
+ def __len__(self):
102
+ return len(self.real_video_files)
103
+
104
+ def __getitem__(self, index):
105
+ try:
106
+ if index >= len(self):
107
+ raise IndexError
108
+ real_video_file = self.real_video_files[index]
109
+ real_video_tensor = self._load_video(real_video_file)
110
+ video_name = os.path.basename(real_video_file)
111
+ except:
112
+ if index >= len(self):
113
+ raise IndexError
114
+ real_video_file = self.real_video_files[random.randint(0, len(self) - 1)]  # random fallback; randint(1, index-1) fails when index <= 1
115
+ real_video_tensor = self._load_video(real_video_file)
116
+ video_name = os.path.basename(real_video_file)
117
+ return {'video': real_video_tensor, 'file_name': video_name }
118
+
119
+ def _load_video(self, video_path):
120
+ num_frames = self.num_frames
121
+ sample_rate = self.sample_rate
122
+ decord_vr = VideoReader(video_path, ctx=cpu(0))
123
+ total_frames = len(decord_vr)
124
+ sample_frames_len = sample_rate * num_frames
125
+ s = 0
126
+ e = s + sample_frames_len
127
+ num_frames = num_frames
128
+ """
129
+ if total_frames > sample_frames_len:
130
+ s = 0
131
+ e = s + sample_frames_len
132
+ num_frames = num_frames
133
+
134
+ else:
135
+ s = 0
136
+ e = total_frames
137
+ num_frames = int(total_frames / sample_frames_len * num_frames)
138
+ print(
139
+ f"sample_frames_len {sample_frames_len}, only can sample {num_frames * sample_rate}",
140
+ video_path,
141
+ total_frames,
142
+ )
143
+ """
144
+ frame_id_list = np.linspace(s, e - 1, num_frames, dtype=int)
145
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
146
+ video_data = torch.from_numpy(video_data)
147
+ video_data = video_data.permute(3, 0, 1, 2)
148
+ return _preprocess(
149
+ video_data, short_size=self.short_size, crop_size=self.crop_size
150
+ )
151
+
152
+ def _combine_without_prefix(self, folder_path):
153
+ samples = []
154
+ samples += sum([glob.glob(os.path.join(folder_path, '**', f'*.{ext}'), recursive=True)
155
+ for ext in self.video_exts], [])
156
+ samples.sort()
157
+ return samples
158
+
159
+ def resize(x, resolution):
160
+ height, width = x.shape[-2:]
161
+ aspect_ratio = width / height
162
+ if width <= height:
163
+ new_width = resolution
164
+ new_height = int(resolution / aspect_ratio)
165
+ else:
166
+ new_height = resolution
167
+ new_width = int(resolution * aspect_ratio)
168
+ resized_x = F.interpolate(x, size=(new_height, new_width), mode='bilinear', align_corners=True, antialias=True)
169
+ return resized_x
170
+
171
+ def _preprocess(video_data, short_size=128, crop_size=None):
172
+ transform = Compose(
173
+
174
+ [
175
+
176
+ Lambda(lambda x: ((x / 255.0) * 2 - 1)),
177
+ Lambda(lambda x: resize(x, short_size)),
178
+ (
179
+ CenterCropVideo(crop_size=crop_size)
180
+ if crop_size is not None
181
+ else Lambda(lambda x: x)
182
+ ),
183
+
184
+ ]
185
+
186
+ )
187
+ video_outputs = transform(video_data)
188
+ video_outputs = _format_video_shape(video_outputs)
189
+ return video_outputs
190
+
191
+
192
+ def _format_video_shape(video, time_compress=4, spatial_compress=8):
193
+ time = video.shape[1]
194
+ height = video.shape[2]
195
+ width = video.shape[3]
196
+ new_time = (
197
+ (time - (time - 1) % time_compress)
198
+ if (time - 1) % time_compress != 0
199
+ else time
200
+ )
201
+ new_height = (
202
+ (height - (height) % spatial_compress)
203
+ if height % spatial_compress != 0
204
+ else height
205
+ )
206
+ new_width = (
207
+ (width - (width) % spatial_compress) if width % spatial_compress != 0 else width
208
+ )
209
+ return video[:, :new_time, :new_height, :new_width]
210
+
211
+ @torch.no_grad()
212
+ def main(args: argparse.Namespace):
213
+ real_video_dir = args.real_video_dir
214
+ generated_video_dir = args.generated_video_dir
215
+ ckpt = args.ckpt
216
+ sample_rate = args.sample_rate
217
+ resolution = args.resolution
218
+ crop_size = args.crop_size
219
+ num_frames = args.num_frames
220
+ sample_rate = args.sample_rate
221
+ sample_fps = args.sample_fps
222
+ batch_size = args.batch_size
223
+ num_workers = args.num_workers
224
+ subset_size = args.subset_size
225
+
226
+ if not os.path.exists(args.generated_video_dir):
227
+ os.makedirs(os.path.join(generated_video_dir, "vae_gen/"), exist_ok=True)
228
+
229
+ data_type = torch.bfloat16
230
+
231
+ ddp_setup()
232
+ rank = int(os.environ["LOCAL_RANK"])
233
+
234
+ # ---- Load Model ----
235
+ cvvae = CVVAEModel.from_pretrained(ckpt)
236
+ print(cvvae)
237
+ cvvae = cvvae.to(rank).to(data_type)
238
+ cvvae.eval()
239
+
240
+ # ---- Load Model ----
241
+
242
+ # ---- Prepare Dataset ----
243
+ dataset = RealVideoDataset(
244
+ real_video_dir=real_video_dir,
245
+ num_frames=num_frames,
246
+ sample_rate=sample_rate,
247
+ crop_size=crop_size,
248
+ resolution=resolution,
249
+ )
250
+
251
+ if subset_size:
252
+ indices = range(subset_size)
253
+ dataset = Subset(dataset, indices=indices)
254
+ ddp_sampler = DistributedSampler(dataset)
255
+ dataloader = DataLoader(
256
+ dataset, batch_size=batch_size, sampler=ddp_sampler ,pin_memory=True, num_workers=num_workers
257
+ )
258
+ # ---- Prepare Dataset
259
+
260
+ # ---- Inference ----
261
+ for batch in tqdm(dataloader):
262
+ x, file_names = batch['video'], batch['file_name']
263
+
264
+ x = x.to(rank).to(data_type) # b c t h w
265
+ latent = cvvae.encode(x).latent_dist.sample()
266
+ video_recon = cvvae.decode(latent).sample
267
+ for idx, video in enumerate(video_recon):
268
+ output_path = os.path.join(generated_video_dir, "vae_gen/", file_names[idx])
269
+ if args.output_origin:
270
+ os.makedirs(os.path.join(generated_video_dir, "origin/"), exist_ok=True)
271
+ origin_output_path = os.path.join(generated_video_dir, "origin/", file_names[idx])
272
+ custom_to_video(
273
+ x[idx], fps=sample_fps / sample_rate, output_file=origin_output_path
274
+ )
275
+ custom_to_video(
276
+ video, fps=sample_fps / sample_rate, output_file=output_path
277
+ )
278
+ # ---- Inference ----
279
+
280
+ if __name__ == "__main__":
281
+ parser = argparse.ArgumentParser()
282
+ parser.add_argument("--real_video_dir", type=str, default="")
283
+ parser.add_argument("--generated_video_dir", type=str, default="")
284
+ parser.add_argument("--ckpt", type=str, default="")
285
+ parser.add_argument("--sample_fps", type=int, default=30)
286
+ parser.add_argument("--resolution", type=int, default=336)
287
+ parser.add_argument("--crop_size", type=int, default=None)
288
+ parser.add_argument("--num_frames", type=int, default=17)
289
+ parser.add_argument("--sample_rate", type=int, default=1)
290
+ parser.add_argument("--batch_size", type=int, default=1)
291
+ parser.add_argument("--num_workers", type=int, default=8)
292
+ parser.add_argument("--subset_size", type=int, default=None)
293
+ parser.add_argument('--output_origin', action='store_true')
294
+ parser.add_argument("--config", type=str, default="")
295
+
296
+
297
+ args = parser.parse_args()
298
+ main(args)
scripts/rec_sd2_1_vae.py ADDED
@@ -0,0 +1,302 @@
1
+ import random
2
+ import argparse
3
+ import cv2
4
+ from tqdm import tqdm
5
+ import numpy as np
6
+ import numpy.typing as npt
7
+ import torch
8
+ import torch.distributed as dist
9
+ from torch.nn.parallel import DistributedDataParallel as DDP
10
+ from torch.utils.data import DataLoader, DistributedSampler, Subset
11
+ from decord import VideoReader, cpu
12
+ from torch.nn import functional as F
13
+ from pytorchvideo.transforms import ShortSideScale
14
+ from torchvision.transforms import Lambda, Compose
15
+ from torchvision.transforms._transforms_video import CenterCropVideo
16
+ import sys
17
+ from torch.utils.data import Dataset, DataLoader, Subset
18
+ import os
19
+ import glob
20
+ sys.path.append(".")
21
+ import torch.nn as nn
22
+ import yaml
23
+ from omegaconf import OmegaConf
24
+ from einops import rearrange
25
+ from diffusers.models import AutoencoderKL
26
+
27
+ def ddp_setup():
28
+ dist.init_process_group(backend="nccl")
29
+ torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
30
+
31
+ def array_to_video(
32
+ image_array: npt.NDArray, fps: float = 30.0, output_file: str = "output_video.mp4"
33
+ ) -> None:
34
+ height, width, channels = image_array[0].shape
35
+ fourcc = cv2.VideoWriter_fourcc(*"mp4v")
36
+ video_writer = cv2.VideoWriter(output_file, fourcc, float(fps), (width, height))
37
+
38
+ for image in image_array:
39
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
40
+ video_writer.write(image_rgb)
41
+
42
+ video_writer.release()
43
+
44
+
45
+ def custom_to_video(
46
+ x: torch.Tensor, fps: float = 2.0, output_file: str = "output_video.mp4"
47
+ ) -> None:
48
+ x = x.detach().cpu()
49
+ x = torch.clamp(x, -1, 1)
50
+ x = (x + 1) / 2
51
+ x = x.permute(1, 2, 3, 0).float().numpy()
52
+ x = (255 * x).astype(np.uint8)
53
+ array_to_video(x, fps=fps, output_file=output_file)
54
+ return
55
+
56
+
57
+ def read_video(video_path: str, num_frames: int, sample_rate: int) -> torch.Tensor:
58
+ decord_vr = VideoReader(video_path, ctx=cpu(0), num_threads=8)
59
+ total_frames = len(decord_vr)
60
+ sample_frames_len = sample_rate * num_frames
61
+
62
+ if total_frames > sample_frames_len:
63
+ s = 0
64
+ e = s + sample_frames_len
65
+ num_frames = num_frames
66
+ else:
67
+ s = 0
68
+ e = total_frames
69
+ num_frames = int(total_frames / sample_frames_len * num_frames)
70
+ print(
71
+ f"sample_frames_len {sample_frames_len}, can only sample {num_frames * sample_rate}",
72
+ video_path,
73
+ total_frames,
74
+ )
75
+
76
+ frame_id_list = np.linspace(s, e - 1, num_frames, dtype=int)
77
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
78
+ video_data = torch.from_numpy(video_data)
79
+ video_data = video_data.permute(3, 0, 1, 2) # (T, H, W, C) -> (C, T, H, W)
80
+ return video_data
81
+
82
+
83
+ class RealVideoDataset(Dataset):
84
+ video_exts = ['avi', 'mp4', 'webm']
85
+
86
+ def __init__(
87
+ self,
88
+ real_video_dir,
89
+ num_frames,
90
+ sample_rate=1,
91
+ crop_size=None,
92
+ resolution=128,
93
+ ) -> None:
94
+ super().__init__()
95
+ self.real_video_files = self._combine_without_prefix(real_video_dir)
96
+ self.num_frames = num_frames
97
+ self.sample_rate = sample_rate
98
+ self.crop_size = crop_size
99
+ self.short_size = resolution
100
+
101
+ def __len__(self):
102
+ return len(self.real_video_files)
103
+
104
+ def __getitem__(self, index):
105
+ try:
106
+ if index >= len(self):
107
+ raise IndexError
108
+ real_video_file = self.real_video_files[index]
109
+ real_video_tensor = self._load_video(real_video_file)
110
+ video_name = os.path.basename(real_video_file)
111
+ except:
112
+ if index >= len(self):
113
+ raise IndexError
114
+ real_video_file = self.real_video_files[random.randint(0, len(self) - 1)]  # random fallback; randint(1, index-1) fails when index <= 1
115
+ real_video_tensor = self._load_video(real_video_file)
116
+ video_name = os.path.basename(real_video_file)
117
+ return {'video': real_video_tensor, 'file_name': video_name }
118
+
119
+ def _load_video(self, video_path):
120
+ num_frames = self.num_frames
121
+ sample_rate = self.sample_rate
122
+ decord_vr = VideoReader(video_path, ctx=cpu(0))
123
+ total_frames = len(decord_vr)
124
+ sample_frames_len = sample_rate * num_frames
125
+ s = 0
126
+ e = s + sample_frames_len
127
+ num_frames = num_frames
128
+ """
129
+ if total_frames > sample_frames_len:
130
+ s = 0
131
+ e = s + sample_frames_len
132
+ num_frames = num_frames
133
+
134
+ else:
135
+ s = 0
136
+ e = total_frames
137
+ num_frames = int(total_frames / sample_frames_len * num_frames)
138
+ print(
139
+ f"sample_frames_len {sample_frames_len}, only can sample {num_frames * sample_rate}",
140
+ video_path,
141
+ total_frames,
142
+ )
143
+ """
144
+ frame_id_list = np.linspace(s, e - 1, num_frames, dtype=int)
145
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
146
+ video_data = torch.from_numpy(video_data)
147
+ video_data = video_data.permute(3, 0, 1, 2)
148
+ return _preprocess(
149
+ video_data, short_size=self.short_size, crop_size=self.crop_size
150
+ )
151
+
152
+ def _combine_without_prefix(self, folder_path):
153
+ samples = []
154
+ samples += sum([glob.glob(os.path.join(folder_path, '**', f'*.{ext}'), recursive=True)
155
+ for ext in self.video_exts], [])
156
+ samples.sort()
157
+ return samples
158
+
159
+ def resize(x, resolution):
160
+ height, width = x.shape[-2:]
161
+ aspect_ratio = width / height
162
+ if width <= height:
163
+ new_width = resolution
164
+ new_height = int(resolution / aspect_ratio)
165
+ else:
166
+ new_height = resolution
167
+ new_width = int(resolution * aspect_ratio)
168
+ resized_x = F.interpolate(x, size=(new_height, new_width), mode='bilinear', align_corners=True, antialias=True)
169
+ return resized_x
170
+
171
+ def _preprocess(video_data, short_size=128, crop_size=None):
172
+ transform = Compose(
173
+
174
+ [
175
+
176
+ Lambda(lambda x: ((x / 255.0) * 2 - 1)),
177
+ Lambda(lambda x: resize(x, short_size)),
178
+ (
179
+ CenterCropVideo(crop_size=crop_size)
180
+ if crop_size is not None
181
+ else Lambda(lambda x: x)
182
+ ),
183
+
184
+ ]
185
+
186
+ )
187
+ video_outputs = transform(video_data)
188
+ video_outputs = _format_video_shape(video_outputs)
189
+ return video_outputs
190
+
191
+
192
+ def _format_video_shape(video, time_compress=4, spatial_compress=8):
193
+ time = video.shape[1]
194
+ height = video.shape[2]
195
+ width = video.shape[3]
196
+ new_time = (
197
+ (time - (time - 1) % time_compress)
198
+ if (time - 1) % time_compress != 0
199
+ else time
200
+ )
201
+ new_height = (
202
+ (height - (height) % spatial_compress)
203
+ if height % spatial_compress != 0
204
+ else height
205
+ )
206
+ new_width = (
207
+ (width - (width) % spatial_compress) if width % spatial_compress != 0 else width
208
+ )
209
+ return video[:, :new_time, :new_height, :new_width]
210
+
211
+ @torch.no_grad()
212
+ def main(args: argparse.Namespace):
213
+ real_video_dir = args.real_video_dir
214
+ generated_video_dir = args.generated_video_dir
215
+ ckpt = args.ckpt
216
+ sample_rate = args.sample_rate
217
+ resolution = args.resolution
218
+ crop_size = args.crop_size
219
+ num_frames = args.num_frames
220
+ sample_rate = args.sample_rate
221
+ sample_fps = args.sample_fps
222
+ batch_size = args.batch_size
223
+ num_workers = args.num_workers
224
+ subset_size = args.subset_size
225
+
226
+ if not os.path.exists(args.generated_video_dir):
227
+ os.makedirs(os.path.join(generated_video_dir, "vae_gen/"), exist_ok=True)
228
+
229
+ data_type = torch.bfloat16
230
+
231
+ ddp_setup()
232
+ rank = int(os.environ["LOCAL_RANK"])
233
+
234
+ # ---- Load Model ----
235
+ sd2_1_vae = AutoencoderKL.from_pretrained(ckpt)
236
+ print(sd2_1_vae)
237
+ sd2_1_vae = sd2_1_vae.to(rank).to(data_type)
238
+ sd2_1_vae.eval()
239
+
240
+ # ---- Load Model ----
241
+
242
+ # ---- Prepare Dataset ----
243
+ dataset = RealVideoDataset(
244
+ real_video_dir=real_video_dir,
245
+ num_frames=num_frames,
246
+ sample_rate=sample_rate,
247
+ crop_size=crop_size,
248
+ resolution=resolution,
249
+ )
250
+
251
+ if subset_size:
252
+ indices = range(subset_size)
253
+ dataset = Subset(dataset, indices=indices)
254
+ ddp_sampler = DistributedSampler(dataset)
255
+ dataloader = DataLoader(
256
+ dataset, batch_size=batch_size, sampler=ddp_sampler ,pin_memory=True, num_workers=num_workers
257
+ )
258
+ # ---- Prepare Dataset
259
+
260
+ # ---- Inference ----
261
+ for batch in tqdm(dataloader):
262
+ x, file_names = batch['video'], batch['file_name']
263
+
264
+ x = x.to(rank).to(data_type) # b c t h w
265
+ t = x.shape[2]
266
+ x = rearrange(x, "b c t h w -> (b t) c h w", t=t)
267
+ latents = sd2_1_vae.encode(x)['latent_dist'].sample()
268
+ video_recon = sd2_1_vae.decode(latents.to(data_type))['sample']
269
+ video_recon = rearrange(video_recon, "(b t) c h w -> b c t h w", t=t)
270
+ x = rearrange(x, "(b t) c h w -> b c t h w", t=t)
271
+ for idx, video in enumerate(video_recon):
272
+ output_path = os.path.join(generated_video_dir, "vae_gen/", file_names[idx])
273
+ if args.output_origin:
274
+ os.makedirs(os.path.join(generated_video_dir, "origin/"), exist_ok=True)
275
+ origin_output_path = os.path.join(generated_video_dir, "origin/", file_names[idx])
276
+ custom_to_video(
277
+ x[idx], fps=sample_fps / sample_rate, output_file=origin_output_path
278
+ )
279
+ custom_to_video(
280
+ video, fps=sample_fps / sample_rate, output_file=output_path
281
+ )
282
+ # ---- Inference ----
283
+
284
+ if __name__ == "__main__":
285
+ parser = argparse.ArgumentParser()
286
+ parser.add_argument("--real_video_dir", type=str, default="")
287
+ parser.add_argument("--generated_video_dir", type=str, default="")
288
+ parser.add_argument("--ckpt", type=str, default="")
289
+ parser.add_argument("--sample_fps", type=int, default=30)
290
+ parser.add_argument("--resolution", type=int, default=336)
291
+ parser.add_argument("--crop_size", type=int, default=None)
292
+ parser.add_argument("--num_frames", type=int, default=17)
293
+ parser.add_argument("--sample_rate", type=int, default=1)
294
+ parser.add_argument("--batch_size", type=int, default=1)
295
+ parser.add_argument("--num_workers", type=int, default=8)
296
+ parser.add_argument("--subset_size", type=int, default=None)
297
+ parser.add_argument('--output_origin', action='store_true')
298
+ parser.add_argument("--config", type=str, default="")
299
+
300
+
301
+ args = parser.parse_args()
302
+ main(args)
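
The script above applies a 2D image VAE to videos by folding time into the batch dimension with `rearrange` and unfolding it afterwards. A self-contained sketch of that pattern (not part of the commit), with identity functions standing in for the VAE's encode/decode calls:

```python
import torch
from einops import rearrange

def framewise_roundtrip(video, encode=lambda z: z, decode=lambda z: z):
    """Fold (B, C, T, H, W) into (B*T, C, H, W) for a frame-wise model, then unfold."""
    t = video.shape[2]
    frames = rearrange(video, "b c t h w -> (b t) c h w")
    recon = decode(encode(frames))           # stand-ins for vae.encode / vae.decode
    return rearrange(recon, "(b t) c h w -> b c t h w", t=t)

# framewise_roundtrip(torch.randn(1, 3, 17, 64, 64)).shape == (1, 3, 17, 64, 64)
```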
scripts/reconstruction.sh ADDED
@@ -0,0 +1,11 @@
1
+ CUDA_VISIBLE_DEVICES=0 python examples/rec_imvi_vae.py \
2
+ --ae_path /remote-home1/lzj/results/latest_488_reset/test \
3
+ --video_path visual/134445.mp4 \
4
+ --rec_path rec488.mp4 \
5
+ --device cuda \
6
+ --sample_rate 1 \
7
+ --num_frames 65 \
8
+ --resolution 512 \
9
+ --crop_size 512 \
10
+ --ae CausalVAEModel_4x8x8 \
11
+ --enable_tiling
scripts/refine_video.py ADDED
@@ -0,0 +1,299 @@
1
+ import random
2
+ import argparse
3
+ import cv2
4
+ from tqdm import tqdm
5
+ import numpy as np
6
+ import numpy.typing as npt
7
+ import torch
8
+ from decord import VideoReader, cpu
9
+ from torch.nn import functional as F
10
+ from pytorchvideo.transforms import ShortSideScale
11
+ from torchvision.transforms import Lambda, Compose
12
+ from torchvision.transforms._transforms_video import CenterCropVideo
13
+ import sys
14
+ from torch.utils.data import Dataset, DataLoader, Subset
15
+ import os
16
+ import glob
17
+ sys.path.append(".")
18
+ from causalvideovae.model import Refiner
19
+ import torch.nn as nn
20
+
21
+
22
+ def array_to_video(
23
+ image_array: npt.NDArray, fps: float = 30.0, output_file: str = "output_video.mp4"
24
+ ) -> None:
25
+ height, width, channels = image_array[0].shape
26
+ fourcc = cv2.VideoWriter_fourcc(*"mp4v")
27
+ video_writer = cv2.VideoWriter(output_file, fourcc, float(fps), (width, height))
28
+
29
+ for image in image_array:
30
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
31
+ video_writer.write(image_rgb)
32
+
33
+ video_writer.release()
34
+
35
+
36
+ def custom_to_video(
37
+ x: torch.Tensor, fps: float = 2.0, output_file: str = "output_video.mp4"
38
+ ) -> None:
39
+ x = x.detach().cpu()
40
+ x = torch.clamp(x, -1, 1)
41
+ x = (x + 1) / 2
42
+ x = x.permute(1, 2, 3, 0).float().numpy()
43
+ x = (255 * x).astype(np.uint8)
44
+ array_to_video(x, fps=fps, output_file=output_file)
45
+ return
46
+
47
+
48
+ def read_video(video_path: str, num_frames: int, sample_rate: int) -> torch.Tensor:
49
+ decord_vr = VideoReader(video_path, ctx=cpu(0), num_threads=8)
50
+ total_frames = len(decord_vr)
51
+ sample_frames_len = sample_rate * num_frames
52
+
53
+ if total_frames > sample_frames_len:
54
+ s = 0
55
+ e = s + sample_frames_len
56
+ num_frames = num_frames
57
+ else:
58
+ s = 0
59
+ e = total_frames
60
+ num_frames = int(total_frames / sample_frames_len * num_frames)
61
+ print(
62
+ f"sample_frames_len {sample_frames_len}, only can sample {num_frames * sample_rate}",
63
+ video_path,
64
+ total_frames,
65
+ )
66
+
67
+ frame_id_list = np.linspace(s, e - 1, num_frames, dtype=int)
68
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
69
+ video_data = torch.from_numpy(video_data)
70
+ video_data = video_data.permute(3, 0, 1, 2) # (T, H, W, C) -> (C, T, H, W)
71
+ return video_data
72
+
73
+
74
+ class RealVideoDataset(Dataset):
75
+ video_exts = ['avi', 'mp4', 'webm']
76
+
77
+ def __init__(
78
+ self,
79
+ real_video_dir,
80
+ num_frames,
81
+ sample_rate=1,
82
+ crop_size=None,
83
+ resolution=128,
84
+ ) -> None:
85
+ super().__init__()
86
+ self.real_video_files = self._combine_without_prefix(real_video_dir)
87
+ self.num_frames = num_frames
88
+ self.sample_rate = sample_rate
89
+ self.crop_size = crop_size
90
+ self.short_size = resolution
91
+
92
+ def __len__(self):
93
+ return len(self.real_video_files)
94
+
95
+ def __getitem__(self, index):
96
+ try:
97
+ if index >= len(self):
98
+ raise IndexError
99
+ real_video_file = self.real_video_files[index]
100
+ real_video_tensor = self._load_video(real_video_file)
101
+ video_name = os.path.basename(real_video_file)
102
+ except Exception:
103
+ if index >= len(self):
104
+ raise IndexError
105
+ real_video_file = self.real_video_files[random.randint(0, len(self.real_video_files) - 1)]
106
+ real_video_tensor = self._load_video(real_video_file)
107
+ video_name = os.path.basename(real_video_file)
108
+ return {'video': real_video_tensor, 'file_name': video_name }
109
+
110
+ def _load_video(self, video_path):
111
+ num_frames = self.num_frames
112
+ sample_rate = self.sample_rate
113
+ decord_vr = VideoReader(video_path, ctx=cpu(0))
114
+ total_frames = len(decord_vr)
115
+ sample_frames_len = sample_rate * num_frames
116
+ s = 0
117
+ e = s + sample_frames_len
118
+ num_frames = num_frames
119
+ """
120
+ if total_frames > sample_frames_len:
121
+ s = 0
122
+ e = s + sample_frames_len
123
+ num_frames = num_frames
124
+
125
+ else:
126
+ s = 0
127
+ e = total_frames
128
+ num_frames = int(total_frames / sample_frames_len * num_frames)
129
+ print(
130
+ f"sample_frames_len {sample_frames_len}, only can sample {num_frames * sample_rate}",
131
+ video_path,
132
+ total_frames,
133
+ )
134
+ """
135
+ frame_id_list = np.linspace(s, e - 1, num_frames, dtype=int)
136
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
137
+ video_data = torch.from_numpy(video_data)
138
+ video_data = video_data.permute(3, 0, 1, 2)
139
+ return _preprocess(
140
+ video_data, short_size=self.short_size, crop_size=self.crop_size
141
+ )
142
+
143
+ def _combine_without_prefix(self, folder_path):
144
+ samples = []
145
+ samples += sum([glob.glob(os.path.join(folder_path, '**', f'*.{ext}'), recursive=True)
146
+ for ext in self.video_exts], [])
147
+ samples.sort()
148
+ return samples
149
+
150
+ def resize(x, resolution):
151
+ height, width = x.shape[-2:]
152
+ aspect_ratio = width / height
153
+ if width <= height:
154
+ new_width = resolution
155
+ new_height = int(resolution / aspect_ratio)
156
+ else:
157
+ new_height = resolution
158
+ new_width = int(resolution * aspect_ratio)
159
+ resized_x = F.interpolate(x, size=(new_height, new_width), mode='bilinear', align_corners=True, antialias=True)
160
+ return resized_x
161
+
162
+ def _preprocess(video_data, short_size=128, crop_size=None):
163
+ transform = Compose(
164
+ [
165
+ Lambda(lambda x: ((x / 255.0) * 2 - 1)),
166
+ Lambda(lambda x: resize(x, short_size)),
167
+ (
168
+ CenterCropVideo(crop_size=crop_size)
169
+ if crop_size is not None
170
+ else Lambda(lambda x: x)
171
+ ),
172
+ ]
173
+ )
174
+ video_outputs = transform(video_data)
175
+ video_outputs = _format_video_shape(video_outputs)
176
+ return video_outputs
177
+
178
+
179
+ def _format_video_shape(video, time_compress=4, spatial_compress=8):
180
+ time = video.shape[1]
181
+ height = video.shape[2]
182
+ width = video.shape[3]
183
+ new_time = (
184
+ (time - (time - 1) % time_compress)
185
+ if (time - 1) % time_compress != 0
186
+ else time
187
+ )
188
+ new_height = (
189
+ (height - (height) % spatial_compress)
190
+ if height % spatial_compress != 0
191
+ else height
192
+ )
193
+ new_width = (
194
+ (width - (width) % spatial_compress) if width % spatial_compress != 0 else width
195
+ )
196
+ return video[:, :new_time, :new_height, :new_width]
197
+
198
+
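With the default `time_compress=4` and `spatial_compress=8`, `_format_video_shape` trims a clip to the largest prefix whose length satisfies `(T - 1) % 4 == 0` and whose spatial sides are multiples of 8, so the tensor divides cleanly through the VAE's temporal and spatial strides. A small self-check of that rule (a sketch, not part of the repo):

```python
# Equivalent cropping rule to _format_video_shape above.
def vae_friendly_shape(t: int, h: int, w: int, tc: int = 4, sc: int = 8):
    return t - (t - 1) % tc, h - h % sc, w - w % sc

assert vae_friendly_shape(30, 500, 720) == (29, 496, 720)
assert vae_friendly_shape(65, 512, 512) == (65, 512, 512)
```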
199
+ @torch.no_grad()
200
+ def main(args: argparse.Namespace):
201
+ real_video_dir = args.real_video_dir
202
+ generated_video_dir = args.generated_video_dir
203
+ ckpt = args.ckpt
204
+ sample_rate = args.sample_rate
205
+ resolution = args.resolution
206
+ crop_size = args.crop_size
207
+ num_frames = args.num_frames
208
+ sample_rate = args.sample_rate
209
+ device = args.device
210
+ sample_fps = args.sample_fps
211
+ batch_size = args.batch_size
212
+ num_workers = args.num_workers
213
+ subset_size = args.subset_size
214
+
215
+ if not os.path.exists(args.generated_video_dir):
216
+ os.makedirs(args.generated_video_dir, exist_ok=True)
217
+
218
+ data_type = torch.bfloat16
219
+
220
+ # ---- Load Model ----
221
+ device = args.device
222
+ refiner = Refiner.from_pretrained(args.ckpt)
223
+ print(refiner)
224
+ refiner = refiner.to(device).to(data_type)
225
+ refiner.eval()
226
+ # ---- Load Model ----
227
+
228
+ # ---- Prepare Dataset ----
229
+ dataset = RealVideoDataset(
230
+ real_video_dir=real_video_dir,
231
+ num_frames=num_frames,
232
+ sample_rate=sample_rate,
233
+ crop_size=crop_size,
234
+ resolution=resolution,
235
+ )
236
+
237
+ if subset_size:
238
+ indices = range(subset_size)
239
+ dataset = Subset(dataset, indices=indices)
240
+
241
+ dataloader = DataLoader(
242
+ dataset, batch_size=batch_size, pin_memory=True, num_workers=num_workers
243
+ )
244
+ # ---- Prepare Dataset
245
+
246
+ # ---- Inference ----
247
+ for batch in tqdm(dataloader):
248
+ x, file_names = batch['video'], batch['file_name']
249
+
250
+ x = x.to(device=device, dtype=data_type) # b c t h w
251
+ T, H, W = (x.shape[2], x.shape[3], x.shape[4])
252
+ chunk_T, chunk_H, chunk_W = (24,256,256)
253
+
254
+ video_recon = torch.zeros_like(x)  # keep the input's device and dtype for the chunked writes
255
+ for t in range(0, T, chunk_T):
256
+ for h in range(0, H, chunk_H):
257
+ for w in range(0, W, chunk_W):
258
+ # Compute the boundaries of the current chunk
259
+ t_end = min(t + chunk_T, T)
260
+ h_end = min(h + chunk_H, H)
261
+ w_end = min(w + chunk_W, W)
262
+
263
+ # Refine this chunk and write it back into the output tensor
264
+ video_recon[:,:,t:t_end, h:h_end, w:w_end] = refiner(x[:,:,t:t_end, h:h_end, w:w_end])
265
+
266
+ for idx, video in enumerate(video_recon):
267
+ output_path = os.path.join(generated_video_dir, file_names[idx])
268
+ if args.output_origin:
269
+ os.makedirs(os.path.join(generated_video_dir, "origin/"), exist_ok=True)
270
+ origin_output_path = os.path.join(generated_video_dir, "origin/", file_names[idx])
271
+ custom_to_video(
272
+ x[idx], fps=sample_fps / sample_rate, output_file=origin_output_path
273
+ )
274
+ custom_to_video(
275
+ video, fps=sample_fps / sample_rate, output_file=output_path
276
+ )
277
+ # ---- Inference ----
278
+
279
+ if __name__ == "__main__":
280
+ parser = argparse.ArgumentParser()
281
+ parser.add_argument("--real_video_dir", type=str, default="")
282
+ parser.add_argument("--generated_video_dir", type=str, default="")
283
+ parser.add_argument("--ckpt", type=str, default="")
284
+ parser.add_argument("--sample_fps", type=int, default=30)
285
+ parser.add_argument("--resolution", type=int, default=336)
286
+ parser.add_argument("--crop_size", type=int, default=None)
287
+ parser.add_argument("--num_frames", type=int, default=17)
288
+ parser.add_argument("--sample_rate", type=int, default=1)
289
+ parser.add_argument("--batch_size", type=int, default=1)
290
+ parser.add_argument("--num_workers", type=int, default=8)
291
+ parser.add_argument("--subset_size", type=int, default=None)
292
+ parser.add_argument("--tile_overlap_factor", type=float, default=0.25)
293
+ parser.add_argument('--enable_tiling', action='store_true')
294
+ parser.add_argument('--output_origin', action='store_true')
295
+ parser.add_argument("--device", type=str, default="cuda")
296
+
297
+ args = parser.parse_args()
298
+ main(args)
299
+
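The inference loop in `refine_video.py` applies the refiner in fixed spatio-temporal chunks of 24 frames × 256 × 256 pixels so that long, high-resolution clips fit in GPU memory; adjacent chunks are not overlapped or blended. A condensed sketch of that tiling pattern, assuming any shape-preserving callable `refine_fn`:

```python
import torch

@torch.no_grad()
def refine_in_chunks(x: torch.Tensor, refine_fn, chunk=(24, 256, 256)) -> torch.Tensor:
    # x: (b, c, t, h, w); refine_fn must preserve the shape of each chunk.
    out = torch.zeros_like(x)
    ct, ch, cw = chunk
    _, _, T, H, W = x.shape
    for t0 in range(0, T, ct):
        for h0 in range(0, H, ch):
            for w0 in range(0, W, cw):
                t1, h1, w1 = min(t0 + ct, T), min(h0 + ch, H), min(w0 + cw, W)
                out[:, :, t0:t1, h0:h1, w0:w1] = refine_fn(x[:, :, t0:t1, h0:h1, w0:w1])
    return out
```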
scripts/vae_demo.py ADDED
@@ -0,0 +1,337 @@
1
+ import random
2
+ import argparse
3
+ import cv2
4
+ from tqdm import tqdm
5
+ import numpy as np
6
+ import numpy.typing as npt
7
+ import torch
8
+ import torch.distributed as dist
9
+ from torch.nn.parallel import DistributedDataParallel as DDP
10
+ from torch.utils.data import DataLoader, DistributedSampler, Subset
11
+ from decord import VideoReader, cpu
12
+ from torch.nn import functional as F
13
+ from pytorchvideo.transforms import ShortSideScale
14
+ from torchvision.transforms import Lambda, Compose
15
+ from torchvision.transforms._transforms_video import CenterCropVideo
16
+ import sys
17
+ from torch.utils.data import Dataset, DataLoader, Subset
18
+ import os
19
+ import glob
20
+ sys.path.append(".")
21
+ from causalvideovae.model import CausalVAEModel
22
+ from diffusers.models import AutoencoderKL
23
+ from diffusers.models import AutoencoderKLTemporalDecoder
24
+ from CV_VAE.models.modeling_vae import CVVAEModel
25
+ from opensora.registry import MODELS, build_module
26
+ from opensora.utils.config_utils import parse_configs
27
+ from opensora.registry import MODELS, build_module
28
+ from opensora.utils.config_utils import parse_configs
29
+ import gradio as gr
30
+ from functools import partial
31
+ from einops import rearrange
32
+ import torchvision.transforms as transforms
33
+ from PIL import Image
34
+ import time
35
+ import imageio
36
+
37
+ # Create a transform that center-crops images to the target size
38
+ transform = transforms.Compose([
39
+ transforms.CenterCrop(512),
40
+ ])
41
+
42
+ def array_to_video(
43
+ image_array: npt.NDArray, fps: float = 30.0, output_file: str = "output_video.mp4"
44
+ ) -> None:
45
+ height, width, channels = image_array[0].shape
46
+ imageio.mimwrite(output_file, image_array, fps=fps, quality=6,)
47
+ """
48
+ fourcc = cv2.VideoWriter_fourcc(*"mp4v")
49
+ video_writer = cv2.VideoWriter(output_file, fourcc, float(fps), (width, height))
50
+ for image in image_array:
51
+ image_rgb = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
52
+ video_writer.write(image_rgb)
53
+
54
+ video_writer.release()
55
+ """
56
+
57
+ def custom_to_video(
58
+ x: torch.Tensor, fps: float = 2.0, output_file: str = "output_video.mp4"
59
+ ) -> None:
60
+ x = x.detach().cpu()
61
+ x = torch.clamp(x, -1, 1)
62
+ x = (x + 1) / 2
63
+ x = x.permute(1, 2, 3, 0).float().numpy()
64
+ x = (255 * x).astype(np.uint8)
65
+ array_to_video(x, fps=fps, output_file=output_file)
66
+ return
67
+
68
+ def _format_video_shape(video, time_compress=4, spatial_compress=8):
69
+ time = video.shape[1]
70
+ height = video.shape[2]
71
+ width = video.shape[3]
72
+ new_time = (
73
+ (time - (time - 1) % time_compress)
74
+ if (time - 1) % time_compress != 0
75
+ else time
76
+ )
77
+ new_height = (
78
+ (height - (height) % spatial_compress)
79
+ if height % spatial_compress != 0
80
+ else height
81
+ )
82
+ new_width = (
83
+ (width - (width) % spatial_compress) if width % spatial_compress != 0 else width
84
+ )
85
+ return video[:, :new_time, :new_height, :new_width]
86
+
87
+
88
+ @torch.no_grad()
89
+ def rec_nusvae(input_file):
90
+
91
+ nus_vae_path = '/storage/clh/Causal-Video-VAE/gradio/nus_vae_temp/video.mp4'
92
+
93
+ if input_file.endswith(('.jpg', '.jpeg', '.png', '.gif', '.bmp')):
94
+ # Handle image input
95
+ image = cv2.imread(input_file, cv2.IMREAD_COLOR)
96
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
97
+ fps=10
98
+ total_frames = 1
99
+ video_data = torch.from_numpy(image)
100
+ video_data = video_data.unsqueeze(0)
101
+ video_data = video_data.permute(3, 0, 1, 2)
102
+ video_data = (video_data / 255.0) * 2 - 1
103
+ video_data = _format_video_shape(video_data)
104
+ video_data = video_data.unsqueeze(0)
105
+ video_data = video_data.to(dtype=data_type) # b c t h w
106
+
107
+ elif input_file.endswith(('.mp4', '.avi', '.mov', '.wmv')):
108
+ # Handle video input
109
+ decord_vr = VideoReader(input_file, ctx=cpu(0))
110
+ total_frames = len(decord_vr)
111
+ video = cv2.VideoCapture(input_file)
112
+ fps = video.get(cv2.CAP_PROP_FPS)
113
+ frame_id_list = np.linspace(0, total_frames-1, total_frames, dtype=int)
114
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
115
+ video_data = torch.from_numpy(video_data)
116
+ video_data = video_data.permute(3, 0, 1, 2)
117
+ video_data = (video_data / 255.0) * 2 - 1
118
+ video_data = _format_video_shape(video_data)
119
+ video_data = video_data.unsqueeze(0)
120
+ video_data = video_data.to(dtype=data_type) # b c t h w
121
+
122
+ video_data = video_data.to(device4)
123
+ latents, posterior, x_z = nus_vae.encode(video_data)
124
+ video_recon, x_z_rec = nus_vae.decode(latents, num_frames=video_data.size(2))
125
+ custom_to_video(video_recon[0], fps=fps, output_file=nus_vae_path)
126
+ time.sleep(15)
127
+
128
+ return nus_vae_path
129
+
130
+ @torch.no_grad()
131
+ def rec_cvvae(input_file):
132
+
133
+ cv_vae_path = '/storage/clh/Causal-Video-VAE/gradio/cv_vae_temp/video.mp4'
134
+
135
+ if input_file.endswith(('.jpg', '.jpeg', '.png', '.gif', '.bmp')):
136
+ # Handle image input
137
+ image = cv2.imread(input_file, cv2.IMREAD_COLOR)
138
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
139
+ fps=10
140
+ total_frames = 1
141
+ video_data = torch.from_numpy(image)
142
+ video_data = video_data.unsqueeze(0)
143
+ video_data = video_data.permute(3, 0, 1, 2)
144
+ video_data = (video_data / 255.0) * 2 - 1
145
+ video_data = _format_video_shape(video_data)
146
+ video_data = video_data.unsqueeze(0)
147
+ video_data = video_data.to(dtype=data_type) # b c t h w
148
+
149
+ elif input_file.endswith(('.mp4', '.avi', '.mov', '.wmv')):
150
+ # Handle video input
151
+ decord_vr = VideoReader(input_file, ctx=cpu(0))
152
+ total_frames = len(decord_vr)
153
+ video = cv2.VideoCapture(input_file)
154
+ fps = video.get(cv2.CAP_PROP_FPS)
155
+ frame_id_list = np.linspace(0, total_frames-1, total_frames, dtype=int)
156
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
157
+ video_data = torch.from_numpy(video_data)
158
+ video_data = video_data.permute(3, 0, 1, 2)
159
+ video_data = (video_data / 255.0) * 2 - 1
160
+ video_data = _format_video_shape(video_data)
161
+ video_data = video_data.unsqueeze(0)
162
+ video_data = video_data.to(dtype=data_type) # b c t h w
163
+
164
+ video_data = video_data.to(device3)
165
+ latent = cvvae.encode(video_data).latent_dist.sample()
166
+ video_recon = cvvae.decode(latent).sample
167
+ custom_to_video(video_recon[0], fps=fps, output_file=cv_vae_path)
168
+ time.sleep(10)
169
+ return cv_vae_path
170
+
171
+ @torch.no_grad()
172
+ def rec_our12(input_file):
173
+
174
+ our_vae_path = '/storage/clh/Causal-Video-VAE/gradio/our_temp/video.mp4'
175
+ if input_file.endswith(('.jpg', '.jpeg', '.png', '.gif', '.bmp')):
176
+ # Handle image input
177
+ image = cv2.imread(input_file, cv2.IMREAD_COLOR)
178
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
179
+ fps=10
180
+ total_frames = 1
181
+ video_data = torch.from_numpy(image)
182
+ video_data = video_data.unsqueeze(0)
183
+ video_data = video_data.permute(3, 0, 1, 2)
184
+ video_data = (video_data / 255.0) * 2 - 1
185
+ video_data = _format_video_shape(video_data)
186
+ video_data = video_data.unsqueeze(0)
187
+ video_data = video_data.to(dtype=data_type) # b c t h w
188
+
189
+ elif input_file.endswith(('.mp4', '.avi', '.mov', '.wmv')):
190
+ # Handle video input
191
+ decord_vr = VideoReader(input_file, ctx=cpu(0))
192
+ total_frames = len(decord_vr)
193
+ video = cv2.VideoCapture(input_file)
194
+ fps = video.get(cv2.CAP_PROP_FPS)
195
+ frame_id_list = np.linspace(0, total_frames-1, total_frames, dtype=int)
196
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
197
+ video_data = torch.from_numpy(video_data)
198
+ video_data = video_data.permute(3, 0, 1, 2)
199
+ video_data = (video_data / 255.0) * 2 - 1
200
+ video_data = _format_video_shape(video_data)
201
+ video_data = video_data.unsqueeze(0)
202
+ video_data = video_data.to(dtype=data_type) # b c t h w
203
+
204
+ ## Output of our VAE
205
+ input_data = video_data.clone()
206
+ input_data = input_data.to(device0)
207
+ latents = vqvae.encode(input_data).sample().to(data_type)
208
+ video_recon = vqvae.decode(latents)
209
+ custom_to_video(video_recon[0], fps=fps, output_file=our_vae_path)
210
+
211
+ return our_vae_path
212
+
213
+ @torch.no_grad()
214
+ def rec_new(input_file):
215
+
216
+ our_vae_path = '/storage/clh/Causal-Video-VAE/gradio/new_temp/video.mp4'
217
+ if input_file.endswith(('.jpg', '.jpeg', '.png', '.gif', '.bmp')):
218
+ # Handle image input
219
+ image = cv2.imread(input_file, cv2.IMREAD_COLOR)
220
+ image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
221
+ fps=10
222
+ total_frames = 1
223
+ video_data = torch.from_numpy(image)
224
+ video_data = video_data.unsqueeze(0)
225
+ video_data = video_data.permute(3, 0, 1, 2)
226
+ video_data = (video_data / 255.0) * 2 - 1
227
+ video_data = _format_video_shape(video_data)
228
+ video_data = video_data.unsqueeze(0)
229
+ video_data = video_data.to(dtype=data_type) # b c t h w
230
+
231
+ elif input_file.endswith(('.mp4', '.avi', '.mov', '.wmv')):
232
+ # Handle video input
233
+ decord_vr = VideoReader(input_file, ctx=cpu(0))
234
+ total_frames = len(decord_vr)
235
+ video = cv2.VideoCapture(input_file)
236
+ fps = video.get(cv2.CAP_PROP_FPS)
237
+ frame_id_list = np.linspace(0, total_frames-1, total_frames, dtype=int)
238
+ video_data = decord_vr.get_batch(frame_id_list).asnumpy()
239
+ video_data = torch.from_numpy(video_data)
240
+ video_data = video_data.permute(3, 0, 1, 2)
241
+ video_data = (video_data / 255.0) * 2 - 1
242
+ video_data = _format_video_shape(video_data)
243
+ video_data = video_data.unsqueeze(0)
244
+ video_data = video_data.to(dtype=data_type) # b c t h w
245
+
246
+ ## Output of our VAE
247
+ input_data = video_data.clone()
248
+ input_data = input_data.to(device0)
249
+ latents = newvae.encode(input_data).sample().to(data_type)
250
+ video_recon = newvae.decode(latents)
251
+ custom_to_video(video_recon[0], fps=fps, output_file=our_vae_path)
252
+
253
+ return our_vae_path
254
+
255
+ @torch.no_grad()
256
+ def show_origin(input_file):
257
+ return input_file
258
+
259
+ @torch.no_grad()
260
+ def main(args: argparse.Namespace):
261
+
262
+ # Build the Gradio output interface
263
+
264
+ with gr.Blocks() as demo:
265
+ with gr.Row():
266
+ input_interface = gr.components.File(label="Upload a file (image or video)")
267
+ with gr.Row():
268
+ output_video1 = gr.Video(label="Original video or image")
269
+ output_video2 = gr.Video(label="Output of our 3D VAE")
270
+ with gr.Row():
271
+ show_origin_button = gr.components.Button("Show the original video or image")
272
+ show_origin_button.click(fn=show_origin, inputs=input_interface, outputs=output_video1)
273
+ our12_button = gr.components.Button("Reconstruct with our 3D VAE")
274
+ our12_button.click(fn=rec_our12, inputs=input_interface, outputs=output_video2)
275
+ with gr.Row():
276
+ output_video3 = gr.Video(label="CV-VAE output video or image")
277
+ output_video4 = gr.Video(label="Open-Sora VAE output video or image")
278
+ with gr.Row():
279
+ cvvae_button = gr.components.Button("Reconstruct with CV-VAE")
280
+ cvvae_button.click(fn=rec_cvvae, inputs=input_interface, outputs=output_video3)
281
+ nusvae_button = gr.components.Button("Reconstruct with the Open-Sora VAE")
282
+ nusvae_button.click(fn=rec_nusvae, inputs=input_interface, outputs=output_video4)
283
+ """
284
+ with gr.Row():
285
+ output_video5 = gr.Video(label="Our latest internal VAE")
286
+ with gr.Row():
287
+ new_button = gr.components.Button("Reconstruct with the new VAE")
288
+ new_button.click(fn=rec_new, inputs=input_interface, outputs=output_video5)
289
+ """
290
+
291
+
292
+ demo.launch(server_name="0.0.0.0", server_port=11904)
293
+
294
+
295
+
296
+ if __name__ == "__main__":
297
+ parser = argparse.ArgumentParser()
298
+
299
+ parser.add_argument("--ckpt", type=str, default="")
300
+ parser.add_argument("--sample_fps", type=int, default=30)
301
+ parser.add_argument("--tile_overlap_factor", type=float, default=0.125)
302
+ parser.add_argument('--enable_tiling', action='store_true')
303
+ parser.add_argument("--device", type=str, default="cuda")
304
+ parser.add_argument("--config", type=str, default="cuda")
305
+ args = parser.parse_args()
306
+ device = args.device
307
+ data_type = torch.bfloat16
308
+ device0 = torch.device('cuda:2')
309
+ ckpt = '/storage/clh/models/488dim8_layernorm_nearst'
310
+ vqvae = CausalVAEModel.from_pretrained(ckpt)
311
+ if args.enable_tiling:
312
+ vqvae.enable_tiling()
313
+ vqvae.tile_overlap_factor = args.tile_overlap_factor
314
+ vqvae = vqvae.to(data_type).to(device0)
315
+ vqvae.eval()
316
+
317
+ device3 = torch.device('cuda:3')
318
+ ckpt = '/storage/clh/CV-VAE/vae3d'
319
+ cvvae = CVVAEModel.from_pretrained(ckpt)
320
+ cvvae = cvvae.to(device3).to(data_type)
321
+ cvvae.eval()
322
+ device4 = torch.device('cuda:4')
323
+ cfg = parse_configs(args, training=False)
324
+ nus_vae = build_module(cfg.model, MODELS)
325
+ nus_vae = nus_vae.to(device4).to(data_type)
326
+ nus_vae.eval()
327
+ """
328
+ device5 = torch.device('cuda:5')
329
+ ckpt = '/storage/clh/models/488dim8'
330
+ newvae = CausalVAEModel.from_pretrained(ckpt)
331
+ if args.enable_tiling:
332
+ newvae.enable_tiling()
333
+ newvae.tile_overlap_factor = args.tile_overlap_factor
334
+ newvae = vqvae.to(data_type).to(device0)
335
+ newvae.eval()
336
+ """
337
+ main(args)
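`vae_demo.py` is a Gradio comparison demo: each `rec_*` function pins one VAE to its own GPU, runs an encode/decode round trip on the uploaded image or video, writes the result to a fixed path, and returns that path to a `gr.Video` component. A minimal sketch of the wiring for a single model, with `reconstruct_fn` as a hypothetical placeholder:

```python
import gradio as gr

def reconstruct_fn(path: str) -> str:
    # Placeholder: run a VAE round trip on `path` and return the output video path.
    return path

with gr.Blocks() as demo:
    inp = gr.File(label="Upload a file (image or video)")
    out = gr.Video(label="Reconstruction")
    gr.Button("Reconstruct").click(fn=reconstruct_fn, inputs=inp, outputs=out)

demo.launch(server_name="0.0.0.0", server_port=11904)
```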
scripts/vqgan_gen_video.sh ADDED
@@ -0,0 +1,32 @@
1
+ export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
2
+ export NCCL_DEBUG=INFO
3
+ export NCCL_SOCKET_IFNAME=ibs11
4
+ export NCCL_IB_DISABLE=1
5
+ REAL_DATASET_DIR=/remote-home1/clh/dataset/panda70m_val
6
+ EXP_NAME=test_train
7
+ SAMPLE_RATE=1
8
+ NUM_FRAMES=33
9
+ RESOLUTION=256
10
+ SUBSET_SIZE=100
11
+ CKPT=/remote-home1/clh/taming-transformers/logs/vqgan_gumbel_f8/checkpoints/last.ckpt
12
+ CONFIG=/remote-home1/clh/taming-transformers/logs/vqgan_gumbel_f8/configs/model.yaml
13
+
14
+ torchrun \
15
+ --nnodes=1 --nproc_per_node=8 \
16
+ --rdzv_backend=c10d \
17
+ --rdzv_endpoint=localhost:29501 \
18
+ --master_addr=localhost \
19
+ --master_port=29600 \
20
+ scripts/rec_vqgan_vae.py \
21
+ --batch_size 8 \
22
+ --real_video_dir ${REAL_DATASET_DIR} \
23
+ --generated_video_dir /remote-home1/clh/gen/VQGAN/panda70m \
24
+ --sample_fps 24 \
25
+ --sample_rate ${SAMPLE_RATE} \
26
+ --num_frames ${NUM_FRAMES} \
27
+ --resolution ${RESOLUTION} \
28
+ --crop_size ${RESOLUTION} \
29
+ --num_workers 8 \
30
+ --ckpt ${CKPT} \
31
+ --config ${CONFIG} \
32
+ --output_origin \
train_ddp_refiner.py ADDED
@@ -0,0 +1,595 @@
1
+ import os
2
+ import torch
3
+ import torch.distributed as dist
4
+ from torch.nn.parallel import DistributedDataParallel as DDP
5
+ from torch.utils.data import DataLoader, DistributedSampler, Subset
6
+ import argparse
7
+ import logging
8
+ from colorlog import ColoredFormatter
9
+ import tqdm
10
+ from itertools import chain
11
+ import wandb
12
+ import random
13
+ import numpy as np
14
+ from pathlib import Path
15
+ from einops import rearrange
16
+ from causalvideovae.model import Refiner, EMA, CausalVAEModel
17
+ from causalvideovae.utils.utils import RealVideoDataset
18
+ from causalvideovae.model.dataset_videobase import VideoDataset
19
+ from causalvideovae.model.utils.module_utils import resolve_str_to_obj
20
+ from causalvideovae.model.utils.video_utils import tensor_to_video
21
+ import time
22
+ try:
23
+ import lpips
24
+ except ImportError:
25
+ raise Exception("The `lpips` package is required for validation.")
26
+
27
+ def set_random_seed(seed):
28
+ random.seed(seed)
29
+ np.random.seed(seed)
30
+ torch.manual_seed(seed)
31
+ torch.cuda.manual_seed(seed)
32
+ torch.cuda.manual_seed_all(seed)
33
+
34
+
35
+ def ddp_setup():
36
+ dist.init_process_group(backend="nccl")
37
+ torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
38
+
39
+
40
+ def setup_logger(rank):
41
+ logger = logging.getLogger()
42
+ logger.setLevel(logging.INFO)
43
+ formatter = ColoredFormatter(
44
+ f"[rank{rank}] %(log_color)s%(asctime)s - %(levelname)s - %(message)s",
45
+ datefmt="%Y-%m-%d %H:%M:%S",
46
+ log_colors={
47
+ "DEBUG": "cyan",
48
+ "INFO": "green",
49
+ "WARNING": "yellow",
50
+ "ERROR": "red",
51
+ "CRITICAL": "bold_red",
52
+ },
53
+ reset=True,
54
+ style="%",
55
+ )
56
+ stream_handler = logging.StreamHandler()
57
+ stream_handler.setLevel(logging.DEBUG)
58
+ stream_handler.setFormatter(formatter)
59
+
60
+ if not logger.handlers:
61
+ logger.addHandler(stream_handler)
62
+
63
+ return logger
64
+
65
+
66
+ def check_unused_params(model):
67
+ unused_params = []
68
+ for name, param in model.named_parameters():
69
+ if param.grad is None:
70
+ unused_params.append(name)
71
+ return unused_params
72
+
73
+
74
+ def set_requires_grad_optimizer(optimizer, requires_grad):
75
+ for param_group in optimizer.param_groups:
76
+ for param in param_group["params"]:
77
+ param.requires_grad = requires_grad
78
+
79
+
80
+ def total_params(model):
81
+ total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
82
+ total_params_in_millions = total_params / 1e6
83
+ return int(total_params_in_millions)
84
+
85
+
86
+ def get_exp_name(args):
87
+ return f"{args.exp_name}-lr{args.lr:.2e}-bs{args.batch_size}-rs{args.resolution}-sr{args.sample_rate}-fr{args.num_frames}"
88
+
89
+ def set_train(modules):
90
+ for module in modules:
91
+ module.train()
92
+
93
+ def set_eval(modules):
94
+ for module in modules:
95
+ module.eval()
96
+
97
+ def set_modules_requires_grad(modules, requires_grad):
98
+ for module in modules:
99
+ module.requires_grad_(requires_grad)
100
+
101
+ def save_checkpoint(
102
+ epoch,
103
+ batch_idx,
104
+ optimizer_state,
105
+ state_dict,
106
+ scaler_state,
107
+ checkpoint_dir,
108
+ filename="checkpoint.ckpt",
109
+ ema_state_dict={}
110
+ ):
111
+ filepath = checkpoint_dir / Path(filename)
112
+ torch.save(
113
+ {
114
+ "epoch": epoch,
115
+ "batch_idx": batch_idx,
116
+ "optimizer_state": optimizer_state,
117
+ "state_dict": state_dict,
118
+ "ema_state_dict": ema_state_dict,
119
+ "scaler_state": scaler_state,
120
+ },
121
+ filepath,
122
+ )
123
+ return filepath
124
+
125
+
126
+ def valid(rank, model, vae, val_dataloader, precision, args):
127
+ if args.eval_lpips:
128
+ lpips_model = lpips.LPIPS(net='alex', spatial=True)
129
+ lpips_model.to(rank)
130
+ lpips_model = DDP(lpips_model, device_ids=[rank])
131
+ lpips_model.requires_grad_(False)
132
+ lpips_model.eval()
133
+
134
+ bar = None
135
+ if rank == 0:
136
+ bar = tqdm.tqdm(total=len(val_dataloader), desc="Validation...")
137
+
138
+ psnr_list = []
139
+ lpips_list = []
140
+ video_log = []
141
+ num_video_log = args.eval_num_video_log
142
+
143
+ with torch.no_grad():
144
+ for batch_idx, batch in enumerate(val_dataloader):
145
+ inputs = batch['video'].to(rank)
146
+ with torch.cuda.amp.autocast(dtype=precision):
147
+ latents = vae.encode(inputs).sample()
148
+ video_recon = vae.decode(latents)
149
+ refines = model(video_recon)
150
+
151
+ # Upload videos
152
+ if rank == 0:
153
+ for i in range(len(refines)):
154
+ if num_video_log <= 0:
155
+ break
156
+ refine_video = tensor_to_video(refines[i])
157
+ video_log.append(refine_video)
158
+ num_video_log -= 1
159
+
160
+ inputs = rearrange(inputs, "b c t h w -> (b t) c h w").contiguous()
161
+ refines = rearrange(refines, "b c t h w -> (b t) c h w").contiguous()
162
+
163
+ # Calculate PSNR
164
+ mse = torch.mean(torch.square(inputs - refines), dim=(1,2,3))
165
+ psnr = 20 * torch.log10(1 / torch.sqrt(mse))
166
+ psnr = psnr.mean().detach().cpu().item()
167
+
168
+ # Calculate LPIPS
169
+ if args.eval_lpips:
170
+ lpips_score = lpips_model.forward(inputs, refines).mean().detach().cpu().item()
171
+ lpips_list.append(lpips_score)
172
+
173
+ psnr_list.append(psnr)
174
+ if rank == 0:
175
+ bar.update()
176
+ # Release GPU memory
177
+ torch.cuda.empty_cache()
178
+ return psnr_list, lpips_list, video_log
179
+
180
+ def gather_valid_result(psnr_list, lpips_list, video_log_list, rank, world_size):
181
+ gathered_psnr_list = [None for _ in range(world_size)]
182
+ gathered_lpips_list = [None for _ in range(world_size)]
183
+ gathered_video_logs = [None for _ in range(world_size)]
184
+
185
+ dist.all_gather_object(gathered_psnr_list, psnr_list)
186
+ dist.all_gather_object(gathered_lpips_list, lpips_list)
187
+ dist.all_gather_object(gathered_video_logs, video_log_list)
188
+ return np.array(gathered_psnr_list).mean(), np.array(gathered_lpips_list).mean(), list(chain(*gathered_video_logs))
189
+
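`valid` computes PSNR as `20 * log10(1 / sqrt(mse))`, i.e. with a peak signal of 1.0, while the inputs are normalized to [-1, 1] (a peak-to-peak range of 2.0); LPIPS is averaged the same way, and both lists are then all-gathered across ranks. A range-parameterized PSNR helper, sketched for reference:

```python
import torch

def psnr(x: torch.Tensor, y: torch.Tensor, data_range: float = 2.0) -> torch.Tensor:
    # PSNR = 20*log10(data_range) - 10*log10(MSE), averaged over the batch.
    # data_range=1.0 reproduces the computation in `valid` above; 2.0 matches [-1, 1] inputs.
    mse = torch.mean((x - y) ** 2, dim=(1, 2, 3))
    return (20 * torch.log10(torch.tensor(data_range)) - 10 * torch.log10(mse)).mean()
```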
190
+ def train(args):
191
+ # Setup logger
192
+ ddp_setup()
193
+ rank = int(os.environ["LOCAL_RANK"])
194
+ logger = setup_logger(rank)
195
+
196
+ # Init
197
+ ckpt_dir = Path(args.ckpt_dir) / Path(get_exp_name(args))
198
+ if rank == 0:
199
+ try:
200
+ ckpt_dir.mkdir(exist_ok=False, parents=True)
201
+ except:
202
+ logger.warning(f"`{ckpt_dir}` exists!")
203
+ time.sleep(5)
204
+
205
+ logger.warning("Connecting to WANDB...")
206
+ wandb.init(
207
+ project=os.environ.get("WANDB_PROJECT", "causalvideovae"),
208
+ config=args,
209
+ name=get_exp_name(args)
210
+ )
211
+ dist.barrier()
212
+
213
+ # Load generator model
214
+ if args.pretrained_model_name_or_path is not None:
215
+ if rank == 0:
216
+ logger.warning(
217
+ f"You are loading a checkpoint from `{args.pretrained_model_name_or_path}`."
218
+ )
219
+ model = Refiner.from_pretrained(
220
+ args.pretrained_model_name_or_path, ignore_mismatched_sizes=False
221
+ )
222
+ elif args.model_config is not None:
223
+ if rank == 0:
224
+ logger.warning(f"Model will be inited randomly.")
225
+ model = Refiner.from_config(args.model_config)
226
+ else:
227
+ raise Exception(
228
+ "You should set either `--pretrained_model_name_or_path` or `--model_config`"
229
+ )
230
+
231
+ # Load discriminator model
232
+ disc_cls = resolve_str_to_obj(args.disc_cls, append=False)
233
+ logger.warning(f"disc_class: {args.disc_cls} perceptual_weight: {args.perceptual_weight} loss_type: {args.loss_type}")
234
+ disc = disc_cls(
235
+ disc_start=args.disc_start,
236
+ disc_weight=args.disc_weight,
237
+ logvar_init=args.logvar_init,
238
+ perceptual_weight=args.perceptual_weight,
239
+ loss_type=args.loss_type
240
+ )
241
+
242
+ # DDP
243
+ model = model.to(rank)
244
+ vae = CausalVAEModel.from_pretrained(args.vae_path, ignore_mismatched_sizes=False)
245
+ vae.requires_grad_(False)
246
+ vae = vae.to(rank).to(torch.bfloat16)
247
+ model = DDP(
248
+ model, device_ids=[rank], find_unused_parameters=args.find_unused_parameters
249
+ )
250
+ disc = disc.to(rank)
251
+ disc = DDP(
252
+ disc, device_ids=[rank], find_unused_parameters=args.find_unused_parameters
253
+ )
254
+
255
+ dataset = VideoDataset(
256
+ args.video_path,
257
+ sequence_length=args.num_frames,
258
+ resolution=args.resolution,
259
+ sample_rate=args.sample_rate,
260
+ dynamic_sample=args.dynamic_sample,
261
+ )
262
+ ddp_sampler = DistributedSampler(dataset)
263
+ dataloader = DataLoader(
264
+ dataset, batch_size=args.batch_size, sampler=ddp_sampler, pin_memory=True, num_workers=args.dataset_num_worker
265
+ )
266
+
267
+ val_dataset = RealVideoDataset(
268
+ real_video_dir=args.eval_video_path,
269
+ num_frames=args.eval_num_frames,
270
+ sample_rate=args.eval_sample_rate,
271
+ crop_size=args.eval_resolution,
272
+ resolution=args.eval_resolution,
273
+ )
274
+ indices = range(args.eval_subset_size)
275
+ val_dataset = Subset(val_dataset, indices=indices)
276
+ val_sampler = DistributedSampler(val_dataset)
277
+ val_dataloader = DataLoader(val_dataset, batch_size=args.eval_batch_size, sampler=val_sampler, pin_memory=True)
278
+
279
+
280
+
281
+ # Optimizer
282
+ modules_to_train = [module for module in model.module.get_decoder()]
283
+ if not args.freeze_encoder:
284
+ modules_to_train += [module for module in model.module.get_encoder()]
285
+ else:
286
+ for module in model.module.get_encoder():
287
+ module.eval()
288
+ module.requires_grad_(False)
289
+ logger.warning("Encoder is freezed!")
290
+
291
+ parameters_to_train = []
292
+ for module in modules_to_train:
293
+ parameters_to_train += module.parameters()
294
+
295
+ gen_optimizer = torch.optim.Adam(parameters_to_train, lr=args.lr)
296
+ disc_optimizer = torch.optim.Adam(
297
+ disc.module.discriminator.parameters(), lr=args.lr
298
+ )
299
+
300
+ # AMP scaler
301
+ scaler = torch.cuda.amp.GradScaler()
302
+ precision = torch.bfloat16
303
+ if args.mix_precision == "fp16":
304
+ precision = torch.float16
305
+ elif args.mix_precision == "fp32":
306
+ precision = torch.float32
307
+
308
+ # Load from checkpoint
309
+ start_epoch = 0
310
+ start_batch_idx = 0
311
+ if args.resume_from_checkpoint:
312
+ if not os.path.isfile(args.resume_from_checkpoint):
313
+ raise Exception(
314
+ f"Make sure `{args.resume_from_checkpoint}` is a ckpt file."
315
+ )
316
+ checkpoint = torch.load(args.resume_from_checkpoint, map_location="cpu")
317
+
318
+ if "ema_state_dict" in checkpoint and len(checkpoint['ema_state_dict']) > 0 and os.environ.get("NOT_USE_EMA_MODEL", 0) == 0:
319
+ logger.info("Load from EMA state dict! If you want to load from original state dict, you should set NOT_USE_EMA_MODEL=1.")
320
+ sd = checkpoint["ema_state_dict"]
321
+ sd = {key.replace("module.", ""): value for key, value in sd.items()}
322
+ model.module.load_state_dict(sd, strict=True)
323
+ else:
324
+ if "gen_model" in sd["state_dict"]:
325
+ sd = sd["state_dict"]["gen_model"]
326
+ else:
327
+ sd = sd["state_dict"]
328
+ model.module.load_state_dict(sd)
329
+ disc.module.load_state_dict(checkpoint["state_dict"]["dics_model"], strict=False)
330
+ if not args.not_resume_training_process:
331
+ scaler.load_state_dict(checkpoint["scaler_state"])
332
+ gen_optimizer.load_state_dict(checkpoint["optimizer_state"]["gen_optimizer"])
333
+ disc_optimizer.load_state_dict(checkpoint["optimizer_state"]["disc_optimizer"])
334
+ start_epoch = checkpoint["epoch"]
335
+ start_batch_idx = checkpoint.get("batch_idx", 0)
336
+ logger.info(
337
+ f"Checkpoint loaded from {args.resume_from_checkpoint}, starting from epoch {start_epoch} batch {start_batch_idx}"
338
+ )
339
+ else:
340
+ logger.warning(
341
+ f"Checkpoint loaded from {args.resume_from_checkpoint}, starting from epoch {start_epoch} batch {start_batch_idx}. But training process is not resumed."
342
+ )
343
+
344
+ if args.ema:
345
+ logger.warning(f"Start with EMA. EMA decay = {args.ema_decay}.")
346
+ ema = EMA(model, args.ema_decay)
347
+ ema.register()
348
+
349
+ # Training loop
350
+ logger.info("Prepared!")
351
+ dist.barrier()
352
+ if rank == 0:
353
+ logger.info(f"=== Model Params ===")
354
+ logger.info(f"Generator:\t\t{total_params(model.module)}M")
355
+ logger.info(f"\t- Encoder:\t{total_params(model.module.encoder):d}M")
356
+ logger.info(f"\t- Decoder:\t{total_params(model.module.decoder):d}M")
357
+ logger.info(f"Discriminator:\t{total_params(disc.module):d}M")
358
+ logger.info(f"===========")
359
+ logger.info(f"Precision is set to: {args.mix_precision}!")
360
+ logger.info("Start training!")
361
+
362
+ # Training Bar
363
+ bar_desc = ""
364
+ bar = None
365
+ if rank == 0:
366
+ max_steps = (
367
+ args.epochs * len(dataloader) if args.max_steps is None else args.max_steps
368
+ )
369
+ bar = tqdm.tqdm(total=max_steps, desc=bar_desc.format(current_epoch=0, loss=0))
370
+ bar_desc = "Epoch: {current_epoch}, Loss: {loss}"
371
+ logger.warning("Training Details: ")
372
+ logger.warning(f" Max steps: {max_steps}")
373
+ logger.warning(f" Dataset Samples: {len(dataloader)}")
374
+ logger.warning(
375
+ f" Total Batch Size: {args.batch_size} * {os.environ['WORLD_SIZE']}"
376
+ )
377
+ dist.barrier()
378
+
379
+ # Training Loop
380
+ num_epochs = args.epochs
381
+ current_step = 1
382
+
383
+ def update_bar(bar):
384
+ if rank == 0:
385
+ bar.desc = bar_desc.format(current_epoch=epoch, loss=f"-")
386
+ bar.update()
387
+
388
+ for epoch in range(num_epochs):
389
+ set_train(modules_to_train)
390
+ ddp_sampler.set_epoch(epoch) # Shuffle data at every epoch
391
+ for batch_idx, batch in enumerate(dataloader):
392
+
393
+ if epoch <= start_epoch and batch_idx < start_batch_idx:
394
+ update_bar(bar)
395
+ current_step += 1
396
+ continue
397
+
398
+ inputs = batch["video"].to(rank)
399
+ with torch.no_grad():
400
+ with torch.cuda.amp.autocast(dtype=precision):
401
+ latents = vae.encode(inputs).sample()
402
+ video_recon = vae.decode(latents)
403
+ if (
404
+ current_step % 2 == 1
405
+ and current_step >= disc.module.discriminator_iter_start
406
+ ):
407
+ set_modules_requires_grad(modules_to_train, False)
408
+ step_gen = False
409
+ step_dis = True
410
+ else:
411
+ set_modules_requires_grad(modules_to_train, True)
412
+ step_gen = True
413
+ step_dis = False
414
+
415
+ assert (
416
+ step_gen or step_dis
417
+ ), "You should backward either Gen or Dis in a step."
418
+
419
+ with torch.cuda.amp.autocast(dtype=precision):
420
+ outputs = model(video_recon)
421
+
422
+ # Generator Step
423
+ if step_gen:
424
+ with torch.cuda.amp.autocast(dtype=precision):
425
+ g_loss, g_log = disc(
426
+ inputs,
427
+ outputs,
428
+ optimizer_idx=0,
429
+ global_step=current_step,
430
+ last_layer=model.module.get_last_layer(),
431
+ split="train",
432
+ )
433
+ gen_optimizer.zero_grad()
434
+ scaler.scale(g_loss).backward()
435
+ scaler.step(gen_optimizer)
436
+ scaler.update()
437
+ if args.ema:
438
+ ema.update()
439
+ if rank == 0 and current_step % args.log_steps == 0:
440
+ wandb.log({"train/generator_loss": g_loss.item()}, step=current_step)
441
+
442
+ # Discriminator Step
443
+ if step_dis:
444
+ with torch.cuda.amp.autocast(dtype=precision):
445
+ d_loss, d_log = disc(
446
+ inputs,
447
+ outputs,
448
+ optimizer_idx=1,
449
+ global_step=current_step,
450
+ last_layer=None,
451
+ split="train",
452
+ )
453
+ disc_optimizer.zero_grad()
454
+ scaler.scale(d_loss).backward()
455
+ scaler.step(disc_optimizer)
456
+ scaler.update()
457
+ if rank == 0 and current_step % args.log_steps == 0:
458
+ wandb.log({"train/discriminator_loss": d_loss.item()}, step=current_step)
459
+
460
+ def valid_model(model, vae, name=""):
461
+ set_eval(modules_to_train)
462
+ psnr_list, lpips_list, video_log = valid(rank, model, vae, val_dataloader, precision, args)
463
+ valid_psnr, valid_lpips, valid_video_log = gather_valid_result(psnr_list, lpips_list, video_log, rank, dist.get_world_size())
464
+ if rank == 0:
465
+ name = "_" + name if name != "" else name
466
+ wandb.log({f"val{name}/recon": wandb.Video(np.array(valid_video_log), fps=10)}, step=current_step)
467
+ wandb.log({f"val{name}/psnr": valid_psnr}, step=current_step)
468
+ wandb.log({f"val{name}/lpips": valid_lpips}, step=current_step)
469
+ logger.info(f"{name} Validation done.")
470
+
471
+ if current_step % args.eval_steps == 0 or current_step == 1:
472
+ if rank == 0:
473
+ logger.info("Starting validation...")
474
+ valid_model(model, vae)
475
+ if args.ema:
476
+ ema.apply_shadow()
477
+ valid_model(model, vae, "ema")
478
+ ema.restore()
479
+
480
+ # Checkpoint
481
+ if current_step % args.save_ckpt_step == 0 and rank == 0:
482
+ file_path = save_checkpoint(
483
+ epoch,
484
+ batch_idx,
485
+ {
486
+ "gen_optimizer": gen_optimizer.state_dict(),
487
+ "disc_optimizer": disc_optimizer.state_dict(),
488
+ },
489
+ {
490
+ "gen_model": model.module.state_dict(),
491
+ "dics_model": disc.module.state_dict(),
492
+ },
493
+ scaler.state_dict(),
494
+ ckpt_dir,
495
+ f"checkpoint-{current_step}.ckpt",
496
+ ema_state_dict=ema.shadow if args.ema else {}
497
+ )
498
+ logger.info(f"Checkpoint has been saved to `{file_path}`.")
499
+
500
+ # Update step
501
+ update_bar(bar)
502
+ current_step += 1
503
+
504
+ dist.destroy_process_group()
505
+
506
+
507
+ def main():
508
+ parser = argparse.ArgumentParser(description="Distributed Training")
509
+ # Exp setting
510
+ parser.add_argument(
511
+ "--exp_name", type=str, default="test", help="number of epochs to train"
512
+ )
513
+ parser.add_argument("--seed", type=int, default=1234, help="seed")
514
+ # Training setting
515
+ parser.add_argument(
516
+ "--epochs", type=int, default=10, help="number of epochs to train"
517
+ )
518
+ parser.add_argument(
519
+ "--max_steps", type=int, default=None, help="number of epochs to train"
520
+ )
521
+ parser.add_argument("--save_ckpt_step", type=int, default=1000, help="")
522
+ parser.add_argument("--ckpt_dir", type=str, default="./results/", help="")
523
+ parser.add_argument(
524
+ "--batch_size", type=int, default=1, help="batch size for training"
525
+ )
526
+ parser.add_argument("--lr", type=float, default=1e-5, help="learning rate")
527
+ parser.add_argument("--log_steps", type=int, default=5, help="log steps")
528
+ parser.add_argument("--freeze_encoder", action="store_true", help="")
529
+
530
+ # Data
531
+ parser.add_argument("--video_path", type=str, default=None, help="")
532
+ parser.add_argument("--num_frames", type=int, default=17, help="")
533
+ parser.add_argument("--resolution", type=int, default=512, help="")
534
+ parser.add_argument("--sample_rate", type=int, default=1, help="")
535
+ parser.add_argument("--dynamic_sample", type=bool, default=False, help="")
536
+ # Generator model
537
+ parser.add_argument("--find_unused_parameters", action="store_true", help="")
538
+ parser.add_argument(
539
+ "--pretrained_model_name_or_path", type=str, default=None, help=""
540
+ )
541
+ parser.add_argument(
542
+ "--vae_path", type=str, default=None, help=""
543
+ )
544
+ parser.add_argument("--resume_from_checkpoint", type=str, default=None, help="")
545
+ parser.add_argument("--not_resume_training_process", action="store_true", help="")
546
+ parser.add_argument("--model_config", type=str, default=None, help="")
547
+ parser.add_argument(
548
+ "--mix_precision",
549
+ type=str,
550
+ default="bf16",
551
+ choices=["fp16", "bf16", "fp32"],
552
+ help="precision for training",
553
+ )
554
+
555
+ # Discriminator Model
556
+ parser.add_argument("--load_disc_from_checkpoint", type=str, default=None, help="")
557
+ parser.add_argument(
558
+ "--disc_cls",
559
+ type=str,
560
+ default="causalvideovae.model.losses.LPIPSWithDiscriminator3D",
561
+ help="",
562
+ )
563
+ parser.add_argument("--disc_start", type=int, default=5, help="")
564
+ parser.add_argument("--disc_weight", type=float, default=0.5, help="")
565
+ parser.add_argument("--kl_weight", type=float, default=1e-06, help="")
566
+ parser.add_argument("--perceptual_weight", type=float, default=1.0, help="")
567
+ parser.add_argument("--loss_type", type=str, default="l1", help="")
568
+ parser.add_argument("--logvar_init", type=float, default=0.0, help="")
569
+
570
+ # Validation
571
+ parser.add_argument("--eval_steps", type=int, default=1000, help="")
572
+ parser.add_argument("--eval_video_path", type=str, default=None, help="")
573
+ parser.add_argument("--eval_num_frames", type=int, default=17, help="")
574
+ parser.add_argument("--eval_resolution", type=int, default=256, help="")
575
+ parser.add_argument("--eval_sample_rate", type=int, default=1, help="")
576
+ parser.add_argument("--eval_batch_size", type=int, default=8, help="")
577
+ parser.add_argument("--eval_subset_size", type=int, default=50, help="")
578
+ parser.add_argument("--eval_num_video_log", type=int, default=2, help="")
579
+ parser.add_argument("--eval_lpips", action="store_true", help="")
580
+
581
+ # Dataset
582
+ parser.add_argument("--dataset_num_worker", type=int, default=16, help="")
583
+
584
+ # EMA
585
+ parser.add_argument("--ema", action="store_true", help="")
586
+ parser.add_argument("--ema_decay", type=float, default=0.999, help="")
587
+
588
+ args = parser.parse_args()
589
+
590
+ set_random_seed(args.seed)
591
+ train(args)
592
+
593
+
594
+ if __name__ == "__main__":
595
+ main()
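The training loop in `train_ddp_refiner.py` freezes the pretrained `CausalVAEModel`, feeds its reconstructions to the `Refiner`, and alternates generator and discriminator updates every other step once `disc_start` is reached; when `--ema` is set, a shadow copy of the refiner is maintained and validated alongside the live weights. The `EMA` helper itself is imported from `causalvideovae.model` and is not part of this diff; a standard exponential-moving-average sketch that matches how it is used (`register()` once, `update()` after each generator step, `apply_shadow()`/`restore()` around validation) could look like:

```python
import torch

class SimpleEMA:
    """Hypothetical stand-in for causalvideovae.model.EMA; the real class may differ."""

    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.model, self.decay = model, decay
        self.shadow, self.backup = {}, {}

    def register(self):
        # Snapshot trainable parameters as the initial shadow weights.
        self.shadow = {n: p.detach().clone()
                       for n, p in self.model.named_parameters() if p.requires_grad}

    @torch.no_grad()
    def update(self):
        # shadow = decay * shadow + (1 - decay) * param
        for n, p in self.model.named_parameters():
            if n in self.shadow:
                self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)

    def apply_shadow(self):
        # Swap shadow weights in for validation, keeping a backup of the live weights.
        self.backup = {n: p.detach().clone()
                       for n, p in self.model.named_parameters() if n in self.shadow}
        for n, p in self.model.named_parameters():
            if n in self.shadow:
                p.data.copy_(self.shadow[n])

    def restore(self):
        # Restore the live training weights after validation.
        for n, p in self.model.named_parameters():
            if n in self.backup:
                p.data.copy_(self.backup[n])
        self.backup = {}
```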