datnguyentien204 committed
Commit 3894c45
Parent: 48f6835

Upload 38 files

LICENSE ADDED
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright 2021 Vislab/Ambarella

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
README.md ADDED
@@ -0,0 +1,156 @@
# CoMoGAN: Continuous Model-guided Image-to-Image Translation
Official repository.

## Paper

<img src="teaser.png" alt="CoMoGAN" width="500" />
<img src="sample_result.gif" alt="CoMoGAN" width="500" />

CoMoGAN: continuous model-guided image-to-image translation \[[arXiv](http://arxiv.org/abs/2103.06879)\] | \[[supp](http://team.inria.fr/rits/files/2021/05/2021-comogan_supp.pdf)\] | \[[teaser](https://www.youtube.com/watch?v=x9fpJNPZgws)\] \
[Fabio Pizzati](https://fabvio.github.io/), [Pietro Cerri](https://scholar.google.fr/citations?user=MEidJHwAAAAJ), [Raoul de Charette](https://team.inria.fr/rits/membres/raoul-de-charette/)
Inria, Vislab Ambarella. CVPR'21 (**oral**)

If you find our work useful, please cite:
```
@inproceedings{pizzati2021comogan,
  title={{CoMoGAN}: continuous model-guided image-to-image translation},
  author={Pizzati, Fabio and Cerri, Pietro and de Charette, Raoul},
  booktitle={CVPR},
  year={2021}
}
```

## Prerequisites
Tested with:
* Python 3.7
* PyTorch 1.7.1
* CUDA 11.0
* PyTorch Lightning 1.1.8
* waymo_open_dataset 1.3.0


## Preparation
The repository contains training and inference code for CoMo-MUNIT training on the Waymo Open Dataset. In the paper, we refer to this experiment as Day2Timelapse. All the models have been trained on a 32GB Tesla V100 GPU. We also provide mixed-precision training, which should fit smaller GPUs as well (a mixed-precision training takes ~9GB).


### Environment setup
We recommend creating a new conda environment including all necessary packages. The repository includes a requirements file. Please create and activate the new environment with
```
conda env create -f requirements.yml
conda activate comogan
```

### Dataset preparation
First, download the Waymo Open Dataset from [the official website](https://waymo.com/open/). The dataset is organized in `.tfrecord` files, which we preprocess and split depending on the metadata annotations on time of day.
Once you have downloaded the dataset, run the `dump_waymo.py` script. It reads and unpacks the `.tfrecord` files, also resizing the images for training. Please run

```
python scripts/dump_waymo.py --load_path path/of/waymo/open/training --save_path /path/of/extracted/training/images
python scripts/dump_waymo.py --load_path path/of/waymo/open/validation --save_path /path/of/extracted/validation/images
```

Running those commands should result in a directory structure similar to:

```
root
  training
    Day
      seq_code_0_im_code_0.png
      seq_code_0_im_code_1.png
      ...
      seq_code_1_im_code_0.png
      ...
    Dawn/Dusk
      ...
    Night
      ...
  validation
    Day
      ...
    Dawn/Dusk
      ...
    Night
      ...
```

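For reference, the split above is driven by per-frame metadata: each Waymo frame stores a `time_of_day` annotation in its context stats, which selects the Day, Dawn/Dusk or Night folder. A minimal reading sketch follows (resizing and file naming are handled by `scripts/dump_waymo.py`, which remains the authoritative implementation; the proto fields assume `waymo_open_dataset` 1.3.0 and the segment name is a placeholder):

```
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

for record in tf.data.TFRecordDataset('segment-XXXX.tfrecord'):
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(record.numpy()))
    split = frame.context.stats.time_of_day  # 'Day', 'Dawn/Dusk' or 'Night'
    for camera_image in frame.images:
        jpeg_bytes = camera_image.image      # JPEG bytes, saved under <save_path>/<split>/
```
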
## Pretrained weights
We release a pretrained set of weights to allow reproducibility of our results. The weights can be downloaded from [here](https://www.rocq.inria.fr/rits_files/computer-vision/comogan/logs_pretrained.tar.gz). Once downloaded, unpack the file in the root of the project and test the weights with the inference notebook.

## Training
The training routine of CoMoGAN is mainly based on the CycleGAN codebase, available with details in the official repository.

To launch a default training, run
```
python train.py --path_data path/to/waymo/training/dir --gpus 0
```
You can choose which GPUs to train on with the `--gpus` flag. Multi-GPU training has not been tested in depth, but it should be managed internally by PyTorch Lightning. Typically, a full training requires 13GB+ of GPU memory unless mixed precision is set. If you have a smaller GPU, please run

```
python train.py --path_data path/to/waymo/training/dir --gpus 0 --mixed_precision
```
Please note that mixed-precision trainings have been evaluated only qualitatively.

### Experiment organization
In the training routine, a unique ID is assigned to every training. All experiments are saved in the `logs` folder, which is structured in this way:
```
logs/
  train_ID_0
    tensorboard/default/version_0
      checkpoints
        model_35000.pth
        ...
      hparams.yaml
      tb_log_file
  train_ID_1
    ...
```
In the checkpoints folder, all the intermediate checkpoints are stored. `hparams.yaml` contains all the hyperparameters for a given run. You can launch a `tensorboard --logdir train_ID` instance on training directories to visualize intermediate outputs and loss functions.

To resume a previously stopped training, running
```
python train.py --id train_ID --path_data path/to/waymo/training/dir --gpus 0
```
will load the latest checkpoint from the checkpoints directory of the given train ID.

### Extending the code
#### Command line arguments
We expose command line arguments to encourage code reusability and adaptability to other datasets or models. Right now, the options intended for extensions are:

* `--debug`: Disables logging and experiment saving. Useful for testing code modifications.
* `--model`: Loads a CoMoGAN model. By default, it loads CoMo-MUNIT (code is in the `networks` folder).
* `--data_importer`: Loads data from a dataset. By default, it loads Waymo for the day2timelapse experiment (code is in the `data` folder).
* `--learning_rate`: Modifies the learning rate; the default value for CoMo-MUNIT is `1e-4`.
* `--scheduler_policy`: You can choose between the `linear` and `step` policies, taken respectively from the CycleGAN and MUNIT training routines. Default is `step`.
* `--decay_iters_step`: For the `step` policy, how many iterations before reducing the learning rate.
* `--decay_step_gamma`: Regulates how much to reduce the learning rate.
* `--seed`: Random seed initialization.

The codebase has been rewritten almost from scratch after CVPR acceptance and optimized for reproducibility, hence the provided seed could give slightly different results from the ones reported in the paper.

Changing the model or dataset requires extending the base classes in `networks/base_model.py` and `data/base_dataset.py`, respectively; a minimal dataset skeleton is sketched below. Please look into the CycleGAN repository for further instructions.

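Concretely, the dataset loader in `data/__init__.py` resolves datasets by name: a new dataset must live in `data/<name>_dataset.py` and define a `<Name>Dataset` subclass of `BaseDataset`. A minimal sketch (the file name `data/mynew_dataset.py` and the glob pattern are hypothetical):

```
import glob
from PIL import Image
from data.base_dataset import BaseDataset, get_transform


class MyNewDataset(BaseDataset):
    def __init__(self, opt):
        BaseDataset.__init__(self, opt)
        # collect image paths under the configured data root
        self.paths = sorted(glob.glob(opt.dataroot + '/*.png'))
        self.transform = get_transform(opt)

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        img = Image.open(self.paths[index]).convert('RGB')
        return {'A': self.transform(img), 'A_paths': self.paths[index]}
```
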
#### Model, dataset and other options
Specific hyperparameters for different models, datasets or options not changing with high frequency are embedded in `munch` dictionaries in the respective classes. For instance, in `networks/comomunit_model.py` you can find all customizable options for CoMo-MUNIT. The same is valid for `data/day2timelapse_dataset.py`. The `options` folder includes additional options on checkpoint saving intervals and logging.

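Since `munch` dictionaries support attribute-style access, those embedded defaults can be inspected or overridden before a run, e.g. with the `DatasetOptions()` defined in `data/day2timelapse_dataset.py`:

```
from data.day2timelapse_dataset import DatasetOptions

do = DatasetOptions()
print(do.batch_size)     # 1, the embedded default
do.batch_size = 4        # attribute-style override; munch keeps dict semantics
print(do['batch_size'])  # 4
```
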
## Inference
Once you have trained a model, you can use the `infer.ipynb` notebook to visualize translation results. After launching a notebook instance, you will be asked to select the `train_id` of the experiment. The notebook is documented and provides widgets for sequence, checkpoint and translation selection.

You can also use the `translate.py` script to translate all the images inside a directory, or a sequence of images, to another target directory.
```
python scripts/translate.py --load_path path/to/waymo/validation/day/dir --save_path path/to/saving/dir --phi 3.14
```
This will load the images from the indicated path and translate them to a night-style appearance, since `--phi` is set to 3.14 (≈𝜋).
* `--phi`: the sun angle 𝜙, with a value in [0, 2𝜋], which maps to a sun elevation ∈ [+30°, −40°]
* `--sequence`: if you want to use only certain images, you can specify a name or a keyword contained in the image names, e.g. `--sequence segment-10203656353524179475`
* `--checkpoint`: if your `logs` folder contains more than one train_ID, or if you want to select an older checkpoint, indicate the path to the desired checkpoint, e.g. `--checkpoint logs/train_ID_0/tensorboard/default/version_0/checkpoints/model_35000.pth`

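For intuition, the dataset maps 𝜙 onto its tone-mapping LUT through a cosine (see `__apply_colormap` in `data/day2timelapse_dataset.py`): 𝜙 = 0 is full day, 𝜙 = 𝜋 is full night, and intermediate values blend between them. A small sketch of that normalization:

```
import math

def cos_phi_norm(phi):
    # 0 at phi = 0 (day), 1 at phi = pi (night), back to 0 at 2*pi
    return 1 - (math.cos(phi) + 1) / 2

for phi in (0.0, math.pi / 2, 3.14, 2 * math.pi):
    print(f"phi={phi:.2f} -> LUT position {cos_phi_norm(phi):.3f}")
```
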
## Docker
You will find a Dockerfile based on the nvidia/cuda:11.0.3-base-ubuntu18.04 image with all the dependencies that you need to run and test the code.
To build and run it:
```
docker build -t notebook/comogan:1.0 .
docker run -it -v /path/to/your/local/datasets/:/datasets -p 8888:8888 --gpus '"device=0"' notebook/comogan:1.0
```
* `--gpus`: lets you pass only the GPUs that you want to use; by default, all the available GPUs are passed.
* `-v`: mounts the local directory that contains your dataset.
* `-p`: this option is only used for the `infer.ipynb` notebook. If you run the notebook on a remote server, you should also use this command to tunnel the output to your computer: `ssh [email protected] -NL 8888:127.0.0.1:8888`
a.txt ADDED
@@ -0,0 +1,21 @@
Folder PATH listing for volume Data2
Volume serial number is 6AB0-11CA
E:.
+---.idea
|   +---inspectionProfiles
+---data
|   +---__pycache__
+---imgs_test
+---logs
|   +---pretrained
|       +---tensorboard
|           +---default
|               +---version_0
|                   +---checkpoints
+---networks
|   +---backbones
|   +---__pycache__
+---options
+---res
+---scripts
+---util
data/__init__.py ADDED
@@ -0,0 +1,85 @@
"""
__init__.py
Enables dynamic loading of datasets, depending on an argument.
"""
import importlib
import torch.utils.data
from data.base_dataset import BaseDataset


def find_dataset_using_name(dataset_name):
    """Import the module "data/[dataset_name]_dataset.py".

    In the file, the class called DatasetNameDataset() will
    be instantiated. It has to be a subclass of BaseDataset,
    and it is case-insensitive.
    """
    dataset_filename = "data." + dataset_name + "_dataset"
    datasetlib = importlib.import_module(dataset_filename)

    dataset = None
    target_dataset_name = dataset_name.replace('_', '') + 'dataset'
    for name, cls in datasetlib.__dict__.items():
        if name.lower() == target_dataset_name.lower() \
                and issubclass(cls, BaseDataset):
            dataset = cls

    if dataset is None:
        raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name))

    return dataset


def create_dataset(opt):
    """Create a dataset given the option.

    This function wraps the class CustomDatasetDataLoader.
    This is the main interface between this package and 'train.py'/'test.py'

    Example:
        >>> from data import create_dataset
        >>> dataset = create_dataset(opt)
    """
    data_loader = CustomDatasetDataLoader(opt)
    dataset = data_loader.load_data()
    return dataset


def get_dataset_options(dataset_name):
    dataset_filename = "data." + dataset_name + "_dataset"
    datalib = importlib.import_module(dataset_filename)
    for name, cls in datalib.__dict__.items():
        if name.lower() == 'datasetoptions':
            return cls
    return None


class CustomDatasetDataLoader():
    """Wrapper class of Dataset class that performs multi-threaded data loading"""

    def __init__(self, opt):
        """Initialize this class

        Step 1: create a dataset instance given the name [dataset_mode]
        Step 2: create a multi-threaded data loader.
        """
        self.opt = opt
        dataset_class = find_dataset_using_name(opt.dataset_mode)
        self.dataset = dataset_class(opt)
        self.dataloader = torch.utils.data.DataLoader(
            self.dataset,
            batch_size=opt.batch_size,
            shuffle=not opt.serial_batches,
            num_workers=int(opt.num_threads))

    def load_data(self):
        return self

    def __len__(self):
        """Return the number of data in the dataset"""
        return min(len(self.dataset), self.opt.max_dataset_size)

    def __iter__(self):
        """Return a batch of data"""
        for i, data in enumerate(self.dataloader):
            if i * self.opt.batch_size >= self.opt.max_dataset_size:
                break
            yield data
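The lookup above is purely name-based: for example, `find_dataset_using_name('day2timelapse')` imports `data/day2timelapse_dataset.py` and returns the `Day2TimelapseDataset` class defined there. A minimal usage sketch, assuming `opt` is a munch namespace carrying the fields read by the loader and by that dataset:

```
import munch
from data import create_dataset

opt = munch.Munch(dataset_mode='day2timelapse', dataroot='/datasets/waymo',
                  batch_size=1, num_threads=4, serial_batches=False,
                  max_dataset_size=float('inf'), preprocess='none',
                  no_flip=False, input_nc=3, output_nc=3)

dataset = create_dataset(opt)  # wraps the dataset in CustomDatasetDataLoader
batch = next(iter(dataset))    # dict with keys 'A', 'B', 'A_cont', 'phi', ...
```
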
data/__pycache__/__init__.cpython-37.pyc ADDED
Binary file (3.16 kB).
data/__pycache__/base_dataset.cpython-37.pyc ADDED
Binary file (5.18 kB).
data/base_dataset.py ADDED
@@ -0,0 +1,148 @@
"""
base_dataset.py:
All datasets are a subclass of BaseDataset and implement abstract methods.
Includes augmentation strategies which can be used at sampling time.
"""
import random
import numpy as np
import torch.utils.data as data
from PIL import Image
import torchvision.transforms as transforms
from abc import ABC, abstractmethod
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)


class BaseDataset(data.Dataset, ABC):
    """This class is an abstract base class (ABC) for datasets.

    To create a subclass, you need to implement the following functions:
    -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
    -- <__len__>: return the size of the dataset.
    -- <__getitem__>: get a data point.
    """

    def __init__(self, opt):
        """Initialize the class; save the options in the class

        Parameters:
            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
        """
        self.opt = opt
        self.root = opt.dataroot

    @abstractmethod
    def __len__(self):
        """Return the total number of images in the dataset."""
        return 0

    @abstractmethod
    def __getitem__(self, index):
        """Return a data point and its metadata information.

        Parameters:
            index -- a random integer for data indexing

        Returns:
            a dictionary of data with their names. It usually contains the data itself and its metadata information.
        """
        pass


def get_params(opt, size):
    w, h = size
    new_h = h
    new_w = w
    if opt.preprocess == 'resize_and_crop':
        new_h = new_w = opt.load_size
    elif opt.preprocess == 'scale_width_and_crop':
        new_w = opt.load_size
        new_h = opt.load_size * h // w

    x = random.randint(0, np.maximum(0, new_w - opt.crop_size))
    y = random.randint(0, np.maximum(0, new_h - opt.crop_size))
    flip = random.random() > 0.5

    return {'crop_pos': (x, y), 'flip': flip}


def get_transform(opt, params=None, grayscale=False, method=Image.BICUBIC, convert=True):
    transform_list = []
    if grayscale:
        transform_list.append(transforms.Grayscale(1))
    if 'resize' in opt.preprocess:
        osize = [opt.load_size, opt.load_size]
        transform_list.append(transforms.Resize(osize, method))
    elif 'scale_width' in opt.preprocess:
        transform_list.append(transforms.Lambda(lambda img: __scale_width(img, opt.load_size, opt.crop_size, method)))

    if 'crop' in opt.preprocess:
        if params is None:
            transform_list.append(transforms.RandomCrop(opt.crop_size))
        else:
            transform_list.append(transforms.Lambda(lambda img: __crop(img, params['crop_pos'], opt.crop_size)))

    if opt.preprocess == 'none':
        transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base=1, method=method)))

    if not opt.no_flip:
        if params is None:
            transform_list.append(transforms.RandomHorizontalFlip())
        elif params['flip']:
            transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip'])))

    if convert:
        transform_list += [transforms.ToTensor()]
        if grayscale:
            transform_list += [transforms.Normalize((0.5,), (0.5,))]
        else:
            transform_list += [transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    return transforms.Compose(transform_list)


def __make_power_2(img, base, method=Image.BICUBIC):
    ow, oh = img.size
    h = int(round(oh / base) * base)
    w = int(round(ow / base) * base)
    if h == oh and w == ow:
        return img

    __print_size_warning(ow, oh, w, h)
    return img.resize((w, h), method)


def __scale_width(img, target_size, crop_size, method=Image.BICUBIC):
    ow, oh = img.size
    if ow == target_size and oh >= crop_size:
        return img
    w = target_size
    h = int(max(target_size * oh / ow, crop_size))
    return img.resize((w, h), method)


def __crop(img, pos, size):
    ow, oh = img.size
    x1, y1 = pos
    tw = th = size
    if (ow > tw or oh > th):
        return img.crop((x1, y1, x1 + tw, y1 + th))
    return img


def __flip(img, flip):
    if flip:
        return img.transpose(Image.FLIP_LEFT_RIGHT)
    return img


def __print_size_warning(ow, oh, w, h):
    """Print warning information about image size (only print once)"""
    if not hasattr(__print_size_warning, 'has_printed'):
        logger.warning(
            f"The image size needs to be a multiple of 4. "
            f"The loaded image size was ({ow}, {oh}), so it was adjusted to "
            f"({w}, {h}). This adjustment will be done to all images "
            f"whose sizes are not multiples of 4"
        )
        __print_size_warning.has_printed = True
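A short usage note on the helpers above: sampling `get_params` once and passing the result to `get_transform` makes the crop and flip deterministic, so two related images receive identical augmentation. A sketch with a hypothetical munch `opt` and image files:

```
import munch
from PIL import Image
from data.base_dataset import get_params, get_transform

opt = munch.Munch(preprocess='resize_and_crop', load_size=286, crop_size=256, no_flip=False)

img_a = Image.open('a.png').convert('RGB')
img_b = Image.open('b.png').convert('RGB')

params = get_params(opt, img_a.size)      # one random crop position + flip decision
paired_t = get_transform(opt, params=params)
a, b = paired_t(img_a), paired_t(img_b)   # same crop and flip applied to both
```
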
data/day2timelapse_dataset.py ADDED
@@ -0,0 +1,173 @@
"""
day2timelapse_dataset.py:
Dataset loader for day2timelapse. It loads images belonging to the Waymo
Day, Dawn/Dusk and Night splits and applies a tone mapping operator to
the "Day" ones in order to drive learning with CoMoGAN.
It has support for custom options in DatasetOptions.
"""

import os.path

import numpy as np
import math
from data.base_dataset import BaseDataset, get_transform
from PIL import Image
import random
from torchvision.transforms import ToTensor
import torch
import munch


def DatasetOptions():
    do = munch.Munch()
    do.num_threads = 4
    do.batch_size = 1
    do.preprocess = 'none'
    do.max_dataset_size = float('inf')
    do.no_flip = False
    do.serial_batches = False
    return do


class Day2TimelapseDataset(BaseDataset):
    """
    This dataset class loads unaligned/unpaired datasets.

    Domain A is made of the "Day" images and domain B of the "Dawn/Dusk"
    and "Night" images, as extracted by scripts/dump_waymo.py under the
    directory passed with the '--dataroot /path/to/data' flag.
    """

    def __init__(self, opt):
        """Initialize this dataset class.

        Parameters:
            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
        """
        BaseDataset.__init__(self, opt)
        self.dir_day = os.path.join(opt.dataroot, 'sunny', 'Day')
        self.dir_dusk = os.path.join(opt.dataroot, 'sunny', 'Dawn', 'Dusk')
        self.dir_night = os.path.join(opt.dataroot, 'sunny', 'Night')
        self.A_paths = [os.path.join(self.dir_day, x) for x in os.listdir(self.dir_day)]  # domain A: Day images
        self.B_paths = [os.path.join(self.dir_dusk, x) for x in os.listdir(self.dir_dusk)]  # domain B: Dusk images
        self.B_paths += [os.path.join(self.dir_night, x) for x in os.listdir(self.dir_night)]  # domain B: Night images

        self.A_size = len(self.A_paths)  # get the size of dataset A
        self.B_size = len(self.B_paths)  # get the size of dataset B
        self.A_paths.sort()
        self.B_paths.sort()
        self.transform_A = get_transform(self.opt, grayscale=(opt.input_nc == 1), convert=False)
        self.transform_B = get_transform(self.opt, grayscale=(opt.output_nc == 1), convert=False)

        self.__tonemapping = torch.tensor(np.loadtxt('./data/daytime_model_lut.csv', delimiter=','),
                                          dtype=torch.float32)

        self.__xyz_matrix = torch.tensor([[0.5149, 0.3244, 0.1607],
                                          [0.2654, 0.6704, 0.0642],
                                          [0.0248, 0.1248, 0.8504]])

    def __getitem__(self, index):
        """Return a data point and its metadata information.

        Parameters:
            index (int) -- a random integer for data indexing

        Returns a dictionary that contains A, B, A_paths and B_paths
            A (tensor) -- an image in the input domain
            B (tensor) -- its corresponding image in the target domain
            A_paths (str) -- image paths
            B_paths (str) -- image paths
        """
        A_path = self.A_paths[index % self.A_size]  # make sure index is within the range
        index_B = random.randint(0, self.B_size - 1)
        B_path = self.B_paths[index_B]

        A_img = Image.open(A_path).convert('RGB')
        B_img = Image.open(B_path).convert('RGB')

        # apply image transformation
        A = self.transform_A(A_img)
        B = self.transform_B(B_img)

        # Define continuity normalization
        A = ToTensor()(A)
        B = ToTensor()(B)

        phi = random.random() * 2 * math.pi
        continuity_sin = math.sin(phi)
        cos_phi = math.cos(phi)

        A_cont = self.__apply_colormap(A, cos_phi, continuity_sin)

        phi_prime = random.random() * 2 * math.pi
        sin_phi_prime = math.sin(phi_prime)
        cos_phi_prime = math.cos(phi_prime)

        A_cont_compare = self.__apply_colormap(A, cos_phi_prime, sin_phi_prime)

        # Normalization between -1 and 1
        A = (A * 2) - 1
        B = (B * 2) - 1
        A_cont = (A_cont * 2) - 1
        A_cont_compare = (A_cont_compare * 2) - 1

        return {'A': A, 'B': B, 'A_cont': A_cont, 'A_paths': A_path, 'B_paths': B_path, 'cos_phi': float(cos_phi),
                'sin_phi': float(continuity_sin), 'sin_phi_prime': float(sin_phi_prime),
                'cos_phi_prime': float(cos_phi_prime), 'A_cont_compare': A_cont_compare, 'phi': phi,
                'phi_prime': phi_prime}

    def __len__(self):
        """Return the total number of images in the dataset.

        As we have two datasets with potentially different numbers of images,
        we take the maximum of the two.
        """
        return max(self.A_size, self.B_size)

    def __apply_colormap(self, im, cos_phi, sin_phi, eps=1e-8):
        size_0, size_1, size_2 = im.size()
        cos_phi_norm = 1 - (cos_phi + 1) / 2  # 0 at phi = 0, 1 at phi = pi
        im_buf = im.permute(1, 2, 0).view(-1, 3)
        im_buf = torch.matmul(im_buf, self.__xyz_matrix)

        X = im_buf[:, 0] + eps
        Y = im_buf[:, 1]
        Z = im_buf[:, 2]

        V = Y * (1.33 * (1 + (Y + Z) / X) - 1.68)

        tmp_index_lower = int(cos_phi_norm * self.__tonemapping.size(0))

        if tmp_index_lower < self.__tonemapping.size(0) - 1:
            tmp_index_higher = tmp_index_lower + 1
        else:
            tmp_index_higher = tmp_index_lower
        interp_index = cos_phi_norm * self.__tonemapping.size(0) - tmp_index_lower
        try:
            color_lower = self.__tonemapping[tmp_index_lower, :3]
        except IndexError:
            color_lower = self.__tonemapping[-2, :3]
        try:
            color_higher = self.__tonemapping[tmp_index_higher, :3]
        except IndexError:
            color_higher = self.__tonemapping[-2, :3]
        color = color_lower * (1 - interp_index) + color_higher * interp_index

        if sin_phi >= 0:
            # red shift
            corr = torch.tensor([0.1, 0, 0.1]) * sin_phi  # old one was 0.03
        if sin_phi < 0:
            # purple shift
            corr = torch.tensor([0.1, 0, 0]) * (- sin_phi)

        color += corr
        im_degree = V.unsqueeze(1) * torch.matmul(color, self.__xyz_matrix)
        im_degree = torch.matmul(im_degree, self.__xyz_matrix.inverse()).view(size_1, size_2, size_0).permute(2, 0, 1)
        im_final = im_degree * cos_phi_norm + im * (1 - cos_phi_norm) + corr.unsqueeze(-1).unsqueeze(-1).repeat(1, im_degree.size(1), im_degree.size(2))

        im_final = im_final.clamp(0, 1)
        return im_final
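For intuition on `__apply_colormap`: `cos_phi_norm` selects a fractional row of the 550-entry `daytime_model_lut.csv`, and the colour is blended linearly between the two neighbouring rows (the real code additionally clamps out-of-range indices with the try/except above). A simplified re-derivation of that indexing:

```
import math

LUT_ROWS = 550  # rows in data/daytime_model_lut.csv

def lut_blend(phi):
    cos_phi_norm = 1 - (math.cos(phi) + 1) / 2
    pos = cos_phi_norm * LUT_ROWS
    lower = int(pos)                       # lower LUT row
    higher = min(lower + 1, LUT_ROWS - 1)  # neighbouring row
    weight = pos - lower                   # interpolation weight toward `higher`
    return lower, higher, weight

print(lut_blend(2 * math.pi / 3))  # ~ (412, 413, 0.5): two thirds of the way to night
```
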
data/daytime_model_lut.csv ADDED
@@ -0,0 +1,550 @@
0.459730327129364,0.5753538012504578,0.7307513356208801
0.4594928026199341,0.5749730467796326,0.7301026582717896
0.4592478573322296,0.5745759606361389,0.7294389009475708
0.458999365568161,0.5741816163063049,0.7287785410881042
0.458759605884552,0.573775589466095,0.7281114459037781
0.45850908756256104,0.5733696222305298,0.7274336218833923
0.4582521319389343,0.5729745030403137,0.7267529964447021
0.4580022394657135,0.5725727081298828,0.7260730862617493
0.4577447474002838,0.5721668601036072,0.7253925204277039
0.4574819803237915,0.5717566609382629,0.7247171998023987
0.4572221636772156,0.5713460445404053,0.7240318655967712
0.4569593071937561,0.5709385275840759,0.723334014415741
0.4566876292228699,0.5705302357673645,0.7226318717002869
0.456419438123703,0.5701059699058533,0.7219248414039612
0.45614975690841675,0.5696818232536316,0.7212276458740234
0.45587342977523804,0.5692545175552368,0.7205167412757874
0.4555920660495758,0.5688228011131287,0.7198204398155212
0.45531001687049866,0.5684027075767517,0.7190941572189331
0.45502665638923645,0.5679745674133301,0.7183746695518494
0.4547405242919922,0.5675432682037354,0.7176563739776611
0.4544476568698883,0.5671043395996094,0.7169240713119507
0.4541616141796112,0.5666742324829102,0.7161986231803894
0.45386913418769836,0.5662345290184021,0.7154660820960999
0.45357248187065125,0.565789520740509,0.7147201895713806
0.45327311754226685,0.565341591835022,0.713971734046936
0.452972412109375,0.5648908019065857,0.7132329344749451
0.45267778635025024,0.5644432902336121,0.7124903202056885
0.45237043499946594,0.5639790892601013,0.7117367386817932
0.45206570625305176,0.5635237693786621,0.7109693288803101
0.45175883173942566,0.563062310218811,0.7101975083351135
0.4514491856098175,0.5626062750816345,0.7094336152076721
0.45112910866737366,0.5621491074562073,0.7086634635925293
0.45080673694610596,0.5616856813430786,0.7078953981399536
0.45049235224723816,0.5612045526504517,0.707120418548584
0.45016390085220337,0.5607321262359619,0.7063286900520325
0.44983530044555664,0.5602567791938782,0.7055374979972839
0.4495112895965576,0.5597798228263855,0.7047412991523743
0.4491833746433258,0.5593019723892212,0.7039420008659363
0.44884371757507324,0.5588300824165344,0.7031435966491699
0.4485161304473877,0.5583317279815674,0.7023458480834961
0.4481812119483948,0.5578548312187195,0.7015211582183838
0.44783902168273926,0.5573652386665344,0.7006900906562805
0.44750285148620605,0.5568601489067078,0.6998854875564575
0.44716599583625793,0.5563632845878601,0.6990556120872498
0.4468209147453308,0.5558644533157349,0.6982278227806091
0.4464694857597351,0.5553638935089111,0.6973857879638672
0.44611290097236633,0.554855227470398,0.6965429782867432
0.44575873017311096,0.5543461441993713,0.6956973671913147
0.44539836049079895,0.5538337826728821,0.6948440670967102
0.44503751397132874,0.5533230900764465,0.6939923763275146
0.4446791708469391,0.5528026223182678,0.6931226849555969
0.4443178176879883,0.5522757768630981,0.6922490000724792
0.44394853711128235,0.5517544746398926,0.6913824081420898
0.44358089566230774,0.551224946975708,0.6905091404914856
0.44321173429489136,0.5507016181945801,0.6896212100982666
0.44284260272979736,0.5501687526702881,0.6887242197990417
0.4424705505371094,0.5496318340301514,0.687828540802002
0.4420968294143677,0.5491050481796265,0.6869293451309204
0.4417157471179962,0.5485673546791077,0.6860166192054749
0.44132229685783386,0.5480145215988159,0.6851045489311218
0.4409382939338684,0.5474622249603271,0.684185802936554
0.4405543804168701,0.5469011664390564,0.6832549571990967
0.44016018509864807,0.5463506579399109,0.6823207139968872
0.43976613879203796,0.5457964539527893,0.6813837289810181
0.439370721578598,0.5452314615249634,0.6804408431053162
0.4389679431915283,0.5446683168411255,0.679487407207489
0.43856877088546753,0.5441105961799622,0.6785308122634888
0.43817123770713806,0.5435311794281006,0.6775607466697693
0.43776628375053406,0.5429514646530151,0.6765934228897095
0.4373634457588196,0.5423672795295715,0.6756237149238586
0.4369603097438812,0.5417906641960144,0.6746310591697693
0.43654337525367737,0.5412123203277588,0.6736356019973755
0.43612799048423767,0.5406161546707153,0.6726331114768982
0.43570569157600403,0.5400263667106628,0.6716299057006836
0.43528664112091064,0.5394323468208313,0.670623779296875
0.4348626732826233,0.5388368368148804,0.6695924401283264
0.4344364404678345,0.5382305383682251,0.6685694456100464
0.43401214480400085,0.5376153588294983,0.6675337553024292
0.433585524559021,0.537000298500061,0.6664865016937256
0.433156281709671,0.5363917350769043,0.6654470562934875
0.43273061513900757,0.5357779860496521,0.6643723249435425
0.4322899281978607,0.5351535081863403,0.6633078455924988
0.431845486164093,0.5345269441604614,0.6622382998466492
0.4313971996307373,0.5338884592056274,0.6611512899398804
0.43095406889915466,0.5332503914833069,0.660057783126831
0.43050533533096313,0.5326089262962341,0.6589553952217102
0.4300559163093567,0.5319759845733643,0.6578448414802551
0.42960599064826965,0.5313165187835693,0.6567374467849731
0.42915093898773193,0.5306854844093323,0.655599057674408
0.42869460582733154,0.5300273299217224,0.6544656157493591
0.42823490500450134,0.5293539762496948,0.6533178687095642
0.42778128385543823,0.5286882519721985,0.6521674394607544
0.4273095428943634,0.5280155539512634,0.6510041952133179
0.42683160305023193,0.5273396372795105,0.6498340368270874
0.42636018991470337,0.5266631841659546,0.6486470699310303
0.42588356137275696,0.5259905457496643,0.6474630832672119
0.42541661858558655,0.5252941846847534,0.6462488770484924
0.42492032051086426,0.5246042013168335,0.6450435519218445
0.4244424104690552,0.5239027738571167,0.6438180804252625
0.42396286129951477,0.5231947302818298,0.6425884366035461
0.4234755039215088,0.5224864482879639,0.6413478255271912
0.4229883551597595,0.5217775106430054,0.6400837302207947
0.42249560356140137,0.5210569500923157,0.638817548751831
0.4219900071620941,0.5203303694725037,0.6375637054443359
0.4214840233325958,0.5195984840393066,0.6362555623054504
0.420978307723999,0.518863320350647,0.6349571347236633
0.42047688364982605,0.5181199908256531,0.6336472630500793
0.4199632704257965,0.5173732042312622,0.6323349475860596
0.4194590151309967,0.5166250467300415,0.6309912800788879
0.4189481735229492,0.5158714056015015,0.6296461224555969
0.41842934489250183,0.5150937438011169,0.6282913684844971
0.41790279746055603,0.5143157243728638,0.6269145607948303
0.41737210750579834,0.5135502219200134,0.6255266666412354
0.41685158014297485,0.5127615928649902,0.624131977558136
0.41631534695625305,0.5119665861129761,0.6227325201034546
0.4157761037349701,0.5111815333366394,0.6212867498397827
0.41524171829223633,0.5103685259819031,0.6198469400405884
0.41469940543174744,0.5095604062080383,0.6184101700782776
0.4141487181186676,0.5087331533432007,0.6169341802597046
0.41360774636268616,0.5079103112220764,0.6154453158378601
0.41305631399154663,0.5070822238922119,0.6139618754386902
0.41250190138816833,0.5062423944473267,0.6124515533447266
0.4119435250759125,0.5053935050964355,0.6109218001365662
0.4113782048225403,0.5045426487922668,0.6093810796737671
0.4108012318611145,0.5036604404449463,0.6078344583511353
0.41021865606307983,0.5027989149093628,0.6062476634979248
0.40964311361312866,0.5019211769104004,0.6046628355979919
0.4090559482574463,0.5010314583778381,0.6030641794204712
0.40848174691200256,0.5001277923583984,0.6014267206192017
0.4078844487667084,0.4992227256298065,0.5997909903526306
0.40730759501457214,0.49830612540245056,0.598132848739624
0.40671131014823914,0.49738460779190063,0.5964401364326477
0.40611180663108826,0.4964587688446045,0.5947580337524414
0.40548941493034363,0.4955010712146759,0.5930367708206177
0.40487435460090637,0.49454042315483093,0.5913069844245911
0.40426012873649597,0.49358218908309937,0.5895541310310364
0.40364205837249756,0.4926128387451172,0.5877792239189148
0.403017520904541,0.49162688851356506,0.5859869718551636
0.40238481760025024,0.4906262755393982,0.5841830968856812
0.4017612040042877,0.4896088242530823,0.5823368430137634
0.40111586451530457,0.4885888397693634,0.5804885625839233
0.40046021342277527,0.48756441473960876,0.5786120891571045
0.3997982442378998,0.4865255057811737,0.5767055153846741
0.3991381525993347,0.48545724153518677,0.5747894644737244
0.39847657084465027,0.4843839108943939,0.5728437900543213
0.39780861139297485,0.483305960893631,0.5708560943603516
0.3971324861049652,0.48220762610435486,0.5688844323158264
0.39645910263061523,0.481110543012619,0.5668579339981079
0.39577189087867737,0.47997036576271057,0.5648245215415955
0.3950742483139038,0.4788309633731842,0.5627452731132507
0.3943650722503662,0.47768348455429077,0.5606535077095032
0.39365389943122864,0.47651612758636475,0.5585283637046814
0.3929373323917389,0.4753277897834778,0.5563767552375793
0.39221417903900146,0.47412094473838806,0.5542060732841492
0.3914814591407776,0.4729042649269104,0.5519970059394836
0.39075806736946106,0.4716648459434509,0.5497605204582214
0.3900115489959717,0.47042933106422424,0.5475006103515625
0.3892611861228943,0.46914219856262207,0.5451998710632324
0.38850077986717224,0.4678474962711334,0.5428621768951416
0.3877239525318146,0.46654823422431946,0.5405064821243286
0.3869464695453644,0.46522289514541626,0.538112461566925
0.3861497938632965,0.4638678729534149,0.5356920957565308
0.3853657841682434,0.4624985158443451,0.5332329273223877
0.38455796241760254,0.46110445261001587,0.5307305455207825
0.3837466537952423,0.4596961438655853,0.5282090902328491
0.38293153047561646,0.45825865864753723,0.5256560444831848
0.3821004629135132,0.4567857086658478,0.5230585932731628
0.38125377893447876,0.4553035795688629,0.5204059481620789
0.38039880990982056,0.45381414890289307,0.5177400708198547
0.37953537702560425,0.4522625207901001,0.51502925157547
0.3786587119102478,0.45070064067840576,0.5122591257095337
0.37777724862098694,0.4491199254989624,0.5094778537750244
0.3768831491470337,0.4475046694278717,0.5066356658935547
0.3759765923023224,0.44584921002388,0.5037559270858765
0.3750581443309784,0.44417664408683777,0.5008141398429871
0.3741254210472107,0.4424605667591095,0.49784529209136963
0.3731728792190552,0.4407256543636322,0.49482280015945435
0.3722169101238251,0.43895307183265686,0.49173426628112793
0.3712415397167206,0.437146931886673,0.4886254668235779
0.3702535331249237,0.4353068768978119,0.48544999957084656
0.3692600429058075,0.4334242045879364,0.48222681879997253
0.36825886368751526,0.43151819705963135,0.4789440333843231
0.3672138452529907,0.4295523762702942,0.4756050109863281
0.3661661744117737,0.42755672335624695,0.47219815850257874
0.36509427428245544,0.4255349636077881,0.4687524437904358
0.3640132248401642,0.42343634366989136,0.46522557735443115
0.36291468143463135,0.4213179647922516,0.46163472533226013
0.361807644367218,0.4191555678844452,0.4579890966415405
0.3606717586517334,0.416931688785553,0.45428794622421265
0.3595023453235626,0.41466543078422546,0.4504820704460144
0.35832664370536804,0.41234973073005676,0.4466261863708496
0.35711705684661865,0.4099692702293396,0.4426766335964203
0.35588809847831726,0.4075412154197693,0.4386715292930603
0.35465407371520996,0.40505969524383545,0.43457621335983276
0.3533867597579956,0.4025226831436157,0.4303874671459198
0.35206907987594604,0.39990124106407166,0.4261210262775421
0.3507366180419922,0.39723485708236694,0.4217650294303894
0.3493753671646118,0.3945018947124481,0.41731685400009155
0.34797582030296326,0.39169758558273315,0.412770539522171
0.3465687334537506,0.38880959153175354,0.4081324338912964
0.3450928330421448,0.38585978746414185,0.4033951163291931
0.3435814380645752,0.3828301429748535,0.3985316753387451
0.34203478693962097,0.3797179162502289,0.3935818076133728
0.34045159816741943,0.3765098750591278,0.38850077986717224
0.3388141989707947,0.3732281029224396,0.3833079934120178
0.33710336685180664,0.3698435425758362,0.37798795104026794
0.3353625237941742,0.36637309193611145,0.37254491448402405
0.3335549533367157,0.36279526352882385,0.3669710159301758
0.3316671550273895,0.35910558700561523,0.3612686097621918
0.3297062814235687,0.35532188415527344,0.355435848236084
0.32766276597976685,0.3513968288898468,0.3494633734226227
0.3255484998226166,0.34735408425331116,0.3433668315410614
0.3232938051223755,0.34319281578063965,0.33710920810699463
0.3209540843963623,0.33887505531311035,0.33072924613952637
0.3184846043586731,0.33441248536109924,0.32422009110450745
0.3158607482910156,0.3297947347164154,0.31760966777801514
0.3130861818790436,0.32502415776252747,0.3108745515346527
0.31010496616363525,0.32008421421051025,0.30407488346099854
0.30692198872566223,0.31495949625968933,0.2972620725631714
0.30347248911857605,0.3096488118171692,0.29043811559677124
0.29973623156547546,0.30414119362831116,0.2837555706501007
0.2956545352935791,0.2984755039215088,0.2773210108280182
0.2911142408847809,0.2926393747329712,0.27137941122055054
0.2860235571861267,0.28671175241470337,0.26624587178230286
0.2802514433860779,0.2808597981929779,0.26253658533096313
0.2736779451370239,0.27554765343666077,0.2614668607711792
0.2717041075229645,0.2730359733104706,0.2575526833534241
0.26287510991096497,0.2601219117641449,0.23883278667926788
0.24991470575332642,0.24898554384708405,0.23122084140777588
0.2362339198589325,0.23734308779239655,0.2233717292547226
0.22170411050319672,0.22508057951927185,0.2152479737997055
0.20608031749725342,0.2121061235666275,0.20679286122322083
0.18908119201660156,0.19821850955486298,0.1979656219482422
0.17017780244350433,0.18320739269256592,0.18870970606803894
0.14857418835163116,0.1667228788137436,0.17891544103622437
0.1224466860294342,0.14806653559207916,0.16848649084568024
0.1157052144408226,0.1434480845928192,0.16584351658821106
0.11541308462619781,0.1431066393852234,0.16541849076747894
0.11511941254138947,0.14277100563049316,0.16503721475601196
0.11482298374176025,0.14245407283306122,0.16464921832084656
0.11456697434186935,0.14208628237247467,0.16424468159675598
0.1142779067158699,0.14172402024269104,0.16386310756206512
0.11400596797466278,0.14137032628059387,0.1634616255760193
0.11372455209493637,0.14102306962013245,0.16305187344551086
0.1134345605969429,0.14065253734588623,0.16262774169445038
0.11316171288490295,0.14030128717422485,0.16221247613430023
0.11287751793861389,0.13994424045085907,0.16180670261383057
0.11260131001472473,0.1395883858203888,0.16140402853488922
0.11232449859380722,0.13923928141593933,0.1610090136528015
0.11203785985708237,0.13889294862747192,0.16057752072811127
0.1117628738284111,0.1385435312986374,0.16015218198299408
0.11146491020917892,0.13817943632602692,0.15973569452762604
0.11116664856672287,0.1378205418586731,0.15932902693748474
0.11088062077760696,0.13746285438537598,0.15890979766845703
0.11058665812015533,0.13711130619049072,0.15851260721683502
0.11027919501066208,0.1367616057395935,0.1581237018108368
0.10998125374317169,0.13638892769813538,0.15768671035766602
0.10970167070627213,0.13604319095611572,0.15725676715373993
0.10939911752939224,0.13565398752689362,0.15681979060173035
0.10909074544906616,0.1352950930595398,0.1564066857099533
0.10879003256559372,0.13489913940429688,0.1559758335351944
0.10849147289991379,0.1345341056585312,0.15554526448249817
0.10821433365345001,0.13415195047855377,0.15510676801204681
0.1079074963927269,0.13379794359207153,0.15468142926692963
0.10759484022855759,0.13342680037021637,0.15426495671272278
0.10731127858161926,0.13305748999118805,0.15383288264274597
0.10699524730443954,0.13268664479255676,0.15340934693813324
0.10669667273759842,0.13230416178703308,0.1529696136713028
0.1064268946647644,0.13193699717521667,0.15254120528697968
0.10611913353204727,0.13156461715698242,0.1521192193031311
0.1058175191283226,0.13117694854736328,0.1516743004322052
0.10551097989082336,0.1308014988899231,0.15122537314891815
0.10521639138460159,0.1304609775543213,0.15079358220100403
0.10489363223314285,0.13008800148963928,0.15031340718269348
0.10458341985940933,0.12965714931488037,0.1498822569847107
0.10426678508520126,0.12927864491939545,0.14943453669548035
0.1039581149816513,0.12887810170650482,0.1490085870027542
0.10364361852407455,0.12848612666130066,0.14856210350990295
0.10331320762634277,0.12810029089450836,0.1481168419122696
0.10299043357372284,0.12771719694137573,0.14766763150691986
0.10267441719770432,0.1273200362920761,0.14721347391605377
0.1023675799369812,0.12692315876483917,0.14676974713802338
0.10203593969345093,0.12652996182441711,0.14631898701190948
0.10171746462583542,0.12615300714969635,0.14586883783340454
0.10139041393995285,0.12573960423469543,0.14537948369979858
0.10108970105648041,0.12536202371120453,0.14490973949432373
0.10079051554203033,0.12496238201856613,0.14444120228290558
0.1004457101225853,0.12456829100847244,0.14399471879005432
0.10012386739253998,0.12417691946029663,0.14353936910629272
0.09980478882789612,0.12376443296670914,0.14306899905204773
0.09949151426553726,0.12334918975830078,0.14258088171482086
0.09916509687900543,0.12293273210525513,0.14212672412395477
0.09884171187877655,0.12251226603984833,0.1416735053062439
0.09852078557014465,0.12210836261510849,0.1411985605955124
0.09817750006914139,0.12170232087373734,0.1407385915517807
0.09784004837274551,0.12129472941160202,0.14023515582084656
0.09749983251094818,0.12088468670845032,0.13974520564079285
0.09715135395526886,0.1204562708735466,0.1392613649368286
0.09680776298046112,0.12002327293157578,0.13878118991851807
0.09647521376609802,0.11962549388408661,0.1383046954870224
0.09612242877483368,0.11921942979097366,0.1378043293952942
0.09578743577003479,0.11879591643810272,0.13732844591140747
0.09543250501155853,0.1183813065290451,0.13683202862739563
0.09508984535932541,0.11794216930866241,0.13635770976543427
0.09476371854543686,0.11751621216535568,0.1358637511730194
0.09443452209234238,0.1170816719532013,0.13537929952144623
0.09406889230012894,0.11664438247680664,0.1348535120487213
0.09372377395629883,0.11621322482824326,0.134351909160614
0.09337314963340759,0.11576368659734726,0.13386225700378418
0.09304028749465942,0.1153288409113884,0.133339524269104
0.09269976615905762,0.11487900465726852,0.13282445073127747
0.0923454537987709,0.11445211619138718,0.13231703639030457
0.09199421107769012,0.11400656402111053,0.1318105310201645
0.09161267429590225,0.11357906460762024,0.13130740821361542
0.09124826639890671,0.11312799900770187,0.13080213963985443
0.09087098389863968,0.11269347369670868,0.1302950233221054
0.0905022844672203,0.11225984990596771,0.12975728511810303
0.09014523774385452,0.11179438978433609,0.12923914194107056
0.08976306766271591,0.11132708936929703,0.12870876491069794
0.08938886225223541,0.11086255311965942,0.12817132472991943
0.08902323246002197,0.11039526015520096,0.12762410938739777
0.08864809572696686,0.10992548614740372,0.12708179652690887
0.08828185498714447,0.10947319865226746,0.12655629217624664
0.0879003033041954,0.10898998379707336,0.12603938579559326
0.08754109591245651,0.10854779183864594,0.12550778687000275
0.08715925365686417,0.10806671530008316,0.12495075911283493
0.08681657910346985,0.10758838802576065,0.12442434579133987
0.08642704784870148,0.10712629556655884,0.12387222796678543
0.08604917675256729,0.10667002946138382,0.12330418825149536
0.0856575071811676,0.10619384795427322,0.12273460626602173
0.08524961024522781,0.10569407045841217,0.12216013669967651
0.08483836054801941,0.1052025780081749,0.12161166965961456
0.08444516360759735,0.10469333827495575,0.12104639410972595
0.08404585719108582,0.10420826822519302,0.12048505991697311
0.08364071696996689,0.10371893644332886,0.11991548538208008
0.08322608470916748,0.10320570319890976,0.11934958398342133
0.0828365683555603,0.10271328687667847,0.11880083382129669
0.08243387192487717,0.10221292823553085,0.11820460110902786
0.08202169835567474,0.10170641541481018,0.11760316789150238
0.08161503076553345,0.10121677070856094,0.11699624359607697
0.08121601492166519,0.10073293745517731,0.11640949547290802
0.0808185338973999,0.10021143406629562,0.11582309007644653
0.08041829615831375,0.09968657046556473,0.11523452401161194
0.07999968528747559,0.09916139394044876,0.1146303340792656
0.07957036048173904,0.09863561391830444,0.11403869092464447
346
+ 0.07915082573890686,0.09810125827789307,0.11345718801021576
347
+ 0.07870067656040192,0.09754912555217743,0.11282391101121902
348
+ 0.07826031744480133,0.09702548384666443,0.11217807978391647
349
+ 0.07782057672739029,0.09648560732603073,0.11155429482460022
350
+ 0.0773756355047226,0.09595002233982086,0.11092717200517654
351
+ 0.07693282514810562,0.0954236164689064,0.1103067398071289
352
+ 0.07648970931768417,0.0948687344789505,0.10966303944587708
353
+ 0.07605241984128952,0.09435121715068817,0.10905120521783829
354
+ 0.07559767365455627,0.09377827495336533,0.10842254757881165
355
+ 0.07517079263925552,0.09321480989456177,0.10779201239347458
356
+ 0.07470226287841797,0.09263850748538971,0.10710086673498154
357
+ 0.07428242266178131,0.09207534790039062,0.10644432157278061
358
+ 0.07383013516664505,0.09148313105106354,0.10578931123018265
359
+ 0.07337813079357147,0.09090618789196014,0.10509509593248367
360
+ 0.07289184629917145,0.0903424397110939,0.10442569106817245
361
+ 0.07240494340658188,0.08976029604673386,0.10379272699356079
362
+ 0.07192478328943253,0.08917172253131866,0.1031138226389885
363
+ 0.07142409682273865,0.08859387785196304,0.10243583470582962
364
+ 0.07093475013971329,0.08801420032978058,0.10173121094703674
365
+ 0.07044325768947601,0.08741123974323273,0.10102904587984085
366
+ 0.06995910406112671,0.08676818013191223,0.10030879825353622
367
+ 0.06947772204875946,0.08615327626466751,0.09958089888095856
368
+ 0.06896509230136871,0.08553499728441238,0.09889496862888336
369
+ 0.06848003715276718,0.08491000533103943,0.09818513691425323
370
+ 0.06798884272575378,0.08427488803863525,0.09745294600725174
371
+ 0.06751113384962082,0.0836452916264534,0.09674250334501266
372
+ 0.06700585782527924,0.08303101360797882,0.09599654376506805
373
+ 0.06646904349327087,0.08239223062992096,0.09523190557956696
374
+ 0.06592732667922974,0.08174791932106018,0.09444766491651535
375
+ 0.0653623417019844,0.08108188956975937,0.09370844066143036
376
+ 0.06484022736549377,0.08038890361785889,0.09295084327459335
377
+ 0.06428411602973938,0.07972194254398346,0.09216352552175522
378
+ 0.06372066587209702,0.07903385162353516,0.09142736345529556
379
+ 0.06318140774965286,0.07833933085203171,0.09062627702951431
380
+ 0.06263478845357895,0.07766930758953094,0.0897899866104126
381
+ 0.062071334570646286,0.0769805982708931,0.08896960318088531
382
+ 0.061534520238637924,0.07629863172769547,0.08817525953054428
383
+ 0.06096555292606354,0.07558420300483704,0.08734385669231415
384
+ 0.060398418456315994,0.07484344393014908,0.08652285486459732
385
+ 0.059790559113025665,0.0740959420800209,0.08572086691856384
386
+ 0.059174131602048874,0.07335303723812103,0.08482055366039276
387
+ 0.058550044894218445,0.07259542495012283,0.08393312245607376
388
+ 0.05792197212576866,0.07185190171003342,0.08306373655796051
389
+ 0.05730707570910454,0.07108970731496811,0.0821729376912117
390
+ 0.056691866368055344,0.07033699005842209,0.08129469305276871
391
+ 0.05607237294316292,0.06957050412893295,0.08042500168085098
392
+ 0.055435724556446075,0.06874828785657883,0.0794643834233284
393
+ 0.05481715500354767,0.06790462881326675,0.07851538807153702
394
+ 0.054182350635528564,0.0670781210064888,0.07756548374891281
395
+ 0.05346854031085968,0.06624824553728104,0.07659352570772171
396
+ 0.05276361107826233,0.06540275365114212,0.07565678656101227
397
+ 0.05205194652080536,0.06458421051502228,0.07468605041503906
398
+ 0.05134119465947151,0.06373504549264908,0.07364580035209656
399
+ 0.05062953010201454,0.06281912326812744,0.07259513437747955
400
+ 0.04989795759320259,0.061891257762908936,0.07154660671949387
401
+ 0.04918720945715904,0.060972586274147034,0.07051646709442139
402
+ 0.04847554489970207,0.060027871280908585,0.06944466382265091
403
+ 0.04772529378533363,0.05909694358706474,0.06834499537944794
404
+ 0.046907056123018265,0.05815959349274635,0.06719602644443512
405
+ 0.046073514968156815,0.05716956779360771,0.06604368984699249
406
+ 0.04524242505431175,0.056131161749362946,0.06491249054670334
407
+ 0.044410716742277145,0.055077437311410904,0.06375158578157425
408
+ 0.043564312160015106,0.05403565987944603,0.06249513849616051
409
+ 0.04272831603884697,0.05296080932021141,0.06120256334543228
410
+ 0.0418776273727417,0.051906172186136246,0.05995253846049309
411
+ 0.04095129668712616,0.050736699253320694,0.05867007002234459
412
+ 0.039960961788892746,0.049539968371391296,0.05731777474284172
413
+ 0.038953784853219986,0.04831783473491669,0.05587637424468994
414
+ 0.0379573218524456,0.04711682349443436,0.05445212125778198
415
+ 0.036944933235645294,0.04588426649570465,0.05303368717432022
416
+ 0.035924892872571945,0.0445328913629055,0.051456935703754425
417
+ 0.034868404269218445,0.04312395304441452,0.049866095185279846
418
+ 0.03364961966872215,0.04169970005750656,0.04824923351407051
419
+ 0.03240327537059784,0.04026351124048233,0.0465429425239563
420
+ 0.03115847148001194,0.038714926689863205,0.0447101853787899
421
+ 0.029900187626481056,0.03702608868479729,0.0428403802216053
422
+ 0.02859964594244957,0.035308461636304855,0.040890950709581375
423
+ 0.027002375572919846,0.0335911326110363,0.03868337348103523
424
+ 0.025380603969097137,0.03152747079730034,0.03650704026222229
425
+ 0.023741384968161583,0.029405318200588226,0.033972397446632385
426
+ 0.021979982033371925,0.02722162753343582,0.03135690093040466
427
+ 0.01972923055291176,0.02449806034564972,0.02834027074277401
428
+ 0.017425181344151497,0.02169947512447834,0.025121228769421577
429
+ 0.014567777514457703,0.017999667674303055,0.02086562104523182
430
+ 0.010865208692848682,0.013364309445023537,0.015476967208087444
431
+ 0.00415243161842227,0.004356072284281254,0.004636881407350302
432
+ 0.0,0.0,0.0
433
+ 0.0,0.0,0.0
434
+ 0.0,0.0,0.0
435
+ 0.0,0.0,0.0
436
+ 0.0,0.0,0.0
437
+ 0.0,0.0,0.0
438
+ 0.0,0.0,0.0
439
+ 0.0,0.0,0.0
440
+ 0.0,0.0,0.0
441
+ 0.0,0.0,0.0
442
+ 0.0,0.0,0.0
443
+ 0.0,0.0,0.0
444
+ 0.0,0.0,0.0
445
+ 0.0,0.0,0.0
446
+ 0.0,0.0,0.0
447
+ 0.0,0.0,0.0
448
+ 0.0,0.0,0.0
449
+ 0.0,0.0,0.0
450
+ 0.0,0.0,0.0
451
+ 0.0,0.0,0.0
452
+ 0.0,0.0,0.0
453
+ 0.0,0.0,0.0
454
+ 0.0,0.0,0.0
455
+ 0.0,0.0,0.0
456
+ 0.0,0.0,0.0
457
+ 0.0,0.0,0.0
458
+ 0.0,0.0,0.0
459
+ 0.0,0.0,0.0
460
+ 0.0,0.0,0.0
461
+ 0.0,0.0,0.0
462
+ 0.0,0.0,0.0
463
+ 0.0,0.0,0.0
464
+ 0.0,0.0,0.0
465
+ 0.0,0.0,0.0
466
+ 0.0,0.0,0.0
467
+ 0.0,0.0,0.0
468
+ 0.0,0.0,0.0
469
+ 0.0,0.0,0.0
470
+ 0.0,0.0,0.0
471
+ 0.0,0.0,0.0
472
+ 0.0,0.0,0.0
473
+ 0.0,0.0,0.0
474
+ 0.0,0.0,0.0
475
+ 0.0,0.0,0.0
476
+ 0.0,0.0,0.0
477
+ 0.0,0.0,0.0
478
+ 0.0,0.0,0.0
479
+ 0.0,0.0,0.0
480
+ 0.0,0.0,0.0
481
+ 0.0,0.0,0.0
482
+ 0.0,0.0,0.0
483
+ 0.0,0.0,0.0
484
+ 0.0,0.0,0.0
485
+ 0.0,0.0,0.0
486
+ 0.0,0.0,0.0
487
+ 0.0,0.0,0.0
488
+ 0.0,0.0,0.0
489
+ 0.0,0.0,0.0
490
+ 0.0,0.0,0.0
491
+ 0.0,0.0,0.0
492
+ 0.0,0.0,0.0
493
+ 0.0,0.0,0.0
494
+ 0.0,0.0,0.0
495
+ 0.0,0.0,0.0
496
+ 0.0,0.0,0.0
497
+ 0.0,0.0,0.0
498
+ 0.0,0.0,0.0
499
+ 0.0,0.0,0.0
500
+ 0.0,0.0,0.0
501
+ 0.0,0.0,0.0
502
+ 0.0,0.0,0.0
503
+ 0.0,0.0,0.0
504
+ 0.0,0.0,0.0
505
+ 0.0,0.0,0.0
506
+ 0.0,0.0,0.0
507
+ 0.0,0.0,0.0
508
+ 0.0,0.0,0.0
509
+ 0.0,0.0,0.0
510
+ 0.0,0.0,0.0
511
+ 0.0,0.0,0.0
512
+ 0.0,0.0,0.0
513
+ 0.0,0.0,0.0
514
+ 0.0,0.0,0.0
515
+ 0.0,0.0,0.0
516
+ 0.0,0.0,0.0
517
+ 0.0,0.0,0.0
518
+ 0.0,0.0,0.0
519
+ 0.0,0.0,0.0
520
+ 0.0,0.0,0.0
521
+ 0.0,0.0,0.0
522
+ 0.0,0.0,0.0
523
+ 0.0,0.0,0.0
524
+ 0.0,0.0,0.0
525
+ 0.0,0.0,0.0
526
+ 0.0,0.0,0.0
527
+ 0.0,0.0,0.0
528
+ 0.0,0.0,0.0
529
+ 0.0,0.0,0.0
530
+ 0.0,0.0,0.0
531
+ 0.0,0.0,0.0
532
+ 0.0,0.0,0.0
533
+ 0.0,0.0,0.0
534
+ 0.0,0.0,0.0
535
+ 0.0,0.0,0.0
536
+ 0.0,0.0,0.0
537
+ 0.0,0.0,0.0
538
+ 0.0,0.0,0.0
539
+ 0.0,0.0,0.0
540
+ 0.0,0.0,0.0
541
+ 0.0,0.0,0.0
542
+ 0.0,0.0,0.0
543
+ 0.0,0.0,0.0
544
+ 0.0,0.0,0.0
545
+ 0.0,0.0,0.0
546
+ 0.0,0.0,0.0
547
+ 0.0,0.0,0.0
548
+ 0.0,0.0,0.0
549
+ 0.0,0.0,0.0
550
+ 0.0,0.0,0.0
imgs_test/zurich_000116_000019_leftImg8bit_1.png ADDED

Git LFS Details

  • SHA256: 6a8fb9983859a41887309ff0548c0677207e3c022764e0fa22ab0366da247985
  • Pointer size: 131 Bytes
  • Size of remote file: 308 kB
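As a quick integrity check, the SHA256 listed above can be recomputed locally once the LFS object has been pulled. A minimal sketch using Python's standard hashlib; the path simply mirrors the repository layout and should be adjusted to wherever the file actually lives:

```python
import hashlib

# Path mirrors the repository layout; adjust to wherever the file was pulled.
path = "imgs_test/zurich_000116_000019_leftImg8bit_1.png"
expected = "6a8fb9983859a41887309ff0548c0677207e3c022764e0fa22ab0366da247985"

h = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks so large LFS objects need not fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == expected, "content does not match the LFS pointer"
```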
infer.ipynb ADDED
@@ -0,0 +1,390 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ " # CoMoGan\n",
8
+ "Notebook to test our model after training."
9
+ ]
10
+ },
11
+ {
12
+ "cell_type": "code",
13
+ "metadata": {
14
+ "ExecuteTime": {
15
+ "end_time": "2024-08-24T11:36:37.046598Z",
16
+ "start_time": "2024-08-24T11:36:35.526168Z"
17
+ }
18
+ },
19
+ "source": [
20
+ "import ipywidgets as widgets\n",
21
+ "import pytorch_lightning as pl\n",
22
+ "import pathlib\n",
23
+ "import torch\n",
24
+ "import yaml\n",
25
+ "import os\n",
26
+ "\n",
27
+ "from math import pi\n",
28
+ "from PIL import Image\n",
29
+ "from munch import Munch\n",
30
+ "from threading import Timer\n",
31
+ "from IPython.display import clear_output\n",
32
+ "from torchvision.transforms import ToPILImage\n",
33
+ "\n",
34
+ "from data import create_dataset\n",
35
+ "from torchvision.transforms import ToTensor\n",
36
+ "from data.base_dataset import get_transform\n",
37
+ "from networks import find_model_using_name, create_model"
38
+ ],
39
+ "outputs": [],
40
+ "execution_count": 1
41
+ },
42
+ {
43
+ "cell_type": "markdown",
44
+ "metadata": {},
45
+ "source": [
46
+ "## Load the model with a checkpoint\n",
47
+ "Choose the directory that contains the checkpoint that you want."
48
+ ]
49
+ },
50
+ {
51
+ "cell_type": "code",
52
+ "metadata": {},
53
+ "source": [
54
+ "import pathlib\n",
55
+ "\n",
56
+ "# Load names of directories inside /logs\n",
57
+ "p = pathlib.Path('./logs')\n",
58
+ "\n",
59
+ "# Use x.name to get the directory name instead of splitting the path\n",
60
+ "list_run_id = [x.name for x in p.iterdir() if x.is_dir()]\n",
61
+ "\n",
62
+ "import ipywidgets as widgets\n",
63
+ "from IPython.display import display, clear_output\n",
64
+ "import os\n",
65
+ "\n",
66
+ "w_run = widgets.Dropdown(options=list_run_id,\n",
67
+ " description='Select RUN_ID',\n",
68
+ " disabled=False,\n",
69
+ " style=dict(description_width='initial'))\n",
70
+ "\n",
71
+ "\n",
72
+ "w_check = None\n",
73
+ "root_dir = None\n",
74
+ "\n",
75
+ "def on_value_change_check(change):\n",
76
+ " global w_check, w_run, root_dir\n",
77
+ " \n",
78
+ " clear_output(wait=True)\n",
79
+ " \n",
80
+ " root_dir = os.path.join('logs', w_run.value, 'tensorboard', 'default', 'version_0')\n",
81
+ " p = pathlib.Path(root_dir + '/checkpoints')\n",
82
+ " \n",
83
+ " # Load a list of checkpoints, use the last one by default\n",
84
+ " list_checkpoint = [x.name for x in p.iterdir() if 'iter' in x.name]\n",
85
+ " list_checkpoint.sort(reverse=True, key=lambda x: int(x.split('_')[1].split('.pth')[0]))\n",
86
+ " \n",
87
+ " w_check = widgets.Dropdown(options=list_checkpoint,\n",
88
+ " description='Select checkpoint',\n",
89
+ " disabled=False,\n",
90
+ " style=dict(description_width='initial'))\n",
91
+ " display(widgets.HBox([w_run, w_check]))\n",
92
+ "\n",
93
+ "on_value_change_check({'new': w_run.value})\n",
94
+ "w_run.observe(on_value_change_check, names='value')\n"
95
+ ],
96
+ "execution_count": 2,
97
+ "outputs": []
98
+ },
99
+ {
100
+ "cell_type": "code",
101
+ "metadata": {
102
+ "ExecuteTime": {
103
+ "end_time": "2024-08-24T11:36:39.368141Z",
104
+ "start_time": "2024-08-24T11:36:37.080571Z"
105
+ }
106
+ },
107
+ "source": [
108
+ "RUN_ID = w_run.value\n",
109
+ "CHECKPOINT = w_check.value\n",
110
+ "\n",
111
+ "# Load parameters\n",
112
+ "with open(os.path.join(root_dir, 'hparams.yaml')) as cfg_file:\n",
113
+ " opt = Munch(yaml.safe_load(cfg_file))\n",
114
+ "\n",
115
+ "opt.no_flip = True\n",
116
+ "# Load parameters to the model, load the checkpoint\n",
117
+ "model = create_model(opt)\n",
118
+ "model = model.load_from_checkpoint((os.path.join(root_dir, 'checkpoints/', CHECKPOINT)))\n",
119
+ "# Transfer the model to the GPU\n",
120
+ "model.to('cpu');"
121
+ ],
122
+ "outputs": [],
123
+ "execution_count": 3
124
+ },
125
+ {
126
+ "cell_type": "markdown",
127
+ "metadata": {},
128
+ "source": [
129
+ "## Load the validation dataset"
130
+ ]
131
+ },
132
+ {
133
+ "cell_type": "code",
134
+ "metadata": {
135
+ "ExecuteTime": {
136
+ "end_time": "2024-08-24T11:36:39.383167Z",
137
+ "start_time": "2024-08-24T11:36:39.370142Z"
138
+ }
139
+ },
140
+ "source": [
141
+ "import pathlib\n",
142
+ "from PIL import Image\n",
143
+ "\n",
144
+ "# Set opt.dataroot to the imgs_test directory\n",
145
+ "opt.dataroot = 'imgs_test/'\n",
146
+ "\n",
147
+ "# Load paths of all files contained in /imgs_test\n",
148
+ "p = pathlib.Path(opt.dataroot)\n",
149
+ "dataset_paths = [str(x.relative_to(opt.dataroot)) for x in p.iterdir()]\n",
150
+ "dataset_paths.sort()\n",
151
+ "\n",
152
+ "sequence_name = {}\n",
153
+ "# Make a dict with each sequence name as a key and\n",
154
+ "# a list of paths to the images of the sequence as a value\n",
155
+ "for file in dataset_paths:\n",
156
+ " # Keep only the sequence part contained in the name of the image\n",
157
+ " strip_name = file.split('_')[0]\n",
158
+ " \n",
159
+ " if strip_name not in sequence_name:\n",
160
+ " sequence_name[strip_name] = [file]\n",
161
+ " else:\n",
162
+ " sequence_name[strip_name].append(file)\n"
163
+ ],
164
+ "outputs": [],
165
+ "execution_count": 4
166
+ },
167
+ {
168
+ "cell_type": "markdown",
169
+ "metadata": {},
170
+ "source": [
171
+ "## Select a sequence, an image and the moment of the day\n",
172
+ "Select the sequence on which you want to work before choosing which image should be used.<br>\n",
173
+ "Select the moment of the day, by choosing the angle of the sun $\\phi$ between [0,2$\\pi$],\n",
174
+ "which maps to a sun elevation ∈ [+30◦,−40◦]\n",
175
+ "<ul>\n",
176
+ "<li>0 means day</li>\n",
177
+ "<li>$\\pi$/2 means dusk</li>\n",
178
+ "<li>$\\pi$ means night</li>\n",
179
+ "<li>$\\pi$ + $\\pi$/2 means dawn</li>\n",
180
+ "<li>2$\\pi$ means day (again)</li>\n",
181
+ "</ul>"
182
+ ]
183
+ },
184
+ {
185
+ "cell_type": "code",
186
+ "metadata": {
187
+ "scrolled": true,
188
+ "ExecuteTime": {
189
+ "end_time": "2024-08-24T11:40:52.233134Z",
190
+ "start_time": "2024-08-24T11:40:45.511403Z"
191
+ }
192
+ },
193
+ "source": [
194
+ "def drop_list():\n",
195
+ " # Select the sequence on which you want to make your test\n",
196
+ " return widgets.Dropdown(options=sequence_name.keys(),\n",
197
+ " description='Select sequence',\n",
198
+ " disabled=False,\n",
199
+ " style=dict(description_width='initial'))\n",
200
+ "def slider_img():\n",
201
+ " # Select an image from the sequence\n",
202
+ " return widgets.IntSlider(min=0, max=len(sequence_name[w_seq.value]) - 1,\n",
203
+ " description='Select image')\n",
204
+ "def slider_time():\n",
205
+ " # Select time\n",
206
+ " return widgets.FloatSlider(value=0, min=0, max=pi*2, step=0.01,\n",
207
+ " description='Select time',\n",
208
+ " readout_format='.2f')\n",
209
+ "\n",
210
+ "def debounce(wait):\n",
211
+ " # Decorator that will debounce the call to a function\n",
212
+ " def decorator(fn):\n",
213
+ " timer = None\n",
214
+ " def debounced(*args, **kwargs):\n",
215
+ " nonlocal timer\n",
216
+ " def call_it():\n",
217
+ " fn(*args, **kwargs)\n",
218
+ " if timer is not None:\n",
219
+ " timer.cancel()\n",
220
+ " timer = Timer(wait, call_it)\n",
221
+ " timer.start()\n",
222
+ " return debounced\n",
223
+ " return decorator\n",
224
+ " \n",
225
+ "def inference(seq, index_img, phi, output = True):\n",
226
+ " global sequence_name, w_img_time, w_seq, opt, out\n",
227
+ " # Load the image\n",
228
+ " A_path = os.path.join(opt.dataroot, sequence_name[seq.value][index_img])\n",
229
+ " A_img = Image.open(A_path).convert('RGB')\n",
230
+ " # Apply image transformation\n",
231
+ " A = get_transform(opt, convert=False)(A_img)\n",
232
+ " # Normalization between -1 and 1\n",
233
+ " img_real = (((ToTensor()(A)) * 2) - 1).unsqueeze(0)\n",
234
+ " # Forward our image into the model with the specified ɸ\n",
235
+ " img_fake = model.forward(img_real.cpu(), phi.cpu()) \n",
236
+ " # Encapsulate the initial image beside our result\n",
237
+ " new_im = Image.new('RGB', (A_img.width * 2, A_img.height))\n",
238
+ " new_im.paste(A_img, (0, 0))\n",
239
+ " new_im.paste(ToPILImage()((img_fake[0].cpu() + 1) / 2), (A_img.width, 0))\n",
240
+ " # Clear the output and display the widgets and the images\n",
241
+ " if output:\n",
242
+ " out.clear_output(wait=True)\n",
243
+ " with out:\n",
244
+ " # Resize the output\n",
245
+ " O_img = new_im.resize((new_im.width // 2, new_im.height // 2))\n",
246
+ " display(w_img_time, O_img)\n",
247
+ " display(out)\n",
248
+ " \n",
249
+ " return new_im\n",
250
+ "\n",
251
+ "@debounce(0.2)\n",
252
+ "def on_value_change_img(change):\n",
253
+ " global w_seq, w_time\n",
254
+ " inference(w_seq, change['new'], torch.tensor(w_time.value))\n",
255
+ " \n",
256
+ "@debounce(0.2)\n",
257
+ "def on_value_change_time(change):\n",
258
+ " global w_seq, w_img\n",
259
+ " inference(w_seq, w_img.value, torch.tensor(change['new']))\n",
260
+ " \n",
261
+ "def on_value_change_seq(change):\n",
262
+ " global w_seq, w_img, w_time\n",
263
+ " w_img = slider_img()\n",
264
+ " w_time = slider_time()\n",
265
+ " inference(w_seq, w_img.value, torch.tensor(w_time.value))\n",
266
+ " \n",
267
+ "w_seq = drop_list()\n",
268
+ "w_img = slider_img()\n",
269
+ "w_time = slider_time()\n",
270
+ "w_img_time = widgets.VBox([w_seq, widgets.HBox([w_img, w_time])])\n",
271
+ "# Set the size of the output cell\n",
272
+ "out = widgets.Output(layout=widgets.Layout(width='auto', height='240px'))\n",
273
+ "\n",
274
+ "inference(w_seq, w_img.value, torch.tensor(w_time.value))\n",
275
+ "w_img.observe(on_value_change_img, names='value')\n",
276
+ "w_time.observe(on_value_change_time, names='value')\n",
277
+ "w_seq.observe(on_value_change_seq, names='value')"
278
+ ],
279
+ "outputs": [
280
+ {
281
+ "data": {
282
+ "text/plain": [
283
+ "Output(layout=Layout(height='240px', width='auto'))"
284
+ ],
285
+ "application/vnd.jupyter.widget-view+json": {
286
+ "version_major": 2,
287
+ "version_minor": 0,
288
+ "model_id": "665fdbd928f148d19f3e341ab0993ab7"
289
+ }
290
+ },
291
+ "metadata": {},
292
+ "output_type": "display_data"
293
+ }
294
+ ],
295
+ "execution_count": 7
296
+ },
297
+ {
298
+ "cell_type": "markdown",
299
+ "metadata": {},
300
+ "source": [
301
+ "## Sequence to video\n",
302
+ "The code below translates a sequence of images into a video.<br>\n",
303
+ "By default, the 'Select time' slider is on 'Variable phi' ($\\phi$), in this case the time parameter will progressively increase from 0 to 2$\\pi$.<br>\n",
304
+ "2$\\pi$ is reached at the end of the video. If you move the slider, you can select a fixed $\\phi$.<br>\n",
305
+ "Require to install two more packages <code>pip install imageio imageio-ffmpeg<code>"
306
+ ]
307
+ },
308
+ {
309
+ "cell_type": "code",
310
+ "execution_count": null,
311
+ "metadata": {},
312
+ "outputs": [],
313
+ "source": [
314
+ "from IPython.display import Video\n",
315
+ "import numpy as np\n",
316
+ "import imageio\n",
317
+ "\n",
318
+ "w_button = widgets.Button(description='Start', tooltip='Start the inference of a sequence',\n",
319
+ " icon='play')\n",
320
+ "\n",
321
+ "phi_opt = ['Variable phi'] + [str(round(f, 2)) for f in np.arange(0, pi*2, 0.01)]\n",
322
+ "w_vid_time = widgets.SelectionSlider(options=phi_opt, value=phi_opt[0], description='Select time : ')\n",
323
+ "\n",
324
+ "w_vid_seq = drop_list()\n",
325
+ "\n",
326
+ "w_button_seq = widgets.VBox([widgets.HBox([w_vid_seq, w_vid_time]), w_button])\n",
327
+ "display(w_button_seq)\n",
328
+ "\n",
329
+ "def get_video(bt):\n",
330
+ " global sequence_name, w_vid_seq, w_vid_time, w_button_seq\n",
331
+ " \n",
332
+ " clear_output(wait=True)\n",
333
+ " display(w_button_seq)\n",
334
+ " seq_size = len(sequence_name[w_vid_seq.value])\n",
335
+ " # Display progress bar\n",
336
+ " w_prog = widgets.IntProgress(value=0, min=0, max=seq_size, description='Loading:')\n",
337
+ " display(w_prog)\n",
338
+ " # Create a videos directory to save our video\n",
339
+ " save_name = str(pathlib.Path('.').absolute()) + '/videos/'\n",
340
+ " os.makedirs(save_name, exist_ok=True)\n",
341
+ " # If variable phi\n",
342
+ " if w_vid_time.value == 'Variable phi':\n",
343
+ " # Write our video in the project folder\n",
344
+ " save_name += 'comogan_{}_phi_{}.mp4'.format(w_vid_seq.value.replace('segment-', 'seg_'),\n",
345
+ " 'variable')\n",
346
+ " else:\n",
347
+ " save_name += 'comogan_{}_phi_{}.mp4'.format(w_vid_seq.value.replace('segment-', 'seg_'),\n",
348
+ " w_vid_time.value.replace('.', '_'))\n",
349
+ " writer = imageio.get_writer(save_name, fps=10)\n",
350
+ " # Loop on the images contained in the sequence\n",
351
+ " for i in range(seq_size):\n",
352
+ " if w_vid_time.value == 'Variable phi':\n",
353
+ " # Inference of the i image in sequence_name[key]\n",
354
+ " phi_var = 2*pi/seq_size * i\n",
355
+ " my_img = inference(w_vid_seq, i, torch.tensor(phi_var), False)\n",
356
+ " else:\n",
357
+ " my_img = inference(w_vid_seq, i, torch.tensor(float(w_vid_time.value)), False)\n",
358
+ " writer.append_data(np.array(my_img))\n",
359
+ " # Progress bar\n",
360
+ " w_prog.value += 1\n",
361
+ " \n",
362
+ " writer.close()\n",
363
+ " display(Video(save_name, embed=True))\n",
364
+ "\n",
365
+ "w_button.on_click(get_video)"
366
+ ]
367
+ }
368
+ ],
369
+ "metadata": {
370
+ "kernelspec": {
371
+ "display_name": "Python 3",
372
+ "language": "python",
373
+ "name": "python3"
374
+ },
375
+ "language_info": {
376
+ "codemirror_mode": {
377
+ "name": "ipython",
378
+ "version": 3
379
+ },
380
+ "file_extension": ".py",
381
+ "mimetype": "text/x-python",
382
+ "name": "python",
383
+ "nbconvert_exporter": "python",
384
+ "pygments_lexer": "ipython3",
385
+ "version": "3.7.5"
386
+ }
387
+ },
388
+ "nbformat": 4,
389
+ "nbformat_minor": 2
390
+ }
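For reference, the notebook drives the generator with a scalar phi in [0, 2π] that its markdown cell maps to a sun elevation in [+30°, −40°]. A minimal sketch of one such cyclic mapping, assuming a plain cosine interpolation between the day (+30°) and night (−40°) endpoints; the curve actually used in training may differ:

```python
from math import cos, pi

def sun_elevation(phi: float) -> float:
    """Map phi in [0, 2*pi] to a sun elevation in degrees.

    Assumed cosine interpolation: phi = 0 (day) -> +30 deg,
    phi = pi (night) -> -40 deg, and phi = 2*pi wraps back to +30 deg.
    """
    return -5.0 + 35.0 * cos(phi)

for phi in (0.0, pi / 2, pi, 3 * pi / 2, 2 * pi):
    print(f"phi={phi:4.2f} -> elevation={sun_elevation(phi):+5.1f} deg")
```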
logs/pretrained/tensorboard/default/version_0/checkpoints/iter_000000.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0986dd0ac3cf8ff8147304359fba9fa198155c3ba08875aef37ee3da15d1b841
3
+ size 741391945
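The three lines above are the complete on-disk content of a Git LFS pointer: the spec version, the object's sha256 OID, and its size in bytes. A minimal sketch of parsing that format (the helper name is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict.

    Expects lines of the form 'key value', e.g.
    'oid sha256:0986dd0a...' and 'size 741391945'.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # Split 'sha256:<hex>' into algorithm and digest.
    algo, _, digest = fields["oid"].partition(":")
    fields["oid"] = {"algo": algo, "digest": digest}
    fields["size"] = int(fields["size"])
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:0986dd0ac3cf8ff8147304359fba9fa198155c3ba08875aef37ee3da15d1b841
size 741391945"""
print(parse_lfs_pointer(pointer))
```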
logs/pretrained/tensorboard/default/version_0/hparams.yaml ADDED
@@ -0,0 +1,105 @@
1
+ batch_size: 1
2
+ beta1: 0.5
3
+ dataroot: /datasets_local/datasets_fpizzati/waymo/train
4
+ dataset_mode: day2timelapse
5
+ decay_iters_step: 100000
6
+ decay_step_gamma: 0.5
7
+ disc_activ: lrelu
8
+ disc_dim: 64
9
+ disc_n_layer: 4
10
+ disc_norm: none
11
+ disc_pad_type: reflect
12
+ display_freq: 10000
13
+ gan_mode: lsgan
14
+ gen_activ: relu
15
+ gen_dim: 64
16
+ gen_pad_type: reflect
17
+ gpu_ids:
18
+ - 4
19
+ init_gain: 0.02
20
+ init_type_disc: normal
21
+ init_type_gen: kaiming
22
+ input_nc: 3
23
+ lambda_Phinet_A: 1
24
+ lambda_compare: 10
25
+ lambda_gan: 1
26
+ lambda_idt: 1
27
+ lambda_physics: 10
28
+ lambda_physics_compare: 1
29
+ lambda_rec_content: 1
30
+ lambda_rec_cycle: 10
31
+ lambda_rec_image: 10
32
+ lambda_rec_style: 1
33
+ lambda_vgg: 0.1
34
+ lr: 0.0001
35
+ lr_policy: step
36
+ max_dataset_size: .inf
37
+ mlp_dim: 256
38
+ model: comomunit
39
+ n_downsample: 2
40
+ n_res: 4
41
+ no_flip: false
42
+ num_scales: 3
43
+ num_threads: 4
44
+ opt: !munch.Munch
45
+ batch_size: 1
46
+ beta1: 0.5
47
+ dataroot: /datasets_local/datasets_fpizzati/waymo/train
48
+ dataset_mode: day2timelapse
49
+ decay_iters_step: 100000
50
+ decay_step_gamma: 0.5
51
+ disc_activ: lrelu
52
+ disc_dim: 64
53
+ disc_n_layer: 4
54
+ disc_norm: none
55
+ disc_pad_type: reflect
56
+ display_freq: 10000
57
+ gan_mode: lsgan
58
+ gen_activ: relu
59
+ gen_dim: 64
60
+ gen_pad_type: reflect
61
+ gpu_ids:
62
+ - 4
63
+ init_gain: 0.02
64
+ init_type_disc: normal
65
+ init_type_gen: kaiming
66
+ input_nc: 3
67
+ lambda_Phinet_A: 1
68
+ lambda_compare: 10
69
+ lambda_gan: 1
70
+ lambda_idt: 1
71
+ lambda_physics: 10
72
+ lambda_physics_compare: 1
73
+ lambda_rec_content: 1
74
+ lambda_rec_cycle: 10
75
+ lambda_rec_image: 10
76
+ lambda_rec_style: 1
77
+ lambda_vgg: 0.1
78
+ lr: 0.0001
79
+ lr_policy: step
80
+ max_dataset_size: .inf
81
+ mlp_dim: 256
82
+ model: comomunit
83
+ n_downsample: 2
84
+ n_res: 4
85
+ no_flip: false
86
+ num_scales: 3
87
+ num_threads: 4
88
+ output_nc: 3
89
+ preprocess: none
90
+ print_freq: 10
91
+ resblocks_cont: 1
92
+ save_epoch_freq: 5
93
+ save_latest_freq: 35000
94
+ serial_batches: false
95
+ style_dim: 8
96
+ total_iterations: 30000000
97
+ output_nc: 3
98
+ preprocess: none
99
+ print_freq: 10
100
+ resblocks_cont: 1
101
+ save_epoch_freq: 5
102
+ save_latest_freq: 35000
103
+ serial_batches: false
104
+ style_dim: 8
105
+ total_iterations: 30000000
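This is the file infer.ipynb reads with yaml.safe_load and wraps in a Munch so that every option is attribute-accessible (opt.model, opt.style_dim, ...). Note the nested `opt: !munch.Munch` block duplicates the top-level keys. A minimal loading sketch; it assumes that importing munch registers the `!munch.Munch` YAML tag with PyYAML, which matches how the notebook itself uses safe_load on this file:

```python
import yaml
from munch import Munch  # importing munch should register the !munch.Munch tag

with open("logs/pretrained/tensorboard/default/version_0/hparams.yaml") as f:
    opt = Munch(yaml.safe_load(f))

print(opt.model)      # comomunit
print(opt.style_dim)  # 8
print(opt.opt.lr)     # 0.0001 -- the nested duplicate of the same options
```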
networks/__init__.py ADDED
@@ -0,0 +1,52 @@
1
+ """
2
+ This enables dynamic loading of models, similar to how datasets are loaded.
3
+ """
4
+
5
+ import importlib
6
+ from networks.base_model import BaseModel
7
+
8
+
9
+ def find_model_using_name(model_name):
10
+ """Import the module "networks/[model_name]_model.py".
11
+
12
+ In the file, the class called [ModelName]Model() will
13
+ be instantiated. It has to be a subclass of BaseModel,
14
+ and the name match is case-insensitive.
15
+ """
16
+ model_filename = "networks." + model_name + "_model"
17
+ modellib = importlib.import_module(model_filename)
18
+ model = None
19
+ target_model_name = model_name.replace('_', '') + 'model'
20
+ for name, cls in modellib.__dict__.items():
21
+ if name.lower() == target_model_name.lower() \
22
+ and issubclass(cls, BaseModel):
23
+ model = cls
24
+
25
+ if model is None:
26
+ print("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name))
27
+ exit(1)  # exit with a failure status: the requested model class was not found
28
+
29
+ return model
30
+
31
+
32
+ def get_model_options(model_name):
33
+ model_filename = "networks." + model_name + "_model"
34
+ modellib = importlib.import_module(model_filename)
35
+ for name, cls in modellib.__dict__.items():
36
+ if name.lower() == 'modeloptions':
37
+ return cls
38
+ return None
39
+
40
+ def create_model(opt):
41
+ """Create a model given the option.
42
+
43
+ This function wraps the model class found by find_model_using_name.
44
+ This is the main interface between this package and 'train.py'/'test.py'
45
+
46
+ Example:
47
+ >>> from networks import create_model
48
+ >>> model = create_model(opt)
49
+ """
50
+ model = find_model_using_name(opt.model)
51
+ instance = model(opt)
52
+ return instance
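Putting the two helpers together: find_model_using_name('comomunit') imports networks/comomunit_model.py and returns the class whose lowercased name equals 'comomunitmodel', and create_model instantiates it. A minimal usage sketch, assuming a fully populated opt such as the Munch loaded from hparams.yaml above:

```python
import yaml
from munch import Munch

from networks import find_model_using_name, create_model

# The model constructor expects the full hyperparameter set, so load hparams.yaml.
with open("logs/pretrained/tensorboard/default/version_0/hparams.yaml") as f:
    opt = Munch(yaml.safe_load(f))

model_cls = find_model_using_name(opt.model)  # the CoMo-MUNIT model class
model = create_model(opt)                     # equivalent to model_cls(opt)
```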
networks/__pycache__/__init__.cpython-37.pyc ADDED
Binary file (1.73 kB).
 
networks/__pycache__/base_model.cpython-37.pyc ADDED
Binary file (4.66 kB).
 
networks/__pycache__/comomunit_model.cpython-37.pyc ADDED
Binary file (11.1 kB).
 
networks/backbones/__init__.py ADDED
@@ -0,0 +1 @@
 
 
1
+ from .functions import *
networks/backbones/__pycache__/__init__.cpython-37.pyc ADDED
Binary file (158 Bytes).
 
networks/backbones/__pycache__/comomunit.cpython-37.pyc ADDED
Binary file (23.3 kB).
 
networks/backbones/__pycache__/functions.cpython-37.pyc ADDED
Binary file (3.93 kB).
 
networks/backbones/comomunit.py ADDED
@@ -0,0 +1,706 @@
1
+ """
2
+ comomunit.py
3
+ This file defines all architectural components of CoMo-MUNIT. The *logic* is not defined here, but in the *_model.py files.
4
+ Most of the code is copied from https://github.com/NVlabs/MUNIT
5
+ There are some additional functions for compatibility with the CycleGAN codebase (https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)
6
+ """
7
+
8
+ import torch
9
+ import torch.nn as nn
10
+ from torch.nn import init
11
+ import functools
12
+ from torch.optim import lr_scheduler
13
+ import torch.nn.functional as F
14
+ from .functions import init_net, init_weights, get_scheduler
15
+
16
+
17
+ ########################################################################################################################
18
+ # MUNIT architecture
19
+ ########################################################################################################################
20
+
21
+ ## Functions to get generator / discriminator / DRB
22
+ def define_G_munit(input_nc, output_nc, gen_dim, style_dim, n_downsample, n_res,
23
+ pad_type, mlp_dim, activ='relu', init_type = 'kaiming', init_gain=0.02, gpu_ids=[]):
24
+ gen = AdaINGen(input_nc, output_nc, gen_dim, style_dim, n_downsample, n_res, activ, pad_type, mlp_dim)
25
+ return init_net(gen, init_type=init_type, init_gain = init_gain, gpu_ids = gpu_ids)
26
+
27
+ def define_D_munit(input_nc, disc_dim, norm, activ, n_layer, gan_type, num_scales, pad_type,
28
+ init_type = 'kaiming', init_gain = 0.02, gpu_ids = [], output_channels = 1, final_function = None):
29
+ disc = MsImageDis(input_nc, n_layer, gan_type, disc_dim, norm, activ, num_scales, pad_type, output_channels, final_function = final_function)
30
+ return init_net(disc, init_type=init_type, init_gain = init_gain, gpu_ids = gpu_ids)
31
+
32
+ def define_DRB_munit(resblocks, dim, norm, activation, pad_type,
33
+ init_type = 'kaiming', init_gain = 0.02, gpu_ids = []):
34
+ demux = DRB(resblocks, dim, norm, activation, pad_type)
35
+ return init_net(demux, init_type = init_type, init_gain = init_gain, gpu_ids = gpu_ids)
36
+
37
+ # This class has been strongly modified from MUNIT default version. We split the default MUNIT decoder
38
+ # in AdaINBlock + DecoderNoAdain because the DRB must be placed between the two. encode/assign_adain/decode
39
+ # are called by the network logic following CoMo-MUNIT implementation.
40
+ class AdaINGen(nn.Module):
41
+ # AdaIN auto-encoder architecture
42
+ def __init__(self, input_dim, output_dim, dim, style_dim, n_downsample, n_res, activ, pad_type, mlp_dim):
43
+ super(AdaINGen, self).__init__()
44
+
45
+ # style encoder
46
+ self.enc_style = StyleEncoder(4, input_dim, dim, style_dim, norm='none', activ=activ, pad_type=pad_type)
47
+
48
+ # content encoder
49
+ self.enc_content = ContentEncoder(n_downsample, n_res, input_dim, dim, 'instance', activ, pad_type=pad_type)
50
+ self.adainblock = AdaINBlock(n_downsample, n_res, self.enc_content.output_dim, output_dim, res_norm='adain', activ=activ, pad_type=pad_type)
51
+ self.dec = DecoderNoAdain(n_downsample, n_res, self.enc_content.output_dim, output_dim, res_norm='adain', activ=activ, pad_type=pad_type)
52
+ # MLP to generate AdaIN parameters
53
+ self.mlp = MLP(style_dim, self.get_num_adain_params(self.adainblock), mlp_dim, 3, norm='none', activ=activ)
54
+
55
+ def forward(self, images):
56
+ # reconstruct an image
57
+ content, style_fake = self.encode(images)
58
+ images_recon = self.decode(content, style_fake)
59
+ return images_recon
60
+
61
+ def encode(self, images):
62
+ # encode an image to its content and style codes
63
+ style_fake = self.enc_style(images)
64
+ content = self.enc_content(images)
65
+ return content, style_fake
66
+
67
+ def assign_adain(self, content, style):
68
+ # decode content and style codes to an image
69
+ adain_params = self.mlp(style)
70
+ self.assign_adain_params(adain_params, self.adainblock)
71
+ features = self.adainblock(content)
72
+ return features
73
+
74
+ def decode(self, features):
75
+ return self.dec(features)
76
+
77
+ def assign_adain_params(self, adain_params, model):
78
+ # assign the adain_params to the AdaIN layers in model
79
+ for m in model.modules():
80
+ if m.__class__.__name__ == "AdaptiveInstanceNorm2d":
81
+ mean = adain_params[:, :m.num_features]
82
+ std = adain_params[:, m.num_features:2*m.num_features]
83
+ m.bias = mean.contiguous().view(-1)
84
+ m.weight = std.contiguous().view(-1)
85
+ if adain_params.size(1) > 2*m.num_features:
86
+ adain_params = adain_params[:, 2*m.num_features:]
87
+
88
+ def get_num_adain_params(self, model):
89
+ # return the number of AdaIN parameters needed by the model
90
+ num_adain_params = 0
91
+ for m in model.modules():
92
+ if m.__class__.__name__ == "AdaptiveInstanceNorm2d":
93
+ num_adain_params += 2*m.num_features
94
+ return num_adain_params
95
+
96
+ # This is the FIN layer for cyclic encoding. It's our contribution and it does not exist in MUNIT.
97
+ class FIN2dCyclic(nn.Module):
98
+ def __init__(self, dim):
99
+ super().__init__()
100
+ self.instance_norm = nn.InstanceNorm2d(dim, affine=False)
101
+ self.a_gamma = nn.Parameter(torch.zeros(dim))
102
+ self.b_gamma = nn.Parameter(torch.ones(dim))
103
+ self.a_beta = nn.Parameter(torch.zeros(dim))
104
+ self.b_beta = nn.Parameter(torch.zeros(dim))
105
+
106
+ def forward(self, x, cos, sin):
107
+ # The only way to encode something cyclic is to map gamma and beta to a point (x, y) on an ellipse.
108
+ # We learn their cyclic behaviour by associating cos(continuity) with gamma and sin(continuity) with beta.
109
+ # Sin and cos are sampled between -1 and 1, so each (cos, sin) pair is associated with one point on the ellipse.
110
+ gamma = self.a_gamma * cos.unsqueeze(-1) + self.b_gamma
111
+ beta = self.a_beta * sin.unsqueeze(-1) + self.b_beta
112
+
113
+ return self.instance_norm(x) * gamma.unsqueeze(-1).unsqueeze(-1) + beta.unsqueeze(-1).unsqueeze(-1)
114
+
115
+ # This is the DRB implementation, and it does not exist in MUNIT.
116
+ class DRB(nn.Module):
117
+ def __init__(self, n_resblocks, dim, norm, activation, pad_type):
118
+ super().__init__()
119
+ self.common_features = []
120
+ self.physical_features = []
121
+ self.real_features = []
122
+ self.continuous_features = nn.ModuleList()
123
+
124
+ for i in range(0, n_resblocks):
125
+ self.common_features += [ResBlock(dim, norm=norm, activation=activation, pad_type=pad_type)]
126
+ for i in range(0, n_resblocks):
127
+ self.physical_features += [ResBlock(dim, norm=norm, activation=activation, pad_type=pad_type)]
128
+ for i in range(0, n_resblocks):
129
+ self.real_features += [ResBlock(dim, norm=norm, activation=activation, pad_type=pad_type)]
130
+ for i in range(0, n_resblocks):
131
+ self.continuous_features.append(ResBlockContinuous(dim, norm='fin', activation=activation, pad_type=pad_type))
132
+
133
+ self.common_features = nn.Sequential(*self.common_features)
134
+ self.physical_features = nn.Sequential(*self.physical_features)
135
+ self.real_features = nn.Sequential(*self.real_features)
136
+
137
+ def forward(self, input_features, continuity_cos, continuity_sin):
138
+ common_features = self.common_features(input_features)
139
+ physical_features = self.physical_features(input_features)
140
+ real_features = self.real_features(input_features)
141
+ continuous_features = input_features
142
+ for layer in self.continuous_features:
143
+ continuous_features = layer(continuous_features, continuity_cos, continuity_sin)
144
+
145
+ physical_output_features = common_features + physical_features + continuous_features + input_features
146
+ real_output_features = common_features + real_features + continuous_features + input_features
147
+
148
+ return real_output_features, physical_output_features
149
+
150
+ # Again, the default decoder is with adain, but we separated the two.
151
+ class DecoderNoAdain(nn.Module):
152
+ def __init__(self, n_upsample, n_res, dim, output_dim, res_norm='adain', activ='relu', pad_type='zero'):
153
+ super(DecoderNoAdain, self).__init__()
154
+
155
+ self.model = []
156
+ # upsampling blocks
157
+ for i in range(n_upsample):
158
+ self.model += [nn.Upsample(scale_factor=2),
159
+ Conv2dBlock(dim, dim // 2, 5, 1, 2, norm='layer', activation=activ, pad_type=pad_type)]
160
+ dim //= 2
161
+ # use reflection padding in the last conv layer
162
+ self.model += [Conv2dBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type=pad_type)]
163
+ self.model = nn.Sequential(*self.model)
164
+
165
+ def forward(self, x):
166
+ return self.model(x)
167
+
168
+ # This is a residual block with FIN layers inserted.
169
+ class ResBlockContinuous(nn.Module):
170
+ def __init__(self, dim, norm='instance', activation='relu', pad_type='zero'):
171
+ super(ResBlockContinuous, self).__init__()
172
+
173
+ self.model = nn.ModuleList()
174
+ self.model.append(Conv2dBlockContinuous(dim ,dim, 3, 1, 1, norm='fin', activation=activation, pad_type=pad_type))
175
+ self.model.append(Conv2dBlockContinuous(dim ,dim, 3, 1, 1, norm='fin', activation='none', pad_type=pad_type))
176
+
177
+ def forward(self, x, cos_phi, sin_phi):
178
+ residual = x
179
+ for layer in self.model:
180
+ x = layer(x, cos_phi, sin_phi)
181
+
182
+ x += residual
183
+ return x
184
+
185
+ # This is a convolutional block+nonlinear+norm with support for FIN layers as normalization strategy.
186
+ class Conv2dBlockContinuous(nn.Module):
187
+ def __init__(self, input_dim ,output_dim, kernel_size, stride,
188
+ padding=0, norm='none', activation='relu', pad_type='zero'):
189
+ super(Conv2dBlockContinuous, self).__init__()
190
+ self.use_bias = True
191
+ # initialize padding
192
+ if pad_type == 'reflect':
193
+ self.pad = nn.ReflectionPad2d(padding)
194
+ elif pad_type == 'replicate':
195
+ self.pad = nn.ReplicationPad2d(padding)
196
+ elif pad_type == 'zero':
197
+ self.pad = nn.ZeroPad2d(padding)
198
+ else:
199
+ assert 0, "Unsupported padding type: {}".format(pad_type)
200
+
201
+ # initialize normalization
202
+ norm_dim = output_dim
203
+ if norm == 'batch':
204
+ self.norm = nn.BatchNorm2d(norm_dim)
205
+ elif norm == 'instance':
206
+ #self.norm = nn.InstanceNorm2d(norm_dim, track_running_stats=True)
207
+ self.norm = nn.InstanceNorm2d(norm_dim)
208
+ elif norm == 'layer':
209
+ self.norm = LayerNorm(norm_dim)
210
+ elif norm == 'adain':
211
+ self.norm = AdaptiveInstanceNorm2d(norm_dim)
212
+ elif norm == 'fin':
213
+ self.norm = FIN2dCyclic(norm_dim)
214
+ elif norm == 'none' or norm == 'spectral':
215
+ self.norm = None
216
+ else:
217
+ assert 0, "Unsupported normalization: {}".format(norm)
218
+
219
+ # initialize activation
220
+ if activation == 'relu':
221
+ self.activation = nn.ReLU(inplace=True)
222
+ elif activation == 'lrelu':
223
+ self.activation = nn.LeakyReLU(0.2, inplace=True)
224
+ elif activation == 'prelu':
225
+ self.activation = nn.PReLU()
226
+ elif activation == 'selu':
227
+ self.activation = nn.SELU(inplace=True)
228
+ elif activation == 'tanh':
229
+ self.activation = nn.Tanh()
230
+ elif activation == 'none':
231
+ self.activation = None
232
+ else:
233
+ assert 0, "Unsupported activation: {}".format(activation)
234
+
235
+ # initialize convolution
236
+ if norm == 'spectral':
237
+ self.conv = SpectralNorm(nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias))
238
+ else:
239
+ self.conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias)
240
+
241
+ def forward(self, x, continuity_cos, continuity_sin):
242
+ x = self.conv(self.pad(x))
243
+ if self.norm:
244
+ x = self.norm(x, continuity_cos, continuity_sin)
245
+ if self.activation:
246
+ x = self.activation(x)
247
+ return x
248
+
249
+
250
+
251
+ ##################################################################################
252
+ # All below there are MUNIT default blocks.
253
+ ##################################################################################
254
+ class ResBlocks(nn.Module):
255
+ def __init__(self, num_blocks, dim, norm='instance', activation='relu', pad_type='zero'):
256
+ super(ResBlocks, self).__init__()
257
+ self.model = []
258
+ for i in range(num_blocks):
259
+ self.model += [ResBlock(dim, norm=norm, activation=activation, pad_type=pad_type)]
260
+ self.model = nn.Sequential(*self.model)
261
+
262
+ def forward(self, x):
263
+ return self.model(x)
264
+
265
+ class MLP(nn.Module):
266
+ def __init__(self, input_dim, output_dim, dim, n_blk, norm='none', activ='relu'):
267
+
268
+ super(MLP, self).__init__()
269
+ self.model = []
270
+ self.model += [LinearBlock(input_dim, dim, norm=norm, activation=activ)]
271
+ for i in range(n_blk - 2):
272
+ self.model += [LinearBlock(dim, dim, norm=norm, activation=activ)]
273
+ self.model += [LinearBlock(dim, output_dim, norm='none', activation='none')] # no output activations
274
+ self.model = nn.Sequential(*self.model)
275
+
276
+ def forward(self, x):
277
+ return self.model(x.view(x.size(0), -1))
278
+
279
+
280
+
281
+ class ResBlock(nn.Module):
282
+ def __init__(self, dim, norm='instance', activation='relu', pad_type='zero'):
283
+ super(ResBlock, self).__init__()
284
+
285
+ model = []
286
+ model += [Conv2dBlock(dim ,dim, 3, 1, 1, norm=norm, activation=activation, pad_type=pad_type)]
287
+ model += [Conv2dBlock(dim ,dim, 3, 1, 1, norm=norm, activation='none', pad_type=pad_type)]
288
+ self.model = nn.Sequential(*model)
289
+
290
+ def forward(self, x):
291
+ residual = x
292
+ out = self.model(x)
293
+ out += residual
294
+ return out
295
+
296
+ class Conv2dBlock(nn.Module):
297
+ def __init__(self, input_dim ,output_dim, kernel_size, stride,
298
+ padding=0, norm='none', activation='relu', pad_type='zero'):
299
+ super(Conv2dBlock, self).__init__()
300
+ self.use_bias = True
301
+ # initialize padding
302
+ if pad_type == 'reflect':
303
+ self.pad = nn.ReflectionPad2d(padding)
304
+ elif pad_type == 'replicate':
305
+ self.pad = nn.ReplicationPad2d(padding)
306
+ elif pad_type == 'zero':
307
+ self.pad = nn.ZeroPad2d(padding)
308
+ else:
309
+ assert 0, "Unsupported padding type: {}".format(pad_type)
310
+
311
+ # initialize normalization
312
+ norm_dim = output_dim
313
+ if norm == 'batch':
314
+ self.norm = nn.BatchNorm2d(norm_dim)
315
+ elif norm == 'instance':
316
+ #self.norm = nn.InstanceNorm2d(norm_dim, track_running_stats=True)
317
+ self.norm = nn.InstanceNorm2d(norm_dim)
318
+ elif norm == 'layer':
319
+ self.norm = LayerNorm(norm_dim)
320
+ elif norm == 'adain':
321
+ self.norm = AdaptiveInstanceNorm2d(norm_dim)
322
+ elif norm == 'none' or norm == 'spectral':
323
+ self.norm = None
324
+ else:
325
+ assert 0, "Unsupported normalization: {}".format(norm)
326
+
327
+ # initialize activation
328
+ if activation == 'relu':
329
+ self.activation = nn.ReLU(inplace=True)
330
+ elif activation == 'lrelu':
331
+ self.activation = nn.LeakyReLU(0.2, inplace=True)
332
+ elif activation == 'prelu':
333
+ self.activation = nn.PReLU()
334
+ elif activation == 'selu':
335
+ self.activation = nn.SELU(inplace=True)
336
+ elif activation == 'tanh':
337
+ self.activation = nn.Tanh()
338
+ elif activation == 'none':
339
+ self.activation = None
340
+ else:
341
+ assert 0, "Unsupported activation: {}".format(activation)
342
+
343
+ # initialize convolution
344
+ if norm == 'spectral':
345
+ self.conv = SpectralNorm(nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias))
346
+ else:
347
+ self.conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride, bias=self.use_bias)
348
+
349
+ def forward(self, x):
350
+ x = self.conv(self.pad(x))
351
+ if self.norm:
352
+ x = self.norm(x)
353
+ if self.activation:
354
+ x = self.activation(x)
355
+ return x
356
+
357
+
358
+ class LinearBlock(nn.Module):
359
+ def __init__(self, input_dim, output_dim, norm='none', activation='relu'):
360
+ super(LinearBlock, self).__init__()
361
+ use_bias = True
362
+ # initialize fully connected layer
363
+ if norm == 'spectral':
364
+ self.fc = SpectralNorm(nn.Linear(input_dim, output_dim, bias=use_bias))
365
+ else:
366
+ self.fc = nn.Linear(input_dim, output_dim, bias=use_bias)
367
+
368
+ # initialize normalization
369
+ norm_dim = output_dim
370
+ if norm == 'batch':
371
+ self.norm = nn.BatchNorm1d(norm_dim)
372
+ elif norm == 'instance':
373
+ self.norm = nn.InstanceNorm1d(norm_dim)
374
+ elif norm == 'layer':
375
+ self.norm = LayerNorm(norm_dim)
376
+ elif norm == 'none' or norm == 'spectral':
377
+ self.norm = None
378
+ else:
379
+ assert 0, "Unsupported normalization: {}".format(norm)
380
+
381
+ # initialize activation
382
+ if activation == 'relu':
383
+ self.activation = nn.ReLU(inplace=True)
384
+ elif activation == 'lrelu':
385
+ self.activation = nn.LeakyReLU(0.2, inplace=True)
386
+ elif activation == 'prelu':
387
+ self.activation = nn.PReLU()
388
+ elif activation == 'selu':
389
+ self.activation = nn.SELU(inplace=True)
390
+ elif activation == 'tanh':
391
+ self.activation = nn.Tanh()
392
+ elif activation == 'none':
393
+ self.activation = None
394
+ else:
395
+ assert 0, "Unsupported activation: {}".format(activation)
396
+
397
+ def forward(self, x):
398
+ out = self.fc(x)
399
+ if self.norm:
400
+ out = self.norm(out)
401
+ if self.activation:
402
+ out = self.activation(out)
403
+ return out
404
+
405
+
406
+ class Vgg16(nn.Module):
407
+ def __init__(self):
408
+ super(Vgg16, self).__init__()
409
+ self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
410
+ self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
411
+
412
+ self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
413
+ self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
414
+
415
+ self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1)
416
+ self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1)
417
+ self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1)
418
+
419
+ self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1)
420
+ self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
421
+ self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
422
+
423
+ self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
424
+ self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
425
+ self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1)
426
+
427
+ def forward(self, X):
428
+ h = F.relu(self.conv1_1(X), inplace=True)
429
+ h = F.relu(self.conv1_2(h), inplace=True)
430
+ # relu1_2 = h
431
+ h = F.max_pool2d(h, kernel_size=2, stride=2)
432
+
433
+ h = F.relu(self.conv2_1(h), inplace=True)
434
+ h = F.relu(self.conv2_2(h), inplace=True)
435
+ # relu2_2 = h
436
+ h = F.max_pool2d(h, kernel_size=2, stride=2)
437
+
438
+ h = F.relu(self.conv3_1(h), inplace=True)
439
+ h = F.relu(self.conv3_2(h), inplace=True)
440
+ h = F.relu(self.conv3_3(h), inplace=True)
441
+ # relu3_3 = h
442
+ h = F.max_pool2d(h, kernel_size=2, stride=2)
443
+
444
+ h = F.relu(self.conv4_1(h), inplace=True)
445
+ h = F.relu(self.conv4_2(h), inplace=True)
446
+ h = F.relu(self.conv4_3(h), inplace=True)
447
+ # relu4_3 = h
448
+
449
+ h = F.relu(self.conv5_1(h), inplace=True)
450
+ h = F.relu(self.conv5_2(h), inplace=True)
451
+ h = F.relu(self.conv5_3(h), inplace=True)
452
+ relu5_3 = h
453
+
454
+ return relu5_3
455
+ # return [relu1_2, relu2_2, relu3_3, relu4_3]
456
+
457
+
458
+ class AdaptiveInstanceNorm2d(nn.Module):
459
+ def __init__(self, num_features, eps=1e-5, momentum=0.1):
460
+ super(AdaptiveInstanceNorm2d, self).__init__()
461
+ self.num_features = num_features
462
+ self.eps = eps
463
+ self.momentum = momentum
464
+ # weight and bias are dynamically assigned
465
+ self.weight = None
466
+ self.bias = None
467
+ # just dummy buffers, not used
468
+ self.register_buffer('running_mean', torch.zeros(num_features))
469
+ self.register_buffer('running_var', torch.ones(num_features))
470
+
471
+ def forward(self, x):
472
+ assert self.weight is not None and self.bias is not None, "Please assign weight and bias before calling AdaIN!"
473
+ b, c = x.size(0), x.size(1)
474
+
475
+         if self.weight.type() == 'torch.cuda.HalfTensor':
+             running_mean = self.running_mean.repeat(b).to(torch.float16)
+             running_var = self.running_var.repeat(b).to(torch.float16)
+         else:
+             running_mean = self.running_mean.repeat(b)
+             running_var = self.running_var.repeat(b)
+
+         # Apply instance norm
+         x_reshaped = x.contiguous().view(1, b * c, *x.size()[2:])
+
+         out = F.batch_norm(
+             x_reshaped, running_mean, running_var, self.weight, self.bias,
+             True, self.momentum, self.eps)
+
+         return out.view(b, c, *x.size()[2:])
+
+     def __repr__(self):
+         return self.__class__.__name__ + '(' + str(self.num_features) + ')'
+
+
+ class LayerNorm(nn.Module):
+     def __init__(self, num_features, eps=1e-5, affine=True):
+         super(LayerNorm, self).__init__()
+         self.num_features = num_features
+         self.affine = affine
+         self.eps = eps
+
+         if self.affine:
+             self.gamma = nn.Parameter(torch.Tensor(num_features).uniform_())
+             self.beta = nn.Parameter(torch.zeros(num_features))
+
+     def forward(self, x):
+         shape = [-1] + [1] * (x.dim() - 1)
+         # print(x.size())
+         if x.size(0) == 1:
+             # These two lines run much faster in pytorch 0.4 than the two lines listed below.
+             mean = x.view(-1).mean().view(*shape)
+             std = x.view(-1).std().view(*shape)
+         else:
+             mean = x.view(x.size(0), -1).mean(1).view(*shape)
+             std = x.view(x.size(0), -1).std(1).view(*shape)
+
+         x = (x - mean) / (std + self.eps)
+
+         if self.affine:
+             shape = [1, -1] + [1] * (x.dim() - 2)
+             x = x * self.gamma.view(*shape) + self.beta.view(*shape)
+         return x
+
+ def l2normalize(v, eps=1e-12):
+     return v / (v.norm() + eps)
+
+
+ class SpectralNorm(nn.Module):
+     """
+     Based on the paper "Spectral Normalization for Generative Adversarial Networks" by Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
+     and the Pytorch implementation https://github.com/christiancosgrove/pytorch-spectral-normalization-gan
+     """
+     def __init__(self, module, name='weight', power_iterations=1):
+         super(SpectralNorm, self).__init__()
+         self.module = module
+         self.name = name
+         self.power_iterations = power_iterations
+         if not self._made_params():
+             self._make_params()
+
+     def _update_u_v(self):
+         u = getattr(self.module, self.name + "_u")
+         v = getattr(self.module, self.name + "_v")
+         w = getattr(self.module, self.name + "_bar")
+
+         height = w.data.shape[0]
+         for _ in range(self.power_iterations):
+             v.data = l2normalize(torch.mv(torch.t(w.view(height, -1).data), u.data))
+             u.data = l2normalize(torch.mv(w.view(height, -1).data, v.data))
+
+         # sigma = torch.dot(u.data, torch.mv(w.view(height,-1).data, v.data))
+         sigma = u.dot(w.view(height, -1).mv(v))
+         setattr(self.module, self.name, w / sigma.expand_as(w))
+
+     def _made_params(self):
+         try:
+             u = getattr(self.module, self.name + "_u")
+             v = getattr(self.module, self.name + "_v")
+             w = getattr(self.module, self.name + "_bar")
+             return True
+         except AttributeError:
+             return False
+
+     def _make_params(self):
+         w = getattr(self.module, self.name)
+
+         height = w.data.shape[0]
+         width = w.view(height, -1).data.shape[1]
+
+         u = nn.Parameter(w.data.new(height).normal_(0, 1), requires_grad=False)
+         v = nn.Parameter(w.data.new(width).normal_(0, 1), requires_grad=False)
+         u.data = l2normalize(u.data)
+         v.data = l2normalize(v.data)
+         w_bar = nn.Parameter(w.data)
+
+         del self.module._parameters[self.name]
+
+         self.module.register_parameter(self.name + "_u", u)
+         self.module.register_parameter(self.name + "_v", v)
+         self.module.register_parameter(self.name + "_bar", w_bar)
+
+     def forward(self, *args):
+         self._update_u_v()
+         return self.module.forward(*args)
+
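A minimal usage sketch of the SpectralNorm wrapper above (layer sizes are illustrative, and the SpectralNorm class defined above is assumed to be in scope):

    import torch
    import torch.nn as nn

    # Wrap a conv layer so its weight is divided by an estimate of its
    # largest singular value before every forward pass.
    conv = SpectralNorm(nn.Conv2d(3, 64, 4, stride=2, padding=1))
    x = torch.randn(1, 3, 32, 32)
    y = conv(x)        # _update_u_v() runs one power iteration, then the wrapped conv
    print(y.shape)     # torch.Size([1, 64, 16, 16])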
+ class MsImageDis(nn.Module):
+     # Multi-scale discriminator architecture
+     def __init__(self, input_dim, n_layer, gan_type, dim, norm, activ, num_scales, pad_type, output_channels=1, final_function=None):
+         super(MsImageDis, self).__init__()
+         self.n_layer = n_layer
+         self.gan_type = gan_type
+         self.output_channels = output_channels
+         self.dim = dim
+         self.norm = norm
+         self.activ = activ
+         self.num_scales = num_scales
+         self.pad_type = pad_type
+         self.input_dim = input_dim
+         self.downsample = nn.AvgPool2d(3, stride=2, padding=[1, 1], count_include_pad=False)
+         self.cnns = nn.ModuleList()
+         self.final_function = final_function
+         for _ in range(self.num_scales):
+             self.cnns.append(self._make_net())
+
+     def _make_net(self):
+         dim = self.dim
+         cnn_x = []
+         cnn_x += [Conv2dBlock(self.input_dim, dim, 4, 2, 1, norm='none', activation=self.activ, pad_type=self.pad_type)]
+         for i in range(self.n_layer - 1):
+             cnn_x += [Conv2dBlock(dim, dim * 2, 4, 2, 1, norm=self.norm, activation=self.activ, pad_type=self.pad_type)]
+             dim *= 2
+         cnn_x += [nn.Conv2d(dim, self.output_channels, 1, 1, 0)]
+         cnn_x = nn.Sequential(*cnn_x)
+         return cnn_x
+
+     def forward(self, x):
+         outputs = []
+         for model in self.cnns:
+             output = model(x)
+             if self.final_function is not None:
+                 output = self.final_function(output)
+             outputs.append(output)
+             x = self.downsample(x)
+         return outputs
+
+     def calc_dis_loss(self, input_fake, input_real):
+         # calculate the loss to train D
+         outs0 = self.forward(input_fake)
+         outs1 = self.forward(input_real)
+         loss = 0
+
+         for it, (out0, out1) in enumerate(zip(outs0, outs1)):
+             if self.gan_type == 'lsgan':
+                 loss += torch.mean((out0 - 0)**2) + torch.mean((out1 - 1)**2)
+             elif self.gan_type == 'nsgan':
+                 all0 = torch.zeros_like(out0)
+                 all1 = torch.ones_like(out1)
+                 loss += torch.mean(F.binary_cross_entropy(torch.sigmoid(out0), all0) +
+                                    F.binary_cross_entropy(torch.sigmoid(out1), all1))
+             else:
+                 assert 0, "Unsupported GAN type: {}".format(self.gan_type)
+         return loss
+
+     def calc_gen_loss(self, input_fake):
+         # calculate the loss to train G
+         outs0 = self.forward(input_fake)
+         loss = 0
+         for it, out0 in enumerate(outs0):
+             if self.gan_type == 'lsgan':
+                 loss += torch.mean((out0 - 1)**2)  # LSGAN
+             elif self.gan_type == 'nsgan':
+                 all1 = torch.ones_like(out0.data)
+                 loss += torch.mean(F.binary_cross_entropy(torch.sigmoid(out0), all1))
+             else:
+                 assert 0, "Unsupported GAN type: {}".format(self.gan_type)
+         return loss
+
+ class StyleEncoder(nn.Module):
+     def __init__(self, n_downsample, input_dim, dim, style_dim, norm, activ, pad_type):
+         super(StyleEncoder, self).__init__()
+         self.model = []
+         self.model += [Conv2dBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type)]
+         for i in range(2):
+             self.model += [Conv2dBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)]
+             dim *= 2
+         for i in range(n_downsample - 2):
+             self.model += [Conv2dBlock(dim, dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)]
+         self.model += [nn.AdaptiveAvgPool2d(1)]  # global average pooling
+         self.model += [nn.Conv2d(dim, style_dim, 1, 1, 0)]
+         self.model = nn.Sequential(*self.model)
+         self.output_dim = dim
+
+     def forward(self, x):
+         return self.model(x)
+
+ class ContentEncoder(nn.Module):
+     def __init__(self, n_downsample, n_res, input_dim, dim, norm, activ, pad_type):
+         super(ContentEncoder, self).__init__()
+         self.model = []
+         self.model += [Conv2dBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type)]
+         # downsampling blocks
+         for i in range(n_downsample):
+             self.model += [Conv2dBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)]
+             dim *= 2
+         # residual blocks
+         self.model += [ResBlocks(n_res, dim, norm=norm, activation=activ, pad_type=pad_type)]
+         self.model = nn.Sequential(*self.model)
+         self.output_dim = dim
+
+     def forward(self, x):
+         return self.model(x)
+
+ class AdaINBlock(nn.Module):
+     def __init__(self, n_upsample, n_res, dim, output_dim, res_norm='adain', activ='relu', pad_type='zero'):
+         super(AdaINBlock, self).__init__()
+
+         self.model = []
+         # AdaIN residual blocks
+         self.model += [ResBlocks(n_res, dim, res_norm, activ, pad_type=pad_type)]
+         self.model = nn.Sequential(*self.model)
+
+     def forward(self, x):
+         return self.model(x)
+
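As a rough sketch of how the multi-scale discriminator above computes its losses (argument values are illustrative; Conv2dBlock is defined earlier in this file):

    import torch

    disc = MsImageDis(input_dim=3, n_layer=4, gan_type='lsgan', dim=64, norm='none',
                      activ='lrelu', num_scales=3, pad_type='reflect')
    real = torch.randn(1, 3, 128, 128)
    fake = torch.randn(1, 3, 128, 128)
    d_loss = disc.calc_dis_loss(fake.detach(), real)   # push fake scores to 0, real scores to 1
    g_loss = disc.calc_gen_loss(fake)                  # push fake scores to 1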
networks/backbones/functions.py ADDED
@@ -0,0 +1,87 @@
+ """
+ functions.py
+ Helper functions to 1) build a learning-rate scheduler from an option object and 2) initialize network weights.
+ """
+
+ import torch
+ from torch.nn import init
+ from torch.optim import lr_scheduler
+
+ ###############################################################################
+ # Helper Functions
+ ###############################################################################
+
+ def get_scheduler(optimizer, opt):
+     """Return a learning rate scheduler
+
+     Parameters:
+         optimizer          -- the optimizer of the network
+         opt (option class) -- stores all the experiment flags.
+                               opt.lr_policy is the name of the learning rate policy: linear | step | plateau | cosine
+
+     For 'linear', we keep the same learning rate for the first <opt.static_iters> iterations
+     and linearly decay the rate to zero over the next <opt.decay_iters> iterations.
+     For the other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
+     See https://pytorch.org/docs/stable/optim.html for more details.
+     """
+     if opt.lr_policy == 'linear':
+         def lambda_rule(iteration):
+             # NOTE: relies on a `logger` object exposing get_global_step() being available in scope.
+             lr_l = 1.0 - max(0, logger.get_global_step() - opt.static_iters) / float(opt.decay_iters + 1)
+             return lr_l
+         scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
+     elif opt.lr_policy == 'step':
+         scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.decay_iters_step, gamma=0.1)
+     elif opt.lr_policy == 'plateau':
+         scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
+     elif opt.lr_policy == 'cosine':
+         scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
+     else:
+         raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
+     return scheduler
+
+
+ def init_weights(net, init_type='normal', init_gain=0.02):
+     """Initialize network weights.
+
+     Parameters:
+         net (network)     -- network to be initialized
+         init_type (str)   -- the name of an initialization method: normal | xavier | kaiming | orthogonal
+         init_gain (float) -- scaling factor for normal, xavier and orthogonal.
+
+     We use 'normal' in the original pix2pix and CycleGAN paper, but xavier and kaiming might
+     work better for some applications. Feel free to try them yourself.
+     """
+     def init_func(m):  # define the initialization function
+         classname = m.__class__.__name__
+         if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
+             if init_type == 'normal':
+                 init.normal_(m.weight.data, 0.0, init_gain)
+             elif init_type == 'xavier':
+                 init.xavier_normal_(m.weight.data, gain=init_gain)
+             elif init_type == 'kaiming':
+                 init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
+             elif init_type == 'orthogonal':
+                 init.orthogonal_(m.weight.data, gain=init_gain)
+             else:
+                 raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
+             if hasattr(m, 'bias') and m.bias is not None:
+                 init.constant_(m.bias.data, 0.0)
+         elif classname.find('BatchNorm2d') != -1:  # BatchNorm's weight is not a matrix; only normal distribution applies.
+             init.normal_(m.weight.data, 1.0, init_gain)
+             init.constant_(m.bias.data, 0.0)
+
+     net.apply(init_func)  # apply the initialization function <init_func>
+
+
+ def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=[]):
+     """Initialize a network's weights and return it.
+
+     Parameters:
+         net (network)      -- the network to be initialized
+         init_type (str)    -- the name of an initialization method: normal | xavier | kaiming | orthogonal
+         init_gain (float)  -- scaling factor for normal, xavier and orthogonal.
+         gpu_ids (int list) -- which GPUs the network runs on, e.g. [0, 1, 2]; kept for interface
+                               compatibility (device placement is handled by the trainer).
+
+     Return an initialized network.
+     """
+     init_weights(net, init_type, init_gain=init_gain)
+     return net
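A minimal sketch of how these helpers combine; the option object here is a plain munch with illustrative values, and the 'step' policy is used because 'linear' additionally expects a logger exposing get_global_step():

    import munch
    import torch

    opt = munch.Munch(lr_policy='step', decay_iters_step=100000)
    net = init_net(torch.nn.Conv2d(3, 3, 3), init_type='kaiming')
    optimizer = torch.optim.Adam(net.parameters(), lr=0.0001)
    scheduler = get_scheduler(optimizer, opt)
    scheduler.step()   # invoked by the training loop, not manually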
networks/base_model.py ADDED
@@ -0,0 +1,113 @@
+ """
+ base_model.py
+ Abstract definition of a model, with helper functions for image extraction and gradient propagation.
+ """
+
+ from collections import OrderedDict
+ from abc import abstractmethod
+
+ import pytorch_lightning as pl
+ from torch.optim import lr_scheduler
+
+ from torchvision.transforms import ToPILImage
+
+ class BaseModel(pl.LightningModule):
+
+     def __init__(self, opt):
+         super().__init__()
+         self.opt = opt
+         self.gpu_ids = opt.gpu_ids
+         self.loss_names = []
+         self.model_names = []
+         self.visual_names = []
+         self.image_paths = []
+         self.save_hyperparameters()
+         self.schedulers = []
+         self.metric = 0  # used for learning rate policy 'plateau'
+
+     @abstractmethod
+     def set_input(self, input):
+         pass
+
+     def eval(self):
+         for name in self.model_names:
+             if isinstance(name, str):
+                 net = getattr(self, 'net' + name)
+                 net.eval()
+
+     def compute_visuals(self):
+         pass
+
+     def get_image_paths(self):
+         return self.image_paths
+
+     def update_learning_rate(self):
+         for scheduler in self.schedulers:
+             if self.opt.lr_policy == 'plateau':
+                 scheduler.step(self.metric)
+             else:
+                 scheduler.step()
+
+         lr = self.optimizers[0].param_groups[0]['lr']
+         return lr
+
+     def get_current_visuals(self):
+         visual_ret = OrderedDict()
+         for name in self.visual_names:
+             if isinstance(name, str):
+                 visual_ret[name] = (getattr(self, name).detach() + 1) / 2
+         return visual_ret
+
+     def log_current_losses(self):
+         losses = '\n'
+         for name in self.loss_names:
+             if isinstance(name, str):
+                 loss_value = float(getattr(self, 'loss_' + name))
+                 self.logger.log_metrics({'loss_{}'.format(name): loss_value}, self.trainer.global_step)
+                 losses += 'loss_{}={:.4f}\t'.format(name, loss_value)
+         print(losses)
+
+     def log_current_visuals(self):
+         visuals = self.get_current_visuals()
+         for key, viz in visuals.items():
+             self.logger.experiment.add_image('img_{}'.format(key), viz[0].cpu(), self.trainer.global_step)
+
+     def get_scheduler(self, opt, optimizer):
+         if opt.lr_policy == 'linear':
+             def lambda_rule(iteration):
+                 lr_l = 1.0 - max(0, self.trainer.global_step - opt.static_iters) / float(opt.decay_iters + 1)
+                 return lr_l
+
+             scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
+         elif opt.lr_policy == 'step':
+             scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.decay_iters_step, gamma=0.5)
+         elif opt.lr_policy == 'plateau':
+             scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
+         elif opt.lr_policy == 'cosine':
+             scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
+         else:
+             raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
+         return scheduler
+
+     def print_networks(self):
+         for name in self.model_names:
+             if isinstance(name, str):
+                 net = getattr(self, 'net' + name)
+                 num_params = 0
+                 for param in net.parameters():
+                     num_params += param.numel()
+                 print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6))
+
+     def get_optimizer_dict(self):
+         return_dict = {}
+         for index, opt in enumerate(self.optimizers):
+             return_dict['Optimizer_{}'.format(index)] = opt
+         return return_dict
+
+     def set_requires_grad(self, nets, requires_grad=False):
+         if not isinstance(nets, list):
+             nets = [nets]
+         for net in nets:
+             if net is not None:
+                 for param in net.parameters():
+                     param.requires_grad = requires_grad
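For clarity, the freezing pattern behind set_requires_grad, as a standalone sketch:

    import torch.nn as nn

    def set_requires_grad(nets, requires_grad=False):
        # same logic as BaseModel.set_requires_grad above
        if not isinstance(nets, list):
            nets = [nets]
        for net in nets:
            if net is not None:
                for param in net.parameters():
                    param.requires_grad = requires_grad

    netD = nn.Conv2d(3, 1, 1)
    set_requires_grad(netD, False)   # freeze D while G is being optimized
    assert all(not p.requires_grad for p in netD.parameters())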
networks/comomunit_model.py ADDED
@@ -0,0 +1,396 @@
+ """
+ continuous_munit_cyclepoint_residual.py
+ This is the CoMo-MUNIT *logic*, i.e. how the network is trained.
+ """
+
+ import math
+ import torch
+ import itertools
+ from .base_model import BaseModel
+ from .backbones import comomunit as networks
+ import random
+ import munch
+
+
+ def ModelOptions():
+     mo = munch.Munch()
+     # Generator
+     mo.gen_dim = 64
+     mo.style_dim = 8
+     mo.gen_activ = 'relu'
+     mo.n_downsample = 2
+     mo.n_res = 4
+     mo.gen_pad_type = 'reflect'
+     mo.mlp_dim = 256
+
+     # Discriminator
+     mo.disc_dim = 64
+     mo.disc_norm = 'none'
+     mo.disc_activ = 'lrelu'
+     mo.disc_n_layer = 4
+     mo.num_scales = 3  # TODO change for other experiments!
+     mo.disc_pad_type = 'reflect'
+
+     # Initialization
+     mo.init_type_gen = 'kaiming'
+     mo.init_type_disc = 'normal'
+     mo.init_gain = 0.02
+
+     # Weights
+     mo.lambda_gan = 1
+     mo.lambda_rec_image = 10
+     mo.lambda_rec_style = 1
+     mo.lambda_rec_content = 1
+     mo.lambda_rec_cycle = 10
+     mo.lambda_vgg = 0.1
+     mo.lambda_idt = 1
+     mo.lambda_Phinet_A = 1
+     # Continuous settings
+     mo.resblocks_cont = 1
+     mo.lambda_physics = 10
+     mo.lambda_compare = 10
+     mo.lambda_physics_compare = 1
+
+     return mo
+
+
+ class CoMoMUNITModel(BaseModel):
+
+     def __init__(self, opt):
+         BaseModel.__init__(self, opt)
+         # specify the training losses you want to print out. The training/test scripts will call <BaseModel.get_current_losses>
+         self.loss_names = ['D_A', 'G_A', 'cycle_A', 'rec_A', 'rec_style_B', 'rec_content_A', 'vgg_A', 'phi_net_A',
+                            'D_B', 'G_B', 'cycle_B', 'rec_B', 'rec_style_A', 'rec_content_B', 'vgg_B', 'idt_B',
+                            'recon_physics', 'phi_net']
+         # specify the images you want to save/display. The training/test scripts will call <BaseModel.get_current_visuals>
+         visual_names_A = ['x', 'y', 'rec_A_img', 'rec_A_cycle', 'y_M_tilde', 'y_M']
+         visual_names_B = ['y_tilde', 'fake_A', 'rec_B_img', 'rec_B_cycle', 'idt_B_img']
+
+         self.visual_names = visual_names_A + visual_names_B  # combine visualizations for A and B
+         # specify the models you want to save to the disk. The training/test scripts will call <BaseModel.save_networks> and <BaseModel.load_networks>.
+         self.model_names = ['G_A', 'D_A', 'G_B', 'D_B', 'DRB', 'Phi_net', 'Phi_net_A']
+
+         self.netG_A = networks.define_G_munit(opt.input_nc, opt.output_nc, opt.gen_dim, opt.style_dim, opt.n_downsample,
+                                               opt.n_res, opt.gen_pad_type, opt.mlp_dim, opt.gen_activ, opt.init_type_gen,
+                                               opt.init_gain, self.gpu_ids)
+         self.netG_B = networks.define_G_munit(opt.output_nc, opt.input_nc, opt.gen_dim, opt.style_dim, opt.n_downsample,
+                                               opt.n_res, opt.gen_pad_type, opt.mlp_dim, opt.gen_activ, opt.init_type_gen,
+                                               opt.init_gain, self.gpu_ids)
+
+         self.netDRB = networks.define_DRB_munit(opt.resblocks_cont, opt.gen_dim * (2 ** opt.n_downsample), 'instance', opt.gen_activ,
+                                                 opt.gen_pad_type, opt.init_type_gen, opt.init_gain, self.gpu_ids)
+         # define discriminators
+         self.netD_A = networks.define_D_munit(opt.output_nc, opt.disc_dim, opt.disc_norm, opt.disc_activ, opt.disc_n_layer,
+                                               opt.gan_mode, opt.num_scales, opt.disc_pad_type, opt.init_type_disc,
+                                               opt.init_gain, self.gpu_ids)
+
+         self.netD_B = networks.define_D_munit(opt.input_nc, opt.disc_dim, opt.disc_norm, opt.disc_activ, opt.disc_n_layer,
+                                               opt.gan_mode, opt.num_scales, opt.disc_pad_type, opt.init_type_disc,
+                                               opt.init_gain, self.gpu_ids)
+
+         # We use the munit style encoder as phinet/phinet_A
+         self.netPhi_net = networks.init_net(networks.StyleEncoder(4, opt.input_nc * 2, opt.gen_dim, 2, norm='instance',
+                                                                   activ='lrelu', pad_type=opt.gen_pad_type), init_type=opt.init_type_gen,
+                                             init_gain=opt.init_gain, gpu_ids=opt.gpu_ids)
+
+         self.netPhi_net_A = networks.init_net(networks.StyleEncoder(4, opt.input_nc, opt.gen_dim, 1, norm='instance',
+                                                                     activ='lrelu', pad_type=opt.gen_pad_type), init_type=opt.init_type_gen,
+                                               init_gain=opt.init_gain, gpu_ids=opt.gpu_ids)
+
+         # define loss functions
+         self.reconCriterion = torch.nn.L1Loss()
+         self.criterionPhysics = torch.nn.L1Loss()
+         self.criterionIdt = torch.nn.L1Loss()
+
+         # Optimizers and schedulers are created in configure_optimizers (PyTorch Lightning).
+
+         if opt.lambda_vgg > 0:
+             self.instance_norm = torch.nn.InstanceNorm2d(512)
+             self.vgg = networks.Vgg16()
+             self.vgg.load_state_dict(torch.load('res/vgg_imagenet.pth'))
+             self.vgg.eval()
+             for param in self.vgg.parameters():
+                 param.requires_grad = False
+
+     def configure_optimizers(self):
+         opt_G = torch.optim.Adam(itertools.chain(self.netG_A.parameters(), self.netG_B.parameters(),
+                                                  self.netDRB.parameters(), self.netPhi_net.parameters(),
+                                                  self.netPhi_net_A.parameters()),
+                                  weight_decay=0.0001, lr=self.opt.lr, betas=(self.opt.beta1, 0.999))
+         opt_D = torch.optim.Adam(itertools.chain(self.netD_A.parameters(), self.netD_B.parameters()),
+                                  weight_decay=0.0001, lr=self.opt.lr, betas=(self.opt.beta1, 0.999))
+
+         scheduler_G = self.get_scheduler(self.opt, opt_G)
+         scheduler_D = self.get_scheduler(self.opt, opt_D)
+         return [opt_D, opt_G], [scheduler_D, scheduler_G]
+
+     def set_input(self, input):
+         # Input image. Everything is mixed, so we only have one style
+         self.x = input['A']
+         # Paths, just in case they are needed
+         self.image_paths = input['A_paths']
+         # Desired continuity value, which is used to render self.y_M_tilde
+         self.phi = input['phi'].float()
+         self.cos_phi = input['cos_phi'].float()
+         self.sin_phi = input['sin_phi'].float()
+         # Term used to train the SSN
+         self.phi_prime = input['phi_prime'].float()
+         self.cos_phi_prime = input['cos_phi_prime'].float()
+         self.sin_phi_prime = input['sin_phi_prime'].float()
+         # physical model applied to self.x with continuity self.phi
+         self.y_M_tilde = input['A_cont']
+         # physical model applied to self.x with continuity self.phi_prime
+         self.y_M_tilde_prime = input['A_cont_compare']
+         # Other image; in reality the two belong to the same domain
+         self.y_tilde = input['B']
+
+     def __vgg_preprocess(self, batch):
+         tensortype = type(batch)
+         (r, g, b) = torch.chunk(batch, 3, dim=1)
+         batch = torch.cat((b, g, r), dim=1)  # convert RGB to BGR
+         batch = (batch + 1) * 255 * 0.5  # [-1, 1] -> [0, 255]
+         mean = tensortype(batch.data.size()).to(self.device)
+
+         mean[:, 0, :, :] = 103.939
+         mean[:, 1, :, :] = 116.779
+         mean[:, 2, :, :] = 123.680
+         batch = batch.sub(mean)  # subtract mean
+         return batch
+
+     def __compute_vgg_loss(self, img, target):
+         img_vgg = self.__vgg_preprocess(img)
+         target_vgg = self.__vgg_preprocess(target)
+         img_fea = self.vgg(img_vgg)
+         target_fea = self.vgg(target_vgg)
+         return torch.mean((self.instance_norm(img_fea) - self.instance_norm(target_fea)) ** 2)
+
+     def forward(self, img, phi=None, style_B_fake=None):
+         """Run a generation forward pass (A -> B) for a given phi and style; also used at test time."""
+         # Random style sampling
+         if style_B_fake is None:
+             style_B_fake = torch.randn(img.size(0), self.opt.style_dim, 1, 1).to(self.device)
+         if phi is None:
+             phi = torch.zeros(1).fill_(random.random()).to(self.device) * math.pi * 2
+
+         self.cos_phi = torch.cos(phi)
+         self.sin_phi = torch.sin(phi)
+
+         # Encoding
+         self.content_A, self.style_A_real = self.netG_A.encode(img)
+
+         features_A = self.netG_B.assign_adain(self.content_A, style_B_fake)
+         features_A_real, features_A_physics = self.netDRB(features_A, self.cos_phi, self.sin_phi)
+         fake_B = self.netG_B.decode(features_A_real)
+         return fake_B
+
+     def training_step_D(self):
+         with torch.no_grad():
+             # Random style sampling
+             self.style_A_fake = torch.randn(self.x.size(0), self.opt.style_dim, 1, 1).to(self.device)
+             self.style_B_fake = torch.randn(self.y_tilde.size(0), self.opt.style_dim, 1, 1).to(self.device)
+
+             self.content_A, self.style_A_real = self.netG_A.encode(self.x)
+             features_A = self.netG_B.assign_adain(self.content_A, self.style_B_fake)
+             features_A_real, features_A_physics = self.netDRB(features_A, self.cos_phi, self.sin_phi)
+             self.y = self.netG_B.decode(features_A_real)
+
+             # Encoding
+             self.content_B, self.style_B_real = self.netG_B.encode(self.y_tilde)
+             features_B = self.netG_A.assign_adain(self.content_B, self.style_A_fake)
+             features_B_real, _ = self.netDRB(features_B,
+                                              torch.ones(self.cos_phi.size()).to(self.device),
+                                              torch.zeros(self.sin_phi.size()).to(self.device))
+             self.fake_A = self.netG_A.decode(features_B_real)
+
+         self.loss_D_A = self.netD_A.calc_dis_loss(self.y, self.y_tilde) * self.opt.lambda_gan
+         self.loss_D_B = self.netD_B.calc_dis_loss(self.fake_A, self.x) * self.opt.lambda_gan
+
+         loss_D = self.loss_D_A + self.loss_D_B
+         return loss_D
+
+     def phi_loss_fn(self):
+         # the distance between the generated image and the image at the output of the
+         # physical model should be zero
+         input_zerodistance = torch.cat((self.y, self.y_M_tilde), dim=1)
+
+         # Distance between the generated image and the other image of the physical model should be
+         # taken from the ground truth value
+         input_normaldistance = torch.cat((self.y, self.y_M_tilde_prime), dim=1)
+
+         # Same for this, but it does not depend on a GAN generation, so it is used as a regularization term:
+         # essentially, the ground truth distance given by the physical model renderings
+         input_regolarize = torch.cat((self.y_M_tilde, self.y_M_tilde_prime), dim=1)
+
+         # Cosine/sine distances: we are encoding a cyclic quantity
+         distance_cos = (torch.cos(self.phi) - torch.cos(self.phi_prime)) / 2
+         distance_sin = (torch.sin(self.phi) - torch.sin(self.phi_prime)) / 2
+
+         # We evaluate the angle distance and normalize it to [-1, 1]
+         output_zerodistance = torch.tanh(self.netPhi_net(input_zerodistance))
+         output_normaldistance = torch.tanh(self.netPhi_net(input_normaldistance))
+         output_regolarize = torch.tanh(self.netPhi_net(input_regolarize))
+
+         loss_cos = torch.pow(output_zerodistance[:, 0] - 0, 2).mean()
+         loss_cos += torch.pow(output_normaldistance[:, 0] - distance_cos, 2).mean()
+         loss_cos += torch.pow(output_regolarize[:, 0] - distance_cos, 2).mean()
+
+         loss_sin = torch.pow(output_zerodistance[:, 1] - 0, 2).mean()
+         loss_sin += torch.pow(output_normaldistance[:, 1] - distance_sin, 2).mean()
+         loss_sin += torch.pow(output_regolarize[:, 1] - distance_sin, 2).mean()
+
+         # Additional terms on the other image generated by the GAN, i.e. something that should resemble exactly
+         # the image generated by the physical model.
+         # These terms follow the same reasoning as above and are weighted differently.
+         input_physics_zerodistance = torch.cat((self.y_M, self.y_M_tilde), dim=1)
+         input_physics_regolarize = torch.cat((self.y_M, self.y_M_tilde_prime), dim=1)
+         output_physics_zerodistance = torch.tanh(self.netPhi_net(input_physics_zerodistance))
+         output_physics_regolarize = torch.tanh(self.netPhi_net(input_physics_regolarize))
+
+         loss_cos += torch.pow(output_physics_zerodistance[:, 0] - 0, 2).mean() * self.opt.lambda_physics_compare
+         loss_cos += torch.pow(output_physics_regolarize[:, 0] - distance_cos,
+                               2).mean() * self.opt.lambda_physics_compare
+         loss_sin += torch.pow(output_physics_zerodistance[:, 1] - 0, 2).mean() * self.opt.lambda_physics_compare
+         loss_sin += torch.pow(output_physics_regolarize[:, 1] - distance_sin,
+                               2).mean() * self.opt.lambda_physics_compare
+
+         # The distance between the two outputs of the GAN should also be 0
+         input_twoheads = torch.cat((self.y_M, self.y), dim=1)
+         output_twoheads = torch.tanh(self.netPhi_net(input_twoheads))
+
+         loss_cos += torch.pow(output_twoheads[:, 0] - 0, 2).mean()
+         loss_sin += torch.pow(output_twoheads[:, 1] - 0, 2).mean()
+
+         loss = loss_cos + loss_sin * 0.5
+
+         return loss
+
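A quick numeric sanity check of the cyclic distance targets used above (values are illustrative):

    import math
    import torch

    phi = torch.tensor([0.0])              # one end of the cycle
    phi_prime = torch.tensor([math.pi])    # the opposite point on the cycle
    distance_cos = (torch.cos(phi) - torch.cos(phi_prime)) / 2   # 1.0: maximally far apart
    distance_sin = (torch.sin(phi) - torch.sin(phi_prime)) / 2   # 0.0
    # phi and phi + 2*pi yield (0, 0): the encoding wraps around, as intended.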
+     def training_step_G(self):
+         self.style_B_fake = torch.randn(self.y_tilde.size(0), self.opt.style_dim, 1, 1).to(self.device)
+         self.style_A_fake = torch.randn(self.x.size(0), self.opt.style_dim, 1, 1).to(self.device)
+
+         self.content_A, self.style_A_real = self.netG_A.encode(self.x)
+         self.content_B, self.style_B_real = self.netG_B.encode(self.y_tilde)
+         self.phi_est = torch.sigmoid(self.netPhi_net_A.forward(self.y_tilde).view(self.y_tilde.size(0), -1)).view(self.y_tilde.size(0)) * 2 * math.pi
+         self.estimated_cos_B = torch.cos(self.phi_est)
+         self.estimated_sin_B = torch.sin(self.phi_est)
+
+         # Reconstruction
+         features_A_reconstruction = self.netG_A.assign_adain(self.content_A, self.style_A_real)
+         features_A_reconstruction, _ = self.netDRB(features_A_reconstruction,
+                                                    torch.ones(self.estimated_cos_B.size()).to(self.device),
+                                                    torch.zeros(self.estimated_sin_B.size()).to(self.device))
+         self.rec_A_img = self.netG_A.decode(features_A_reconstruction)
+
+         features_B_reconstruction = self.netG_B.assign_adain(self.content_B, self.style_B_real)
+         features_B_reconstruction, _ = self.netDRB(features_B_reconstruction, self.estimated_cos_B, self.estimated_sin_B)
+         self.rec_B_img = self.netG_B.decode(features_B_reconstruction)
+
+         # Cross domain
+         features_A = self.netG_B.assign_adain(self.content_A, self.style_B_fake)
+         features_A_real, features_A_physics = self.netDRB(features_A, self.cos_phi, self.sin_phi)
+         self.y_M = self.netG_B.decode(features_A_physics)
+         self.y = self.netG_B.decode(features_A_real)
+
+         features_B = self.netG_A.assign_adain(self.content_B, self.style_A_fake)
+         features_B_real, _ = self.netDRB(features_B,
+                                          torch.ones(self.cos_phi.size()).to(self.device),
+                                          torch.zeros(self.sin_phi.size()).to(self.device))
+         self.fake_A = self.netG_A.decode(features_B_real)
+
+         self.rec_content_B, self.rec_style_A = self.netG_A.encode(self.fake_A)
+         self.rec_content_A, self.rec_style_B = self.netG_B.encode(self.y)
+
+         if self.opt.lambda_rec_cycle > 0:
+             features_A_reconstruction_cycle = self.netG_A.assign_adain(self.rec_content_A, self.style_A_real)
+             features_A_reconstruction_cycle, _ = self.netDRB(features_A_reconstruction_cycle,
+                                                              torch.ones(self.cos_phi.size()).to(self.device),
+                                                              torch.zeros(self.sin_phi.size()).to(self.device))
+             self.rec_A_cycle = self.netG_A.decode(features_A_reconstruction_cycle)
+
+             features_B_reconstruction_cycle = self.netG_B.assign_adain(self.rec_content_B, self.style_B_real)
+             features_B_reconstruction_cycle, _ = self.netDRB(features_B_reconstruction_cycle, self.estimated_cos_B, self.estimated_sin_B)
+             self.rec_B_cycle = self.netG_B.decode(features_B_reconstruction_cycle)
+         if self.opt.lambda_idt > 0:
+             features_B_identity = self.netG_B.assign_adain(self.content_A, torch.randn(self.style_B_fake.size()).to(self.device))
+             features_B_identity, _ = self.netDRB(features_B_identity,
+                                                  torch.ones(self.estimated_cos_B.size()).to(self.device),
+                                                  torch.zeros(self.estimated_sin_B.size()).to(self.device))
+             self.idt_B_img = self.netG_B.decode(features_B_identity)
+
+         if self.opt.lambda_idt > 0:
+             self.loss_idt_A = 0
+             self.loss_idt_B = self.criterionIdt(self.idt_B_img, self.x) * self.opt.lambda_gan * self.opt.lambda_idt
+         else:
+             self.loss_idt_A = 0
+             self.loss_idt_B = 0
+
+         continuity_angle_fake = torch.sigmoid(self.netPhi_net_A.forward(self.y).view(self.y_tilde.size(0), -1)).view(self.y_tilde.size(0)) * 2 * math.pi
+
+         continuity_cos_fake = 1 - ((torch.cos(continuity_angle_fake) + 1) / 2)
+         continuity_cos_gt = 1 - ((torch.cos(self.phi) + 1) / 2)
+         continuity_sin_fake = 1 - ((torch.sin(continuity_angle_fake) + 1) / 2)
+         continuity_sin_gt = 1 - ((torch.sin(self.phi) + 1) / 2)
+         distance_cos_fake = (continuity_cos_fake - continuity_cos_gt)
+         distance_sin_fake = (continuity_sin_fake - continuity_sin_gt)
+
+         # reduced over the batch so the losses are scalars
+         self.loss_phi_net_A = (distance_cos_fake ** 2).mean() * self.opt.lambda_Phinet_A
+         self.loss_phi_net_A += (distance_sin_fake ** 2).mean() * self.opt.lambda_Phinet_A
+
+         self.loss_rec_A = self.reconCriterion(self.rec_A_img, self.x) * self.opt.lambda_rec_image
+         self.loss_rec_B = self.reconCriterion(self.rec_B_img, self.y_tilde) * self.opt.lambda_rec_image
+
+         self.loss_rec_style_B = self.reconCriterion(self.rec_style_B, self.style_B_fake) * self.opt.lambda_rec_style
+         self.loss_rec_style_A = self.reconCriterion(self.rec_style_A, self.style_A_fake) * self.opt.lambda_rec_style
+
+         self.loss_rec_content_A = self.reconCriterion(self.rec_content_A, self.content_A) * self.opt.lambda_rec_content
+         self.loss_rec_content_B = self.reconCriterion(self.rec_content_B, self.content_B) * self.opt.lambda_rec_content
+
+         if self.opt.lambda_rec_cycle > 0:
+             self.loss_cycle_A = self.reconCriterion(self.rec_A_cycle, self.x) * self.opt.lambda_rec_cycle
+             self.loss_cycle_B = self.reconCriterion(self.rec_B_cycle, self.y_tilde) * self.opt.lambda_rec_cycle
+         else:
+             self.loss_cycle_A = 0
+             self.loss_cycle_B = 0  # keep both defined; they are summed into loss_G below
+
+         self.loss_G_A = self.netD_A.calc_gen_loss(self.y) * self.opt.lambda_gan
+         self.loss_G_B = self.netD_B.calc_gen_loss(self.fake_A) * self.opt.lambda_gan
+
+         self.loss_recon_physics = self.opt.lambda_physics * self.criterionPhysics(self.y_M, self.y_M_tilde)
+         self.loss_phi_net = self.phi_loss_fn() * self.opt.lambda_compare
+
+         if self.opt.lambda_vgg > 0:
+             self.loss_vgg_A = self.__compute_vgg_loss(self.fake_A, self.y_tilde) * self.opt.lambda_vgg
+             self.loss_vgg_B = self.__compute_vgg_loss(self.y, self.x) * self.opt.lambda_vgg
+         else:
+             self.loss_vgg_A = 0
+             self.loss_vgg_B = 0
+
+         self.loss_G = self.loss_rec_A + self.loss_rec_style_B + self.loss_rec_content_A + \
+                       self.loss_cycle_A + self.loss_G_B + self.loss_vgg_A + \
+                       self.loss_rec_B + self.loss_rec_style_A + self.loss_rec_content_B + \
+                       self.loss_cycle_B + self.loss_G_A + self.loss_vgg_B + \
+                       self.loss_recon_physics + self.loss_phi_net + self.loss_idt_B + self.loss_phi_net_A
+
+         return self.loss_G
+
+     def training_step(self, batch, batch_idx, optimizer_idx):
+
+         self.set_input(batch)
+         if optimizer_idx == 0:
+             self.set_requires_grad([self.netD_A, self.netD_B], True)
+             self.set_requires_grad([self.netG_A, self.netG_B], False)
+
+             return self.training_step_D()
+         elif optimizer_idx == 1:
+             self.set_requires_grad([self.netD_A, self.netD_B], False)  # Ds require no gradients when optimizing Gs
+             self.set_requires_grad([self.netG_A, self.netG_B], True)
+
+             return self.training_step_G()
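A small sketch of how the option munch above is typically adjusted before building the model (the overrides shown are illustrative):

    from networks.comomunit_model import ModelOptions

    opt = ModelOptions()
    opt.num_scales = 2    # e.g. fewer discriminator scales for a quick run
    opt.lambda_vgg = 0    # disables the VGG term (and skips loading res/vgg_imagenet.pth)
    # The remaining fields (input_nc, lr, beta1, gan_mode, ...) come from the other
    # option munches merged in options.get_options.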
options/__init__.py ADDED
@@ -0,0 +1,39 @@
+ """This package collects the option modules: training options, logging options, and the basic options shared by every run."""
+
+ from .train_options import TrainOptions
+ from .log_options import LogOptions
+ from networks import get_model_options
+ from data import get_dataset_options
+ import munch
+
+
+ def get_options(cmdline_opt):
+
+     bo = munch.Munch()
+     # Number of channels of the input / output images
+     bo.input_nc = 3
+     bo.output_nc = 3
+     bo.gpu_ids = cmdline_opt.gpus
+     # Dataset options
+     bo.dataroot = cmdline_opt.path_data
+     bo.dataset_mode = cmdline_opt.data_importer
+     bo.model = cmdline_opt.model
+     # Scheduling policies
+     bo.lr = cmdline_opt.learning_rate
+     bo.lr_policy = cmdline_opt.scheduler_policy
+     bo.decay_iters_step = cmdline_opt.decay_iters_step
+     bo.decay_step_gamma = cmdline_opt.decay_step_gamma
+
+     opts = []
+     opts.append(get_model_options(bo.model)())
+     opts.append(get_dataset_options(bo.dataset_mode)())
+     opts.append(LogOptions())
+     opts.append(TrainOptions())
+
+     # Drop option factories that returned None
+     opts = [x for x in opts if x]
+     for x in opts:
+         bo.update(x)
+     return bo
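The merge relies on munch's dict semantics; a toy sketch of the same pattern with illustrative keys:

    import munch

    base = munch.Munch(lr=0.0001)
    base.update(munch.Munch(beta1=0.5, gan_mode='lsgan'))
    print(base.lr, base.gan_mode)   # attribute access still works: 0.0001 lsgan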
options/log_options.py ADDED
@@ -0,0 +1,10 @@
+ import munch
+
+ def LogOptions():
+     lo = munch.Munch()
+     # Save images every x iters
+     lo.display_freq = 10000
+
+     # Print info every x iters
+     lo.print_freq = 10
+     return lo
options/train_options.py ADDED
@@ -0,0 +1,21 @@
+ import munch
+
+
+ def TrainOptions():
+     to = munch.Munch()
+     # Iterations
+     to.total_iterations = 30000000
+
+     # Save a checkpoint every x iters
+     to.save_latest_freq = 35000
+
+     # Save a checkpoint every x epochs
+     to.save_epoch_freq = 5
+
+     # Adam settings
+     to.beta1 = 0.5
+
+     # GAN type
+     to.gan_mode = 'lsgan'
+
+     return to
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ torch==1.7.1
+ torchvision==0.8.2
+ torchaudio==0.7.2
+ pytorch-lightning==1.1.8
+ torchtext==0.7.0
requirements.yml ADDED
@@ -0,0 +1,172 @@
+ name: comogan
+ channels:
+   - pytorch
+   - defaults
+ dependencies:
+   - _libgcc_mutex=0.1=main
+   - blas=1.0=mkl
+   - ca-certificates=2021.1.19=h06a4308_1
+   - certifi=2020.12.5=py37h06a4308_0
+   - cudatoolkit=11.0.221=h6bb024c_0
+   - cycler=0.10.0=py37_0
+   - dbus=1.13.18=hb2f20db_0
+   - expat=2.2.10=he6710b0_2
+   - fontconfig=2.13.1=h6c09931_0
+   - freetype=2.10.4=h5ab3b9f_0
+   - glib=2.63.1=h5a9c865_0
+   - gst-plugins-base=1.14.0=hbbd80ab_1
+   - gstreamer=1.14.0=hb453b48_1
+   - icu=58.2=he6710b0_3
+   - intel-openmp=2020.2=254
+   - jpeg=9b=h024ee3a_2
+   - kiwisolver=1.3.1=py37h2531618_0
+   - lcms2=2.11=h396b838_0
+   - libedit=3.1.20191231=h14c3975_1
+   - libffi=3.2.1=hf484d3e_1007
+   - libgcc-ng=9.1.0=hdf63c60_0
+   - libpng=1.6.37=hbc83047_0
+   - libstdcxx-ng=9.1.0=hdf63c60_0
+   - libtiff=4.1.0=h2733197_1
+   - libuuid=1.0.3=h1bed415_2
+   - libuv=1.40.0=h7b6447c_0
+   - libxcb=1.14=h7b6447c_0
+   - libxml2=2.9.10=hb55368b_3
+   - lz4-c=1.9.3=h2531618_0
+   - matplotlib=3.3.4=py37h06a4308_0
+   - matplotlib-base=3.3.4=py37h62a2d02_0
+   - mkl=2020.2=256
+   - mkl-service=2.3.0=py37he8ac12f_0
+   - mkl_fft=1.3.0=py37h54f3939_0
+   - mkl_random=1.1.1=py37h0573a6f_0
+   - ncurses=6.2=he6710b0_1
+   - ninja=1.10.2=py37hff7bd54_0
+   - olefile=0.46=py37_0
+   - openssl=1.1.1j=h27cfd23_0
+   - pcre=8.44=he6710b0_0
+   - pillow=8.1.1=py37he98fc37_0
+   - pip=21.0.1=py37h06a4308_0
+   - pyparsing=2.4.7=pyhd3eb1b0_0
+   - pyqt=5.9.2=py37h05f1152_2
+   - python=3.7.5=h0371630_0
+   - python-dateutil=2.8.1=pyhd3eb1b0_0
+   - pytorch=1.7.1=py3.7_cuda11.0.221_cudnn8.0.5_0
+   - qt=5.9.7=h5867ecd_1
+   - readline=7.0=h7b6447c_5
+   - setuptools=52.0.0=py37h06a4308_0
+   - sip=4.19.8=py37hf484d3e_0
+   - six=1.15.0=py37h06a4308_0
+   - sqlite=3.33.0=h62c20be_0
+   - tk=8.6.10=hbc83047_0
+   - torchaudio=0.7.2=py37
+   - torchvision=0.8.2=py37_cu110
+   - tornado=6.1=py37h27cfd23_0
+   - typing_extensions=3.7.4.3=pyha847dfd_0
+   - wheel=0.36.2=pyhd3eb1b0_0
+   - xz=5.2.5=h7b6447c_0
+   - zlib=1.2.11=h7b6447c_3
+   - zstd=1.4.5=h9ceee32_0
+   - pip:
+     - absl-py==0.12.0
+     - aiohttp==3.7.4.post0
+     - argon2-cffi==20.1.0
+     - astunparse==1.6.3
+     - async-generator==1.10
+     - async-timeout==3.0.1
+     - attrs==20.3.0
+     - backcall==0.2.0
+     - bleach==3.3.0
+     - cachetools==4.2.1
+     - cffi==1.14.5
+     - chardet==4.0.0
+     - click==7.1.2
+     - coverage==5.5
+     - decorator==4.4.2
+     - defusedxml==0.7.1
+     - dominate==2.6.0
+     - entrypoints==0.3
+     - fsspec==0.8.7
+     - future==0.18.2
+     - gast==0.3.3
+     - google-auth==1.28.0
+     - google-auth-oauthlib==0.4.3
+     - google-pasta==0.2.0
+     - grpcio==1.36.1
+     - h5py==2.10.0
+     - human-id==0.1.0.post3
+     - idna==2.10
+     - importlib-metadata==3.7.3
+     - ipykernel==5.5.0
+     - ipython==7.21.0
+     - ipython-genutils==0.2.0
+     - ipywidgets==7.6.3
+     - jedi==0.18.0
+     - jinja2==2.11.3
+     - jsonpatch==1.32
+     - jsonpointer==2.1
+     - jsonschema==3.2.0
+     - jupyter==1.0.0
+     - jupyter-client==6.1.12
+     - jupyter-console==6.4.0
+     - jupyter-core==4.7.1
+     - jupyterlab-pygments==0.1.2
+     - jupyterlab-widgets==1.0.0
+     - keras-preprocessing==1.1.2
+     - markdown==3.3.4
+     - markupsafe==1.1.1
+     - mistune==0.8.4
+     - multidict==5.1.0
+     - munch==2.5.0
+     - nbclient==0.5.3
+     - nbconvert==6.0.7
+     - nbformat==5.1.2
+     - nest-asyncio==1.5.1
+     - notebook==6.3.0
+     - numpy==1.18.5
+     - oauthlib==3.1.0
+     - opt-einsum==3.3.0
+     - packaging==20.9
+     - pandocfilters==1.4.3
+     - parso==0.8.1
+     - pexpect==4.8.0
+     - pickleshare==0.7.5
+     - prometheus-client==0.9.0
+     - prompt-toolkit==3.0.18
+     - protobuf==3.15.6
+     - ptyprocess==0.7.0
+     - pyasn1==0.4.8
+     - pyasn1-modules==0.2.8
+     - pycparser==2.20
+     - pygments==2.8.1
+     - pyrsistent==0.17.3
+     - pytorch-lightning==1.1.8
+     - pyyaml==5.3.1
+     - pyzmq==22.0.3
+     - qtconsole==5.0.3
+     - qtpy==1.9.0
+     - requests==2.25.1
+     - requests-oauthlib==1.3.0
+     - rsa==4.7.2
+     - scipy==1.4.1
+     - send2trash==1.5.0
+     - tensorboard==2.3.0
+     - tensorboard-plugin-wit==1.8.0
+     - tensorflow==2.3.0
+     - tensorflow-estimator==2.3.0
+     - tensorflow-gpu==2.3.0
+     - termcolor==1.1.0
+     - terminado==0.9.3
+     - testpath==0.4.4
+     - torchfile==0.1.0
+     - tqdm==4.59.0
+     - traitlets==5.0.5
+     - urllib3==1.26.4
+     - visdom==0.1.8.9
+     - waymo-open-dataset-tf-2-3-0==1.3.0
+     - wcwidth==0.2.5
+     - webencodings==0.5.1
+     - websocket-client==0.58.0
+     - werkzeug==1.0.1
+     - widgetsnbextension==3.5.1
+     - wrapt==1.12.1
+     - yarl==1.6.3
+     - zipp==3.4.1
res/vgg_imagenet.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:364cfae76a51a908502d7b285bf047ce54da423e75fcdc8505312b00c5105c9e
+ size 58862394
scripts/dump_waymo.py ADDED
@@ -0,0 +1,63 @@
+ import os
+ import sys
+ import math
+ import itertools
+ import numpy as np
+ import tensorflow as tf
+
+ from PIL import Image
+ from argparse import ArgumentParser as AP
+ from waymo_open_dataset.utils import range_image_utils
+ from waymo_open_dataset.utils import transform_utils
+ from waymo_open_dataset.utils import frame_utils
+ from waymo_open_dataset import dataset_pb2 as open_dataset
+
+ def printProgressBar(i, total, postText):
+     n_bar = 20  # size of the progress bar
+     j = i / total
+     sys.stdout.write('\r')
+     sys.stdout.write(f"[{'=' * int(n_bar * j):{n_bar}s}] {int(100 * j)}% {postText}")
+     sys.stdout.flush()
+
+
+ def main(cmdline_opt):
+     DS_PATH = cmdline_opt.load_path
+     files = os.listdir(DS_PATH)
+     files = [os.path.join(DS_PATH, x) for x in files]
+
+     with open('sunny_sequences.txt') as f:
+         sunny_sequences = f.read().splitlines()
+
+     for index_file, file in enumerate(files):
+         # Some sequences are wrongly annotated as sunny; we annotated a subset of really sunny images.
+         if os.path.basename(file).split('_with_camera_labels.tfrecord')[0] not in sunny_sequences:
+             continue
+         dataset = tf.data.TFRecordDataset(file, compression_type='')
+         printProgressBar(index_file, len(files), "Files done")
+
+         for index_data, data in enumerate(dataset):
+             frame = open_dataset.Frame()
+             frame.ParseFromString(bytearray(data.numpy()))
+
+             if frame.context.stats.weather == 'sunny':
+                 (range_images, camera_projections, range_image_top_pose) = frame_utils.parse_range_image_and_camera_projection(frame)
+
+                 for label in frame.camera_labels:
+                     if label.name == open_dataset.CameraName.FRONT:
+                         path = os.path.join(cmdline_opt.save_path,
+                                             frame.context.stats.weather,
+                                             frame.context.stats.time_of_day,
+                                             '{}-{:06}.png'.format(os.path.basename(file), index_data))
+
+                         im = tf.image.decode_png(frame.images[0].image)
+                         pil_im = Image.fromarray(im.numpy())
+                         res_img = pil_im.resize((480, 320), Image.BILINEAR)
+                         os.makedirs(os.path.dirname(path), exist_ok=True)
+                         res_img.save(path)
+             else:
+                 break
+
+ if __name__ == '__main__':
+     ap = AP()
+     ap.add_argument('--load_path', default='/datasets_master/waymo_open_dataset_v_1_2_0/validation', type=str, help='Set a path to load the Waymo dataset')
+     ap.add_argument('--save_path', default='/datasets_local/datasets_fpizzati/waymo_480x320/val', type=str, help='Set a path to save the dataset')
+     main(ap.parse_args())
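A quick standalone check of the progress-bar helper above (illustrative, assuming printProgressBar is in scope):

    import sys

    for i in range(1, 6):
        printProgressBar(i, 5, "Files done")
    sys.stdout.write('\n')   # final bar: [====================] 100% Files done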
scripts/sunny_sequences.txt ADDED
@@ -0,0 +1,850 @@
+ segment-11486225968269855324_92_000_112_000
+ segment-11566385337103696871_5740_000_5760_000
+ segment-7000927478052605119_1052_330_1072_330
+ segment-2975249314261309142_6540_000_6560_000
+ segment-8031709558315183746_491_220_511_220
+ segment-10723911392655396041_860_000_880_000
+ segment-1022527355599519580_4866_960_4886_960
+ segment-15644354861949427452_3645_350_3665_350
+ segment-16801666784196221098_2480_000_2500_000
+ segment-11967272535264406807_580_000_600_000
+ segment-4266984864799709257_720_000_740_000
+ segment-15445436653637630344_3957_561_3977_561
+ segment-13182548552824592684_4160_250_4180_250
+ segment-10498013744573185290_1240_000_1260_000
+ segment-12551320916264703416_1420_000_1440_000
+ segment-2036908808378190283_4340_000_4360_000
+ segment-10750135302241325253_180_000_200_000
+ segment-15696964848687303249_4615_200_4635_200
+ segment-4167304237516228486_5720_000_5740_000
+ segment-14810689888487451189_720_000_740_000
+ segment-8158128948493708501_7477_230_7497_230
+ segment-1382515516588059826_780_000_800_000
+ segment-7768517933263896280_1120_000_1140_000
+ segment-16345319168590318167_1420_000_1440_000
+ segment-8663006751916427679_1520_000_1540_000
+ segment-15795616688853411272_1245_000_1265_000
+ segment-454855130179746819_4580_000_4600_000
+ segment-12174529769287588121_3848_440_3868_440
+ segment-5446766520699850364_157_000_177_000
+ segment-11183906854663518829_2294_000_2314_000
+ segment-13238419657658219864_4630_850_4650_850
+ segment-3154510051521049916_7000_000_7020_000
+ segment-7727809428114700355_2960_000_2980_000
+ segment-9016865488168499365_4780_000_4800_000
+ segment-10588771936253546636_2300_000_2320_000
+ segment-16977844994272847523_2140_000_2160_000
+ segment-4447423683538547117_536_022_556_022
+ segment-14777753086917826209_4147_000_4167_000
+ segment-15550613280008674010_1780_000_1800_000
+ segment-11070802577416161387_740_000_760_000
+ segment-16534202648288984983_900_000_920_000
+ segment-15448466074775525292_2920_000_2940_000
+ segment-17647858901077503501_1500_000_1520_000
+ segment-15717839202171538526_1124_920_1144_920
+ segment-16093022852977039323_2981_100_3001_100
+ segment-12681651284932598380_3585_280_3605_280
+ segment-15868625208244306149_4340_000_4360_000
+ segment-10625026498155904401_200_000_220_000
+ segment-7189996641300362130_3360_000_3380_000
+ segment-11623618970700582562_2840_367_2860_367
+ segment-2684088316387726629_180_000_200_000
+ segment-14964131310266936779_3292_850_3312_850
+ segment-5349843997395815699_1040_000_1060_000
+ segment-3988957004231180266_5566_500_5586_500
+ segment-16034875274658204340_240_000_260_000
+ segment-6280779486809627179_760_000_780_000
+ segment-10094743350625019937_3420_000_3440_000
+ segment-5214491533551928383_1918_780_1938_780
+ segment-2570264768774616538_860_000_880_000
+ segment-5459113827443493510_380_000_400_000
+ segment-14424804287031718399_1281_030_1301_030
+ segment-5222336716599194110_8940_000_8960_000
+ segment-4414235478445376689_2020_000_2040_000
+ segment-2206505463279484253_476_189_496_189
+ segment-5458962501360340931_3140_000_3160_000
+ segment-574762194520856849_1660_000_1680_000
+ segment-9465500459680839281_1100_000_1120_000
+ segment-9907794657177651763_1126_570_1146_570
+ segment-15803855782190483017_1060_000_1080_000
+ segment-17750787536486427868_560_000_580_000
+ segment-14763701469114129880_2260_000_2280_000
+ segment-14369250836076988112_7249_040_7269_040
+ segment-80599353855279550_2604_480_2624_480
+ segment-4487677815262010875_4940_000_4960_000
+ segment-7999729608823422351_1483_600_1503_600
+ segment-16608525782988721413_100_000_120_000
+ segment-15458436361042752328_3549_030_3569_030
+ segment-2711351338963414257_1360_000_1380_000
+ segment-3132641021038352938_1937_160_1957_160
+ segment-15903544160717261009_3961_870_3981_870
+ segment-14348136031422182645_3360_000_3380_000
+ segment-17958696356648515477_1660_000_1680_000
+ segment-4114454788208078028_660_000_680_000
+ segment-2598465433001774398_740_670_760_670
+ segment-10676267326664322837_311_180_331_180
+ segment-8811210064692949185_3066_770_3086_770
+ segment-12365808668068790137_2920_000_2940_000
+ segment-2508530288521370100_3385_660_3405_660
+ segment-4747171543583769736_425_544_445_544
+ segment-5835049423600303130_180_000_200_000
+ segment-2259324582958830057_3767_030_3787_030
+ segment-16191439239940794174_2245_000_2265_000
+ segment-13363977648531075793_343_000_363_000
+ segment-4672649953433758614_2700_000_2720_000
+ segment-3060057659029579482_420_000_440_000
+ segment-1172406780360799916_1660_000_1680_000
+ segment-6456165750159303330_1770_080_1790_080
+ segment-12257951615341726923_2196_690_2216_690
+ segment-10275144660749673822_5755_561_5775_561
+ segment-3437741670889149170_1411_550_1431_550
+ segment-17159836069183024120_640_000_660_000
+ segment-15834329472172048691_2956_760_2976_760
+ segment-1051897962568538022_238_170_258_170
+ segment-5602237689147924753_760_000_780_000
+ segment-11199484219241918646_2810_030_2830_030
+ segment-4781039348168995891_280_000_300_000
+ segment-16042842363202855955_265_000_285_000
+ segment-7447927974619745860_820_000_840_000
+ segment-7019385869759035132_4270_850_4290_850
+ segment-13085453465864374565_2040_000_2060_000
+ segment-16042886962142359737_1060_000_1080_000
+ segment-11318901554551149504_520_000_540_000
+ segment-915935412356143375_1740_030_1760_030
+ segment-9747453753779078631_940_000_960_000
+ segment-14824622621331930560_2395_420_2415_420
+ segment-18096167044602516316_2360_000_2380_000
+ segment-2547899409721197155_1380_000_1400_000
+ segment-12581809607914381746_1219_547_1239_547
+ segment-11379226583756500423_6230_810_6250_810
+ segment-5100136784230856773_2517_300_2537_300
+ segment-13402473631986525162_5700_000_5720_000
+ segment-5127440443725457056_2921_340_2941_340
+ segment-14561791273891593514_2558_030_2578_030
+ segment-2618605158242502527_1860_000_1880_000
+ segment-1357883579772440606_2365_000_2385_000
+ segment-9015546800913584551_4431_180_4451_180
+ segment-17885096890374683162_755_580_775_580
+ segment-14388269713149187289_1994_280_2014_280
+ segment-1994338527906508494_3438_100_3458_100
+ segment-8323028393459455521_2105_000_2125_000
+ segment-10327752107000040525_1120_000_1140_000
+ segment-13258835835415292197_965_000_985_000
+ segment-16102220208346880_1420_000_1440_000
+ segment-11454085070345530663_1905_000_1925_000
+ segment-15270638100874320175_2720_000_2740_000
+ segment-17388121177218499911_2520_000_2540_000
+ segment-7761658966964621355_1000_000_1020_000
+ segment-514687114615102902_6240_000_6260_000
+ segment-17850487901509155700_9065_000_9085_000
+ segment-16202688197024602345_3818_820_3838_820
+ segment-18331713844982117868_2920_900_2940_900
+ segment-6417523992887712896_1180_000_1200_000
+ segment-10770759614217273359_1465_000_1485_000
+ segment-12858738411692807959_2865_000_2885_000
+ segment-3068522656378006650_540_000_560_000
+ segment-8207498713503609786_3005_450_3025_450
+ segment-10072231702153043603_5725_000_5745_000
+ segment-7934693355186591404_73_000_93_000
+ segment-57132587708734824_1020_000_1040_000
+ segment-15365821471737026848_1160_000_1180_000
+ segment-10964956617027590844_1584_680_1604_680
+ segment-1972128316147758939_2500_000_2520_000
+ segment-8659567063494726263_2480_000_2500_000
+ segment-9529958888589376527_640_000_660_000
+ segment-14818835630668820137_1780_000_1800_000
+ segment-3224923476345749285_4480_000_4500_000
+ segment-14791260641858988448_1018_000_1038_000
+ segment-10786629299947667143_3440_000_3460_000
+ segment-3270384983482134275_3220_000_3240_000
+ segment-7239123081683545077_4044_370_4064_370
+ segment-3195159706851203049_2763_790_2783_790
+ segment-8123909110537564436_7220_000_7240_000
+ segment-17778522338768131809_5920_000_5940_000
+ segment-15202102284304593700_1900_000_1920_000
+ segment-13177337129001451839_9160_000_9180_000
+ segment-7324192826315818756_620_000_640_000
+ segment-4971817041565280127_780_500_800_500
+ segment-3220249619779692045_505_000_525_000
+ segment-9521653920958139982_940_000_960_000
+ segment-1146261869236413282_1680_000_1700_000
+ segment-11918003324473417938_1400_000_1420_000
+ segment-10061305430875486848_1080_000_1100_000
+ segment-15349503153813328111_2160_000_2180_000
+ segment-17790754307864212354_1520_000_1540_000
+ segment-17759280403078053118_6060_580_6080_580
+ segment-4575961016807404107_880_000_900_000
+ segment-10455472356147194054_1560_000_1580_000
+ segment-6904827860701329567_960_000_980_000
+ segment-6229371035421550389_2220_000_2240_000
+ segment-6390847454531723238_6000_000_6020_000
+ segment-4537254579383578009_3820_000_3840_000
+ segment-5144634012371033641_920_000_940_000
+ segment-9547911055204230158_1567_950_1587_950
+ segment-1737018592744049492_1960_000_1980_000
+ segment-6193696614129429757_2420_000_2440_000
+ segment-550171902340535682_2640_000_2660_000
+ segment-744006317457557752_2080_000_2100_000
+ segment-4655005625668154134_560_000_580_000
+ segment-3591015878717398163_1381_280_1401_280
+ segment-3441838785578020259_1300_000_1320_000
+ segment-10235335145367115211_5420_000_5440_000
+ segment-12321865437129862911_3480_000_3500_000
+ segment-1918764220984209654_5680_000_5700_000
+ segment-13840133134545942567_1060_000_1080_000
+ segment-7890808800227629086_6162_700_6182_700
+ segment-2656110181316327570_940_000_960_000
+ segment-990914685337955114_980_000_1000_000
+ segment-10940952441434390507_1888_710_1908_710
+ segment-473735159277431842_630_095_650_095
+ segment-3543045673995761051_460_000_480_000
+ segment-11126313430116606120_1439_990_1459_990
+ segment-7290499689576448085_3960_000_3980_000
+ segment-18311996733670569136_5880_000_5900_000
+ segment-4916527289027259239_5180_000_5200_000
+ segment-17552108427312284959_3200_000_3220_000
+ segment-5200186706748209867_80_000_100_000
+ segment-1265122081809781363_2879_530_2899_530
+ segment-6303332643743862144_5600_000_5620_000
+ segment-3112630089558008159_7280_000_7300_000
+ segment-2088865281951278665_4460_000_4480_000
+ segment-13310437789759009684_2645_000_2665_000
+ segment-13944915979337652825_4260_668_4280_668
+ segment-10526338824408452410_5714_660_5734_660
+ segment-9062286840846668802_31_000_51_000
+ segment-16911037681440249335_700_000_720_000
+ segment-4960194482476803293_4575_960_4595_960
+ segment-16336545122307923741_486_637_506_637
+ segment-2101027554826767753_2504_580_2524_580
+ segment-2922309829144504838_1840_000_1860_000
+ segment-809159138284604331_3355_840_3375_840
+ segment-15646511153936256674_1620_000_1640_000
+ segment-5870668058140631588_1180_000_1200_000
+ segment-3872781118550194423_3654_670_3674_670
+ segment-7996500550445322129_2333_304_2353_304
+ segment-16625429321676352815_1543_860_1563_860
+ segment-16208935658045135756_4412_730_4432_730
+ segment-6207195415812436731_805_000_825_000
+ segment-15482064737890453610_5180_000_5200_000
+ segment-11674150664140226235_680_000_700_000
+ segment-4604173119409817302_2820_000_2840_000
+ segment-9653249092275997647_980_000_1000_000
+ segment-11928449532664718059_1200_000_1220_000
+ segment-9758342966297863572_875_230_895_230
+ segment-6037403592521973757_3260_000_3280_000
+ segment-4114548607314119333_2780_000_2800_000
+ segment-12894036666871194216_787_000_807_000
+ segment-7741361323303179462_1230_310_1250_310
+ segment-10517728057304349900_3360_000_3380_000
+ segment-14619874262915043759_2801_090_2821_090
+ segment-3490810581309970603_11125_000_11145_000
+ segment-16485056021060230344_1576_741_1596_741
+ segment-9415086857375798767_4760_000_4780_000
+ segment-11119453952284076633_1369_940_1389_940
+ segment-2752216004511723012_260_000_280_000
+ segment-6625150143263637936_780_000_800_000
+ segment-169115044301335945_480_000_500_000
+ segment-12251442326766052580_1840_000_1860_000
+ segment-15844593126368860820_3260_000_3280_000
+ segment-16600468011801266684_1500_000_1520_000
+ segment-10023947602400723454_1120_000_1140_000
+ segment-1773696223367475365_1060_000_1080_000
+ segment-1887497421568128425_94_000_114_000
+ segment-18441113814326864765_725_000_745_000
+ segment-10444454289801298640_4360_000_4380_000
+ segment-2791302832590946720_1900_000_1920_000
+ segment-10963653239323173269_1924_000_1944_000
+ segment-11839652018869852123_2565_000_2585_000
+ segment-14004546003548947884_2331_861_2351_861
+ segment-1306458236359471795_2524_330_2544_330
+ segment-17386718718413812426_1763_140_1783_140
+ segment-8424573439186068308_3460_000_3480_000
+ segment-9058545212382992974_5236_200_5256_200
+ segment-5576800480528461086_1000_000_1020_000
+ segment-13679757109245957439_4167_170_4187_170
+ segment-16403578704435467513_5133_870_5153_870
+ segment-4164064449185492261_400_000_420_000
+ segment-3461811179177118163_1161_000_1181_000
+ segment-2415873247906962761_5460_000_5480_000
+ segment-13965460994524880649_2842_050_2862_050
+ segment-18397511418934954408_620_000_640_000
+ segment-11925224148023145510_1040_000_1060_000
+ segment-12027892938363296829_4086_280_4106_280
+ segment-33101359476901423_6720_910_6740_910
+ segment-12879640240483815315_5852_605_5872_605
+ segment-17818548625922145895_1372_430_1392_430
+ segment-4324227028219935045_1520_000_1540_000
+ segment-5083516879091912247_3600_000_3620_000
+ segment-1758724094753801109_1251_037_1271_037
+ segment-1730266523558914470_305_260_325_260
+ segment-5072733804607719382_5807_570_5827_570
+ segment-1231623110026745648_480_000_500_000
282
+ segment-4348478035380346090_1000_000_1020_000
283
+ segment-912496333665446669_1680_000_1700_000
284
+ segment-1940032764689855266_3690_210_3710_210
285
+ segment-7940496892864900543_4783_540_4803_540
286
+ segment-4723255145958809564_741_350_761_350
287
+ segment-17144150788361379549_2720_000_2740_000
288
+ segment-2739239662326039445_5890_320_5910_320
289
+ segment-14705303724557273004_3105_000_3125_000
290
+ segment-8722413665055769182_2840_000_2860_000
291
+ segment-16793466851577046940_2800_000_2820_000
292
+ segment-15331851695963211598_1620_000_1640_000
293
+ segment-3002379261592154728_2256_691_2276_691
294
+ segment-1208303279778032257_1360_000_1380_000
295
+ segment-3584210979358667442_2880_000_2900_000
296
+ segment-10500357041547037089_1474_800_1494_800
297
+ segment-2265177645248606981_2340_000_2360_000
298
+ segment-13619063687271391084_1519_680_1539_680
299
+ segment-7313718849795510302_280_000_300_000
300
+ segment-14076089808269682731_54_730_74_730
301
+ segment-11846396154240966170_3540_000_3560_000
302
+ segment-17330200445788773877_2700_000_2720_000
303
+ segment-4784689467343773295_1700_000_1720_000
304
+ segment-6142170920525844857_2080_000_2100_000
305
+ segment-14869732972903148657_2420_000_2440_000
306
+ segment-9105380625923157726_4420_000_4440_000
307
+ segment-11847506886204460250_1640_000_1660_000
308
+ segment-4292360793125812833_3080_000_3100_000
309
+ segment-6128311556082453976_2520_000_2540_000
310
+ segment-1255991971750044803_1700_000_1720_000
311
+ segment-6694593639447385226_1040_000_1060_000
312
+ segment-11718898130355901268_2300_000_2320_000
313
+ segment-11343624116265195592_5910_530_5930_530
314
+ segment-9123867659877264673_3569_950_3589_950
315
+ segment-4641822195449131669_380_000_400_000
316
+ segment-3364861183015885008_1720_000_1740_000
317
+ segment-5614471637960666943_6955_675_6975_675
318
+ segment-3966447614090524826_320_000_340_000
319
+ segment-12161824480686739258_1813_380_1833_380
320
+ segment-7120839653809570957_1060_000_1080_000
321
+ segment-4392459808686681511_5006_200_5026_200
322
+ segment-1422926405879888210_51_310_71_310
323
+ segment-7670103006580549715_360_000_380_000
324
+ segment-10072140764565668044_4060_000_4080_000
325
+ segment-16262849101474060261_3459_585_3479_585
326
+ segment-14964691552976940738_2219_229_2239_229
327
+ segment-1988987616835805847_3500_000_3520_000
328
+ segment-786582060300383668_2944_060_2964_060
329
+ segment-10734565072045778791_440_000_460_000
330
+ segment-17993467596234560701_4940_000_4960_000
331
+ segment-11004685739714500220_2300_000_2320_000
332
+ segment-9509506420470671704_4049_100_4069_100
333
+ segment-10485926982439064520_4980_000_5000_000
334
+ segment-1352150727715827110_3710_250_3730_250
335
+ segment-4457475194088194008_3100_000_3120_000
336
+ segment-10664823084372323928_4360_000_4380_000
337
+ segment-6813611334239274394_535_000_555_000
338
+ segment-5718418936283106890_1200_000_1220_000
339
+ segment-2114574223307001959_1163_280_1183_280
340
+ segment-15125792363972595336_4960_000_4980_000
341
+ segment-14183710428479823719_3140_000_3160_000
342
+ segment-8603916601243187272_540_000_560_000
343
+ segment-12896629105712361308_4520_000_4540_000
344
+ segment-4323857429732097807_1005_000_1025_000
345
+ segment-2935377810101940676_300_000_320_000
346
+ segment-14143054494855609923_4529_100_4549_100
347
+ segment-2151482270865536784_900_000_920_000
348
+ segment-17987556068410436875_520_610_540_610
349
+ segment-3425716115468765803_977_756_997_756
350
+ segment-17066133495361694802_1220_000_1240_000
351
+ segment-6792191642931213648_1522_000_1542_000
352
+ segment-15578655130939579324_620_000_640_000
353
+ segment-5268267801500934740_2160_000_2180_000
354
+ segment-6771783338734577946_6105_840_6125_840
355
+ segment-10231929575853664160_1160_000_1180_000
356
+ segment-11489533038039664633_4820_000_4840_000
357
+ segment-18380281348728758158_4820_000_4840_000
358
+ segment-9288629315134424745_4360_000_4380_000
359
+ segment-15374821596407640257_3388_480_3408_480
360
+ segment-18111897798871103675_320_000_340_000
361
+ segment-9110125340505914899_380_000_400_000
362
+ segment-7458568461947999548_700_000_720_000
363
+ segment-54293441958058219_2335_200_2355_200
364
+ segment-11060291335850384275_3761_210_3781_210
365
+ segment-12339284075576056695_1920_000_1940_000
366
+ segment-17160696560226550358_6229_820_6249_820
367
+ segment-3031519073799366723_1140_000_1160_000
368
+ segment-13271285919570645382_5320_000_5340_000
369
+ segment-8763126149209091146_1843_320_1863_320
370
+ segment-3156155872654629090_2474_780_2494_780
371
+ segment-9350921499281634194_2403_251_2423_251
372
+ segment-16578409328451172992_3780_000_3800_000
373
+ segment-11940460932056521663_1760_000_1780_000
374
+ segment-14830022845193837364_3488_060_3508_060
375
+ segment-17270469718624587995_5202_030_5222_030
376
+ segment-17959337482465423746_2840_000_2860_000
377
+ segment-16651261238721788858_2365_000_2385_000
378
+ segment-4458730539804900192_535_000_555_000
379
+ segment-15832924468527961_1564_160_1584_160
380
+ segment-11392401368700458296_1086_429_1106_429
381
+ segment-2336233899565126347_1180_000_1200_000
382
+ segment-16873108320324977627_780_000_800_000
383
+ segment-13622747960068272448_1678_930_1698_930
384
+ segment-17216329305659006368_4800_000_4820_000
385
+ segment-7950869827763684964_8685_000_8705_000
386
+ segment-3657581213864582252_340_000_360_000
387
+ segment-2899357195020129288_3723_163_3743_163
388
+ segment-9179922063516210200_157_000_177_000
389
+ segment-16087604685956889409_40_000_60_000
390
+ segment-17342274091983078806_80_000_100_000
391
+ segment-6742105013468660925_3645_000_3665_000
392
+ segment-1999080374382764042_7094_100_7114_100
393
+ segment-8582923946352460474_2360_000_2380_000
394
+ segment-15379350264706417068_3120_000_3140_000
395
+ segment-204421859195625800_1080_000_1100_000
396
+ segment-6242822583398487496_73_000_93_000
397
+ segment-6350707596465488265_2393_900_2413_900
398
+ segment-16121633832852116614_240_000_260_000
399
+ segment-13731697468004921673_4920_000_4940_000
400
+ segment-12505030131868863688_1740_000_1760_000
401
+ segment-12337317986514501583_5346_260_5366_260
402
+ segment-6234738900256277070_320_000_340_000
403
+ segment-12566399510596872945_2078_320_2098_320
404
+ segment-7921369793217703814_1060_000_1080_000
405
+ segment-3418007171190630157_3585_530_3605_530
406
+ segment-15053781258223091665_3192_117_3212_117
407
+ segment-15787777881771177481_8820_000_8840_000
408
+ segment-13862220583747475906_1260_000_1280_000
409
+ segment-9311322119128915594_5285_000_5305_000
410
+ segment-6606076833441976341_1340_000_1360_000
411
+ segment-7331965392247645851_1005_940_1025_940
412
+ segment-11113047206980595400_2560_000_2580_000
413
+ segment-15535062863944567958_1100_000_1120_000
414
+ segment-2692887320656885771_2480_000_2500_000
415
+ segment-8099457465580871094_4764_380_4784_380
416
+ segment-13033853066564892960_1040_000_1060_000
417
+ segment-9385013624094020582_2547_650_2567_650
418
+ segment-3504776317009340435_6920_000_6940_000
419
+ segment-2660301763960988190_3742_580_3762_580
420
+ segment-13186511704021307558_2000_000_2020_000
421
+ segment-16435050660165962165_3635_310_3655_310
422
+ segment-1939881723297238689_6848_040_6868_040
423
+ segment-11799592541704458019_9828_750_9848_750
424
+ segment-18403940760739364047_920_000_940_000
425
+ segment-2555987917096562599_1620_000_1640_000
426
+ segment-4337887720320812223_1857_930_1877_930
427
+ segment-9568394837328971633_466_365_486_365
428
+ segment-3635081602482786801_900_000_920_000
429
+ segment-17761959194352517553_5448_420_5468_420
430
+ segment-8938046348067069210_3800_000_3820_000
431
+ segment-5592790652933523081_667_770_687_770
432
+ segment-1191788760630624072_3880_000_3900_000
433
+ segment-12979718722917614085_1039_490_1059_490
434
+ segment-17547795428359040137_5056_070_5076_070
435
+ segment-16331619444570993520_1020_000_1040_000
436
+ segment-16646502593577530501_4878_080_4898_080
437
+ segment-6410495600874495447_5287_500_5307_500
438
+ segment-6177474146670383260_4200_000_4220_000
439
+ segment-13823509240483976870_1514_190_1534_190
440
+ segment-8120716761799622510_862_120_882_120
441
+ segment-5215905243049326497_20_000_40_000
442
+ segment-7543690094688232666_4945_350_4965_350
443
+ segment-13196796799137805454_3036_940_3056_940
444
+ segment-2961247865039433386_920_000_940_000
445
+ segment-17674974223808194792_8787_692_8807_692
446
+ segment-5526948896847934178_1039_000_1059_000
447
+ segment-4931036732523207946_10755_600_10775_600
448
+ segment-7089765864827567005_1020_000_1040_000
449
+ segment-5005815668926224220_2194_330_2214_330
450
+ segment-15266427834976906738_1620_000_1640_000
451
+ segment-6104545334635651714_2780_000_2800_000
452
+ segment-4546515828974914709_922_040_942_040
453
+ segment-9985243312780923024_3049_720_3069_720
454
+ segment-7566697458525030390_1440_000_1460_000
455
+ segment-10923963890428322967_1445_000_1465_000
456
+ segment-6638427309837298695_220_000_240_000
457
+ segment-17941839888833418904_1240_000_1260_000
458
+ segment-17407069523496279950_4354_900_4374_900
459
+ segment-7517545172000568481_2325_000_2345_000
460
+ segment-10596949720463106554_1933_530_1953_530
461
+ segment-15628918650068847391_8077_670_8097_670
462
+ segment-5846229052615948000_2120_000_2140_000
463
+ segment-18286677872269962604_3520_000_3540_000
464
+ segment-16951470340360921766_2840_000_2860_000
465
+ segment-1863454917318776530_1040_000_1060_000
466
+ segment-14098605172844003779_5084_630_5104_630
467
+ segment-8327447186504415549_5200_000_5220_000
468
+ segment-11388947676680954806_5427_320_5447_320
469
+ segment-12179768245749640056_5561_070_5581_070
470
+ segment-2590213596097851051_460_000_480_000
471
+ segment-1442753028323350651_4065_000_4085_000
472
+ segment-8399876466981146110_2560_000_2580_000
473
+ segment-16646360389507147817_3320_000_3340_000
474
+ segment-10793018113277660068_2714_540_2734_540
475
+ segment-14018515129165961775_483_260_503_260
476
+ segment-2863984611797967753_3200_000_3220_000
477
+ segment-13909033332341079321_4007_930_4027_930
478
+ segment-13390791323468600062_6718_570_6738_570
479
+ segment-13476374534576730229_240_000_260_000
480
+ segment-18141076662151909970_2755_710_2775_710
481
+ segment-7101099554331311287_5320_000_5340_000
482
+ segment-2224716024428969146_1420_000_1440_000
483
+ segment-8700094808505895018_7272_488_7292_488
484
+ segment-5731414711882954246_1990_250_2010_250
485
+ segment-4733704239941053266_960_000_980_000
486
+ segment-634378055350569306_280_000_300_000
487
+ segment-6172160122069514875_6866_560_6886_560
488
+ segment-15942468615931009553_1243_190_1263_190
489
+ segment-16224018017168210482_6353_500_6373_500
490
+ segment-18025338595059503802_571_216_591_216
491
+ segment-7885161619764516373_289_280_309_280
492
+ segment-9820553434532681355_2820_000_2840_000
493
+ segment-207754730878135627_1140_000_1160_000
494
+ segment-15882343134097151256_4820_000_4840_000
495
+ segment-9325580606626376787_4509_140_4529_140
496
+ segment-13254498462985394788_980_000_1000_000
497
+ segment-16676683078119047936_300_000_320_000
498
+ segment-3911646355261329044_580_000_600_000
499
+ segment-6740694556948402155_3040_000_3060_000
500
+ segment-14233522945839943589_100_000_120_000
501
+ segment-4017824591066644473_3000_000_3020_000
502
+ segment-14250544550818363063_880_000_900_000
503
+ segment-8822503619482926605_1080_000_1100_000
504
+ segment-6814918034011049245_134_170_154_170
505
+ segment-10391312872392849784_4099_400_4119_400
506
+ segment-4468278022208380281_455_820_475_820
507
+ segment-1005081002024129653_5313_150_5333_150
508
+ segment-2577669988012459365_1640_000_1660_000
509
+ segment-13506499849906169066_120_000_140_000
510
+ segment-8148053503558757176_4240_000_4260_000
511
+ segment-12273083120751993429_7285_000_7305_000
512
+ segment-9250355398701464051_4166_132_4186_132
513
+ segment-15533468984793020049_800_000_820_000
514
+ segment-5451442719480728410_5660_000_5680_000
515
+ segment-12511696717465549299_4209_630_4229_630
516
+ segment-1926967104529174124_5214_780_5234_780
517
+ segment-15903184480576180688_3160_000_3180_000
518
+ segment-15265053588821562107_60_000_80_000
519
+ segment-6378340771722906187_1120_000_1140_000
520
+ segment-7861168750216313148_1305_290_1325_290
521
+ segment-13142190313715360621_3888_090_3908_090
522
+ segment-972142630887801133_642_740_662_740
523
+ segment-13145971249179441231_1640_000_1660_000
524
+ segment-4702302448560822815_927_380_947_380
525
+ segment-3461228720457810721_4511_120_4531_120
526
+ segment-4058410353286511411_3980_000_4000_000
527
+ segment-11588853832866011756_2184_462_2204_462
528
+ segment-5495302100265783181_80_000_100_000
529
+ segment-3711598698808133144_2060_000_2080_000
530
+ segment-16372013171456210875_5631_040_5651_040
531
+ segment-15062351272945542584_5921_360_5941_360
532
+ segment-6935841224766931310_2770_310_2790_310
533
+ segment-3644145307034257093_3000_400_3020_400
534
+ segment-12012663867578114640_820_000_840_000
535
+ segment-16080705915014211452_620_000_640_000
536
+ segment-7912728502266478772_1202_200_1222_200
537
+ segment-1857377326903987736_80_000_100_000
538
+ segment-3919438171935923501_280_000_300_000
539
+ segment-5129792222840846899_2145_000_2165_000
540
+ segment-12974838039736660070_4586_990_4606_990
541
+ segment-2342300897175196823_1179_360_1199_360
542
+ segment-7344536712079322768_1360_000_1380_000
543
+ segment-3338044015505973232_1804_490_1824_490
544
+ segment-13181198025433053194_2620_770_2640_770
545
+ segment-16504318334867223853_480_000_500_000
546
+ segment-5973788713714489548_2179_770_2199_770
547
+ segment-3078075798413050298_890_370_910_370
548
+ segment-6561206763751799279_2348_600_2368_600
549
+ segment-3451017128488170637_5280_000_5300_000
550
+ segment-13207915841618107559_2980_000_3000_000
551
+ segment-17912777897400903477_2047_500_2067_500
552
+ segment-8126606965364870152_985_090_1005_090
553
+ segment-6433401807220119698_4560_000_4580_000
554
+ segment-7837172662136597262_1140_000_1160_000
555
+ segment-1305342127382455702_3720_000_3740_000
556
+ segment-2919021496271356282_2300_000_2320_000
557
+ segment-16735938448970076374_1126_430_1146_430
558
+ segment-8806931859563747931_1160_000_1180_000
559
+ segment-7554208726220851641_380_000_400_000
560
+ segment-14940138913070850675_5755_330_5775_330
561
+ segment-5076950993715916459_3265_000_3285_000
562
+ segment-16797668128356194527_2430_390_2450_390
563
+ segment-7038362761309539946_4207_130_4227_130
564
+ segment-14766384747691229841_6315_730_6335_730
565
+ segment-10876852935525353526_1640_000_1660_000
566
+ segment-14276116893664145886_1785_080_1805_080
567
+ segment-8487809726845917818_4779_870_4799_870
568
+ segment-14358192009676582448_3396_400_3416_400
569
+ segment-3563349510410371738_7465_000_7485_000
570
+ segment-12212767626682531382_2100_150_2120_150
571
+ segment-6559997992780479765_1039_000_1059_000
572
+ segment-3698685523057788592_4303_630_4323_630
573
+ segment-1432918953215186312_5101_320_5121_320
574
+ segment-13585809231635721258_1910_770_1930_770
575
+ segment-8345535260120974350_1980_000_2000_000
576
+ segment-4138614210962611770_2459_360_2479_360
577
+ segment-15943938987133888575_2767_300_2787_300
578
+ segment-15036582848618865396_3752_830_3772_830
579
+ segment-11252086830380107152_1540_000_1560_000
580
+ segment-141184560845819621_10582_560_10602_560
581
+ segment-6150191934425217908_2747_800_2767_800
582
+ segment-14430914081327266277_6480_000_6500_000
583
+ segment-8543158371164842559_4131_530_4151_530
584
+ segment-10212406498497081993_5300_000_5320_000
585
+ segment-14503113925613619599_975_506_995_506
586
+ segment-16341778301681295961_178_800_198_800
587
+ segment-14734824171146590110_880_000_900_000
588
+ segment-17752423643206316420_920_850_940_850
589
+ segment-13005562150845909564_3141_360_3161_360
590
+ segment-16470190748368943792_4369_490_4389_490
591
+ segment-12281202743097872109_3387_370_3407_370
592
+ segment-4013698638848102906_7757_240_7777_240
593
+ segment-3276301746183196185_436_450_456_450
594
+ segment-13519445614718437933_4060_000_4080_000
595
+ segment-1891390218766838725_4980_000_5000_000
596
+ segment-200287570390499785_2102_000_2122_000
597
+ segment-16473613811052081539_1060_000_1080_000
598
+ segment-3928923269768424494_3060_000_3080_000
599
+ segment-2475623575993725245_400_000_420_000
600
+ segment-4305539677513798673_2200_000_2220_000
601
+ segment-575209926587730008_3880_000_3900_000
602
+ segment-11017034898130016754_697_830_717_830
603
+ segment-8796914080594559459_4284_170_4304_170
604
+ segment-16552287303455735122_7587_380_7607_380
605
+ segment-1907783283319966632_3221_000_3241_000
606
+ segment-5525943706123287091_4100_000_4120_000
607
+ segment-12856053589272984699_1020_000_1040_000
608
+ segment-16153607877566142572_2262_000_2282_000
609
+ segment-2217043033232259972_2720_000_2740_000
610
+ segment-7466751345307077932_585_000_605_000
611
+ segment-5691636094473163491_6889_470_6909_470
612
+ segment-8859409804103625626_2760_000_2780_000
613
+ segment-6290334089075942139_1340_000_1360_000
614
+ segment-2698953791490960477_2660_000_2680_000
615
+ segment-3665329186611360820_2329_010_2349_010
616
+ segment-268278198029493143_1400_000_1420_000
617
+ segment-14466332043440571514_6530_560_6550_560
618
+ segment-8633296376655504176_514_000_534_000
619
+ segment-15367782110311024266_2103_310_2123_310
620
+ segment-15090871771939393635_1266_320_1286_320
621
+ segment-7850521592343484282_4576_090_4596_090
622
+ segment-12200383401366682847_2552_140_2572_140
623
+ segment-2400780041057579262_660_000_680_000
624
+ segment-12988666890418932775_5516_730_5536_730
625
+ segment-14073491244121877213_4066_056_4086_056
626
+ segment-10082223140073588526_6140_000_6160_000
627
+ segment-17782258508241656695_1354_000_1374_000
628
+ segment-4191035366928259953_1732_708_1752_708
629
+ segment-4967385055468388261_720_000_740_000
630
+ segment-2025831330434849594_1520_000_1540_000
631
+ segment-10975280749486260148_940_000_960_000
632
+ segment-7187601925763611197_4384_300_4404_300
633
+ segment-8513241054672631743_115_960_135_960
634
+ segment-16652690380969095006_2580_000_2600_000
635
+ segment-13517115297021862252_2680_000_2700_000
636
+ segment-2107164705125601090_3920_000_3940_000
637
+ segment-10226164909075980558_180_000_200_000
638
+ segment-2323851946122476774_7240_000_7260_000
639
+ segment-7920326980177504058_2454_310_2474_310
640
+ segment-7799671367768576481_260_000_280_000
641
+ segment-17874036087982478403_733_674_753_674
642
+ segment-2273990870973289942_4009_680_4029_680
643
+ segment-2330686858362435307_603_210_623_210
644
+ segment-1473681173028010305_1780_000_1800_000
645
+ segment-10724020115992582208_7660_400_7680_400
646
+ segment-10241508783381919015_2889_360_2909_360
647
+ segment-6674547510992884047_1560_000_1580_000
648
+ segment-6771922013310347577_4249_290_4269_290
649
+ segment-17356174167372765800_1720_000_1740_000
650
+ segment-3417928259332148981_7018_550_7038_550
651
+ segment-6148393791213790916_4960_000_4980_000
652
+ segment-7007702792982559244_4400_000_4420_000
653
+ segment-12956664801249730713_2840_000_2860_000
654
+ segment-14753089714893635383_873_600_893_600
655
+ segment-4986495627634617319_2980_000_3000_000
656
+ segment-10206293520369375008_2796_800_2816_800
657
+ segment-3927294516406132977_792_740_812_740
658
+ segment-13940755514149579648_821_157_841_157
659
+ segment-5572351910320677279_3980_000_4000_000
660
+ segment-580580436928611523_792_500_812_500
661
+ segment-12900898236728415654_1906_686_1926_686
662
+ segment-18136695827203527782_2860_000_2880_000
663
+ segment-18244334282518155052_2360_000_2380_000
664
+ segment-4384676699661561426_1662_670_1682_670
665
+ segment-6386303598440879824_1520_000_1540_000
666
+ segment-13667377240304615855_500_000_520_000
667
+ segment-175830748773502782_1580_000_1600_000
668
+ segment-6763005717101083473_3880_000_3900_000
669
+ segment-9175749307679169289_5933_260_5953_260
670
+ segment-14739149465358076158_4740_000_4760_000
671
+ segment-7732779227944176527_2120_000_2140_000
672
+ segment-3126522626440597519_806_440_826_440
673
+ segment-16229547658178627464_380_000_400_000
674
+ segment-17612470202990834368_2800_000_2820_000
675
+ segment-12102100359426069856_3931_470_3951_470
676
+ segment-30779396576054160_1880_000_1900_000
677
+ segment-8907419590259234067_1960_000_1980_000
678
+ segment-15021599536622641101_556_150_576_150
679
+ segment-11450298750351730790_1431_750_1451_750
680
+ segment-17539775446039009812_440_000_460_000
681
+ segment-346889320598157350_798_187_818_187
682
+ segment-9443948810903981522_6538_870_6558_870
683
+ segment-8956556778987472864_3404_790_3424_790
684
+ segment-4195774665746097799_7300_960_7320_960
685
+ segment-4854173791890687260_2880_000_2900_000
686
+ segment-6707256092020422936_2352_392_2372_392
687
+ segment-14383152291533557785_240_000_260_000
688
+ segment-2551868399007287341_3100_000_3120_000
689
+ segment-12866817684252793621_480_000_500_000
690
+ segment-3015436519694987712_1300_000_1320_000
691
+ segment-9024872035982010942_2578_810_2598_810
692
+ segment-17962792089966876718_2210_933_2230_933
693
+ segment-3577352947946244999_3980_000_4000_000
694
+ segment-5832416115092350434_60_000_80_000
695
+ segment-7253952751374634065_1100_000_1120_000
696
+ segment-17344036177686610008_7852_160_7872_160
697
+ segment-14165166478774180053_1786_000_1806_000
698
+ segment-14333744981238305769_5658_260_5678_260
699
+ segment-8302000153252334863_6020_000_6040_000
700
+ segment-6074871217133456543_1000_000_1020_000
701
+ segment-8506432817378693815_4860_000_4880_000
702
+ segment-5990032395956045002_6600_000_6620_000
703
+ segment-366934253670232570_2229_530_2249_530
704
+ segment-13573359675885893802_1985_970_2005_970
705
+ segment-13469905891836363794_4429_660_4449_660
706
+ segment-89454214745557131_3160_000_3180_000
707
+ segment-12831741023324393102_2673_230_2693_230
708
+ segment-1105338229944737854_1280_000_1300_000
709
+ segment-15611747084548773814_3740_000_3760_000
710
+ segment-933621182106051783_4160_000_4180_000
711
+ segment-2834723872140855871_1615_000_1635_000
712
+ segment-10837554759555844344_6525_000_6545_000
713
+ segment-12306251798468767010_560_000_580_000
714
+ segment-12374656037744638388_1412_711_1432_711
715
+ segment-15224741240438106736_960_000_980_000
716
+ segment-1943605865180232897_680_000_700_000
717
+ segment-17065833287841703_2980_000_3000_000
718
+ segment-2736377008667623133_2676_410_2696_410
719
+ segment-5183174891274719570_3464_030_3484_030
720
+ segment-1457696187335927618_595_027_615_027
721
+ segment-10289507859301986274_4200_000_4220_000
722
+ segment-15724298772299989727_5386_410_5406_410
723
+ segment-17860546506509760757_6040_000_6060_000
724
+ segment-14127943473592757944_2068_000_2088_000
725
+ segment-18446264979321894359_3700_000_3720_000
726
+ segment-4575389405178805994_4900_000_4920_000
727
+ segment-2105808889850693535_2295_720_2315_720
728
+ segment-2367305900055174138_1881_827_1901_827
729
+ segment-8133434654699693993_1162_020_1182_020
730
+ segment-14262448332225315249_1280_000_1300_000
731
+ segment-15496233046893489569_4551_550_4571_550
732
+ segment-13178092897340078601_5118_604_5138_604
733
+ segment-10868756386479184868_3000_000_3020_000
734
+ segment-5302885587058866068_320_000_340_000
735
+ segment-11048712972908676520_545_000_565_000
736
+ segment-7119831293178745002_1094_720_1114_720
737
+ segment-5372281728627437618_2005_000_2025_000
738
+ segment-902001779062034993_2880_000_2900_000
739
+ segment-14956919859981065721_1759_980_1779_980
740
+ segment-7493781117404461396_2140_000_2160_000
741
+ segment-3731719923709458059_1540_000_1560_000
742
+ segment-9472420603764812147_850_000_870_000
743
+ segment-13356997604177841771_3360_000_3380_000
744
+ segment-4423389401016162461_4235_900_4255_900
745
+ segment-2094681306939952000_2972_300_2992_300
746
+ segment-14244512075981557183_1226_840_1246_840
747
+ segment-14486517341017504003_3406_349_3426_349
748
+ segment-17135518413411879545_1480_000_1500_000
749
+ segment-1024360143612057520_3580_000_3600_000
750
+ segment-2506799708748258165_6455_000_6475_000
751
+ segment-4759225533437988401_800_000_820_000
752
+ segment-13299463771883949918_4240_000_4260_000
753
+ segment-8331804655557290264_4351_740_4371_740
754
+ segment-11434627589960744626_4829_660_4849_660
755
+ segment-260994483494315994_2797_545_2817_545
756
+ segment-4612525129938501780_340_000_360_000
757
+ segment-9243656068381062947_1297_428_1317_428
758
+ segment-16767575238225610271_5185_000_5205_000
759
+ segment-17152649515605309595_3440_000_3460_000
760
+ segment-662188686397364823_3248_800_3268_800
761
+ segment-272435602399417322_2884_130_2904_130
762
+ segment-17703234244970638241_220_000_240_000
763
+ segment-15096340672898807711_3765_000_3785_000
764
+ segment-18252111882875503115_378_471_398_471
765
+ segment-9231652062943496183_1740_000_1760_000
766
+ segment-13941626351027979229_3363_930_3383_930
767
+ segment-14811410906788672189_373_113_393_113
768
+ segment-8079607115087394458_1240_000_1260_000
769
+ segment-3915587593663172342_10_000_30_000
770
+ segment-11406166561185637285_1753_750_1773_750
771
+ segment-16751706457322889693_4475_240_4495_240
772
+ segment-15948509588157321530_7187_290_7207_290
773
+ segment-4690718861228194910_1980_000_2000_000
774
+ segment-4013125682946523088_3540_000_3560_000
775
+ segment-5373876050695013404_3817_170_3837_170
776
+ segment-2335854536382166371_2709_426_2729_426
777
+ segment-6183008573786657189_5414_000_5434_000
778
+ segment-12940710315541930162_2660_000_2680_000
779
+ segment-3039251927598134881_1240_610_1260_610
780
+ segment-7988627150403732100_1487_540_1507_540
781
+ segment-6680764940003341232_2260_000_2280_000
782
+ segment-18333922070582247333_320_280_340_280
783
+ segment-8137195482049459160_3100_000_3120_000
784
+ segment-4490196167747784364_616_569_636_569
785
+ segment-7163140554846378423_2717_820_2737_820
786
+ segment-7932945205197754811_780_000_800_000
787
+ segment-1906113358876584689_1359_560_1379_560
788
+ segment-17244566492658384963_2540_000_2560_000
789
+ segment-5574146396199253121_6759_360_6779_360
790
+ segment-271338158136329280_2541_070_2561_070
791
+ segment-18331704533904883545_1560_000_1580_000
792
+ segment-1071392229495085036_1844_790_1864_790
793
+ segment-15959580576639476066_5087_580_5107_580
794
+ segment-10203656353524179475_7625_000_7645_000
795
+ segment-11616035176233595745_3548_820_3568_820
796
+ segment-17626999143001784258_2760_000_2780_000
797
+ segment-3651243243762122041_3920_000_3940_000
798
+ segment-6324079979569135086_2372_300_2392_300
799
+ segment-8888517708810165484_1549_770_1569_770
800
+ segment-6637600600814023975_2235_000_2255_000
801
+ segment-1505698981571943321_1186_773_1206_773
802
+ segment-7650923902987369309_2380_000_2400_000
803
+ segment-5847910688643719375_180_000_200_000
804
+ segment-7799643635310185714_680_000_700_000
805
+ segment-18045724074935084846_6615_900_6635_900
806
+ segment-10247954040621004675_2180_000_2200_000
807
+ segment-6001094526418694294_4609_470_4629_470
808
+ segment-6161542573106757148_585_030_605_030
809
+ segment-3077229433993844199_1080_000_1100_000
810
+ segment-13336883034283882790_7100_000_7120_000
811
+ segment-4764167778917495793_860_000_880_000
812
+ segment-12358364923781697038_2232_990_2252_990
813
+ segment-9265793588137545201_2981_960_3001_960
814
+ segment-12820461091157089924_5202_916_5222_916
815
+ segment-17763730878219536361_3144_635_3164_635
816
+ segment-12657584952502228282_3940_000_3960_000
817
+ segment-967082162553397800_5102_900_5122_900
818
+ segment-9041488218266405018_6454_030_6474_030
819
+ segment-16213317953898915772_1597_170_1617_170
820
+ segment-9579041874842301407_1300_000_1320_000
821
+ segment-15488266120477489949_3162_920_3182_920
822
+ segment-15396462829361334065_4265_000_4285_000
823
+ segment-17694030326265859208_2340_000_2360_000
824
+ segment-14663356589561275673_935_195_955_195
825
+ segment-9114112687541091312_1100_000_1120_000
826
+ segment-1331771191699435763_440_000_460_000
827
+ segment-17136314889476348164_979_560_999_560
828
+ segment-14931160836268555821_5778_870_5798_870
829
+ segment-5772016415301528777_1400_000_1420_000
830
+ segment-4246537812751004276_1560_000_1580_000
831
+ segment-11037651371539287009_77_670_97_670
832
+ segment-14624061243736004421_1840_000_1860_000
833
+ segment-10689101165701914459_2072_300_2092_300
834
+ segment-4409585400955983988_3500_470_3520_470
835
+ segment-8845277173853189216_3828_530_3848_530
836
+ segment-8679184381783013073_7740_000_7760_000
837
+ segment-10335539493577748957_1372_870_1392_870
838
+ segment-5289247502039512990_2640_000_2660_000
839
+ segment-18024188333634186656_1566_600_1586_600
840
+ segment-13982731384839979987_1680_000_1700_000
841
+ segment-18305329035161925340_4466_730_4486_730
842
+ segment-11660186733224028707_420_000_440_000
843
+ segment-14107757919671295130_3546_370_3566_370
844
+ segment-1464917900451858484_1960_000_1980_000
845
+ segment-11387395026864348975_3820_000_3840_000
846
+ segment-11901761444769610243_556_000_576_000
847
+ segment-14300007604205869133_1160_000_1180_000
848
+ segment-9164052963393400298_4692_970_4712_970
849
+ segment-4426410228514970291_1620_000_1640_000
850
+ segment-4816728784073043251_5273_410_5293_410
scripts/translate.py ADDED
@@ -0,0 +1,108 @@
+ #!/usr/bin/env python
+ # coding: utf-8
+
+ import pathlib
+ import torch
+ import yaml
+ import sys
+ import os
+
+ from math import pi
+ from PIL import Image
+ from munch import Munch
+ from argparse import ArgumentParser as AP
+ from torchvision.transforms import ToPILImage, ToTensor
+
+ p_mod = str(pathlib.Path('.').absolute())
+ sys.path.append(p_mod.replace("/scripts", ""))
+
+ from data.base_dataset import get_transform
+ from networks import create_model
+
+ device = 'cuda' if torch.cuda.is_available() else 'cpu'
+
+ def printProgressBar(i, total, postText):
+     n_bar = 20  # size of the progress bar
+     j = i / total
+     sys.stdout.write('\r')
+     sys.stdout.write(f"[{'=' * int(n_bar * j):{n_bar}s}] {int(100 * j)}% {postText}")
+     sys.stdout.flush()
+
+ def inference(model, opt, A_path, phi):
+     t_phi = torch.tensor(phi)
+     A_img = Image.open(A_path).convert('RGB')
+     A = get_transform(opt, convert=False)(A_img)
+     # Map the image from [0, 1] to [-1, 1] and add a batch dimension
+     img_real = (((ToTensor()(A)) * 2) - 1).unsqueeze(0)
+     img_fake = model.forward(img_real.to(device), t_phi.to(device))
+
+     # Map the output back from [-1, 1] to [0, 1] before converting to PIL
+     return ToPILImage()((img_fake[0].cpu() + 1) / 2)
+
+ def main(cmdline):
+     if cmdline.checkpoint is None:
+         # Load names of directories inside /logs
+         p = pathlib.Path('./logs')
+         list_run_id = [x.name for x in p.iterdir() if x.is_dir()]
+
+         RUN_ID = list_run_id[0]
+         root_dir = os.path.join('logs', RUN_ID, 'tensorboard', 'default', 'version_0')
+         p = pathlib.Path(root_dir + '/checkpoints')
+         # Load the list of checkpoints; use the most recent one by default
+         list_checkpoint = [x.name for x in p.iterdir() if 'iter' in x.name]
+         list_checkpoint.sort(reverse=True, key=lambda x: int(x.split('_')[1].split('.pth')[0]))
+
+         CHECKPOINT = list_checkpoint[0]
+     else:
+         RUN_ID = os.path.basename(cmdline.checkpoint.split("/tensorboard")[0])
+         root_dir = os.path.dirname(cmdline.checkpoint.split("/checkpoints")[0])
+         CHECKPOINT = os.path.basename(cmdline.checkpoint.split('checkpoints/')[1])
+
+     print(f"Load checkpoint {CHECKPOINT} from {RUN_ID}")
+
+     # Load the hyperparameters of the run
+     with open(os.path.join(root_dir, 'hparams.yaml')) as cfg_file:
+         opt = Munch(yaml.safe_load(cfg_file))
+
+     opt.no_flip = True
+     # Build the model and load the checkpoint
+     model = create_model(opt)
+     model = model.load_from_checkpoint(os.path.join(root_dir, 'checkpoints', CHECKPOINT))
+     # Transfer the model to the selected device
+     model.to(device)
+
+     # Load the paths of all files contained in the dataset directory
+     p = pathlib.Path(cmdline.load_path)
+     dataset_paths = [str(x.relative_to(cmdline.load_path)) for x in p.iterdir()]
+     dataset_paths.sort()
+
+     # Keep only the files whose name contains the given string
+     sequence_name = []
+     if cmdline.sequence is not None:
+         for file in dataset_paths:
+             if cmdline.sequence in file:
+                 sequence_name.append(file)
+     else:
+         sequence_name = dataset_paths
+
+     # Create the output directory if it doesn't exist
+     os.makedirs(cmdline.save_path, exist_ok=True)
+
+     i = 0
+     for path_img in sequence_name:
+         printProgressBar(i, len(sequence_name), path_img)
+         # Loop over phi values from 0 to 2*pi with increments of 0.2
+         for phi in torch.arange(0, 2 * pi, 0.2):
+             # Forward the image through the model with the specified ɸ
+             out_img = inference(model, opt, os.path.join(cmdline.load_path, path_img), phi)
+             # Save the generated image with ɸ encoded in the filename
+             save_path = os.path.join(cmdline.save_path, f"{os.path.splitext(os.path.basename(path_img))[0]}_phi_{phi:.1f}.png")
+             out_img.save(save_path)
+         i += 1
+
+ if __name__ == '__main__':
+     ap = AP()
+     ap.add_argument('--load_path', default='/datasets/waymo_comogan/val/sunny/Day/', type=str, help='Set a path to load the dataset to translate')
+     ap.add_argument('--save_path', default='/CoMoGan/images/', type=str, help='Set a path to save the dataset')
+     ap.add_argument('--sequence', default=None, type=str, help='Set a sequence; only images whose name contains this string will be used')
+     ap.add_argument('--checkpoint', default=None, type=str, help='Set a path to the checkpoint that you want to use')
+     ap.add_argument('--phi', default=0.0, type=float, help='Choose the angle of the sun 𝜙 between [0,2𝜋], which maps to a sun elevation ∈ [+30◦,−40◦]')
+     main(ap.parse_args())
+     print("\n")
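Note on the tensor pre/post-processing in `inference` above: the image is mapped from the `[0, 1]` range produced by `ToTensor` into the `[-1, 1]` range the generator consumes, then mapped back before saving. A minimal, self-contained sketch of that round trip (a random tensor stands in for a real frame; no model is involved):

import torch
from torchvision.transforms import ToPILImage, ToTensor

img = ToPILImage()(torch.rand(3, 64, 64))   # stand-in for a real RGB frame

x = ToTensor()(img)          # [0, 1], shape (3, H, W)
x = (x * 2) - 1              # [-1, 1], the range the generator expects
x = x.unsqueeze(0)           # add a batch dimension -> (1, 3, H, W)

# ... model.forward(x, t_phi) would run here ...

out = ToPILImage()((x[0] + 1) / 2)   # back to [0, 1] before saving
out.save('roundtrip.png')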
train.py ADDED
@@ -0,0 +1,59 @@
+ import time
+ import os
+ os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # Disable TensorFlow logging
+
+ from options import get_options
+ from data import create_dataset
+ from networks import create_model, get_model_options
+ from argparse import ArgumentParser as AP
+
+ import pytorch_lightning as pl
+ from pytorch_lightning.loggers import TensorBoardLogger
+
+ from util.callbacks import LogAndCheckpointEveryNSteps
+ from human_id import generate_id
+
+ def start(cmdline):
+
+     pl.trainer.seed_everything(cmdline.seed)
+     opt = get_options(cmdline)
+
+     dataset = create_dataset(opt)  # create a dataset given opt.dataset_mode and other options
+     model = create_model(opt)      # create a model given opt.model and other options
+
+     callbacks = []
+
+     logger = None
+     if not cmdline.debug:
+         root_dir = os.path.join('logs/', generate_id()) if cmdline.id is None else os.path.join('logs/', cmdline.id)
+         logger = TensorBoardLogger(save_dir=os.path.join(root_dir, 'tensorboard'))
+         logger.log_hyperparams(opt)
+         callbacks.append(LogAndCheckpointEveryNSteps(save_step_frequency=opt.save_latest_freq,
+                                                      viz_frequency=opt.display_freq,
+                                                      log_frequency=opt.print_freq))
+     else:
+         root_dir = os.path.join('/tmp', generate_id())
+
+     precision = 16 if cmdline.mixed_precision else 32
+
+     trainer = pl.Trainer(default_root_dir=os.path.join(root_dir, 'checkpoints'), callbacks=callbacks,
+                          gpus=cmdline.gpus, logger=logger, precision=precision, amp_level='O1')
+     trainer.fit(model, dataset)
+
+
+ if __name__ == '__main__':
+     ap = AP()
+     ap.add_argument('--id', default=None, type=str, help='Set an existing uuid to resume a training')
+     ap.add_argument('--debug', default=False, action='store_true', help='Disables experiment saving')
+     ap.add_argument('--gpus', default=[0], type=int, nargs='+', help='GPUs to train on')
+     ap.add_argument('--model', default='comomunit', type=str, help='Choose model for training')
+     ap.add_argument('--data_importer', default='day2timelapse', type=str, help='Module name of the dataset importer')
+     ap.add_argument('--path_data', default='/datasets/waymo_comogan/train/', type=str, help='Path to the dataset')
+     ap.add_argument('--learning_rate', default=0.0001, type=float, help='Learning rate')
+     ap.add_argument('--scheduler_policy', default='step', type=str, help='Scheduler policy')
+     ap.add_argument('--decay_iters_step', default=200000, type=int, help='Decay iterations step')
+     ap.add_argument('--decay_step_gamma', default=0.5, type=float, help='Decay step gamma')
+     ap.add_argument('--seed', default=1, type=int, help='Random seed')
+     ap.add_argument('--mixed_precision', default=False, action='store_true', help='Use mixed precision to reduce memory usage')
+     start(ap.parse_args())
+
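Since run directories are created under logs/ with names from human_id.generate_id(), resuming with --id requires the name of an existing run directory (e.g. `python train.py --id <run_id>`). A small sketch, not part of the repository, for listing the run ids that can be resumed:

import pathlib

# Print every run id under ./logs; any of these can be passed as --id
for run_id in sorted(p.name for p in pathlib.Path('logs').iterdir() if p.is_dir()):
    print(run_id)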
util/__init__.py ADDED
@@ -0,0 +1,11 @@
+ """This package includes a miscellaneous collection of useful helper functions."""
+ from torch.nn import DataParallel
+
+ import sys
+
+ class DataParallelPassthrough(DataParallel):
+     def __getattr__(self, name):
+         try:
+             return super().__getattr__(name)
+         except AttributeError:
+             return getattr(self.module, name)
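A quick illustration of what the passthrough buys: a plain DataParallel wrapper hides user-defined attributes of the wrapped module behind AttributeError, while DataParallelPassthrough falls back to the underlying module. A toy sketch (the Toy class and custom_method are hypothetical, for illustration only):

import torch.nn as nn
from util import DataParallelPassthrough

class Toy(nn.Module):  # hypothetical module for illustration
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def custom_method(self):  # hypothetical user-defined method
        return 'hello'

wrapped = DataParallelPassthrough(Toy())
print(wrapped.custom_method())  # 'hello'; plain nn.DataParallel would raise AttributeError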
util/callbacks.py ADDED
@@ -0,0 +1,41 @@
+ import pytorch_lightning as pl
+ from hashlib import md5
+ import os
+
+ class LogAndCheckpointEveryNSteps(pl.Callback):
+     """
+     Save a checkpoint/logs every N steps
+     """
+
+     def __init__(
+         self,
+         save_step_frequency=50,
+         viz_frequency=5,
+         log_frequency=5
+     ):
+         self.save_step_frequency = save_step_frequency
+         self.viz_frequency = viz_frequency
+         self.log_frequency = log_frequency
+
+     def on_batch_end(self, trainer: pl.Trainer, _):
+         global_step = trainer.global_step
+
+         # Saving checkpoint
+         if global_step % self.save_step_frequency == 0 and global_step != 0:
+             filename = "iter_{}.pth".format(global_step)
+             ckpt_path = os.path.join(trainer.checkpoint_callback.dirpath, filename)
+             trainer.save_checkpoint(ckpt_path)
+
+         # Logging losses
+         if global_step % self.log_frequency == 0 and global_step != 0:
+             trainer.model.log_current_losses()
+
+         # Image visualization
+         if global_step % self.viz_frequency == 0 and global_step != 0:
+             trainer.model.log_current_visuals()
+
+ class Hash(pl.Callback):
+
+     def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
+         if batch_idx == 99:
+             print("Hash " + md5(pl_module.state_dict()["netG_B.dec.model.4.conv.weight"].cpu().detach().numpy()).hexdigest())
zurich_000116_000019_leftImg8bit_1.png ADDED

Git LFS Details

  • SHA256: 6a8fb9983859a41887309ff0548c0677207e3c022764e0fa22ab0366da247985
  • Pointer size: 131 Bytes
  • Size of remote file: 308 kB