# Only a Matter of Style: Age Transformation Using a Style-Based Regression Model (SIGGRAPH 2021)

> The task of age transformation illustrates the change of an individual's appearance over time. Accurately modeling this complex transformation over an input facial image is extremely challenging as it requires making convincing and possibly large changes to facial features and head shape, while still preserving the input identity. In this work, we present an image-to-image translation method that learns to directly encode real facial images into the latent space of a pre-trained unconditional GAN (e.g., StyleGAN) subject to a given aging shift. We employ a pre-trained age regression network used to explicitly guide the encoder to generate the latent codes corresponding to the desired age. In this formulation, our method approaches the continuous aging process as a regression task between the input age and desired target age, providing fine-grained control on the generated image. Moreover, unlike other approaches that operate solely in the latent space using a prior on the path controlling age, our method learns a more disentangled, non-linear path. We demonstrate that the end-to-end nature of our approach, coupled with the rich semantic latent space of StyleGAN, allows for further editing of the generated images. Qualitative and quantitative evaluations show the advantages of our method compared to state-of-the-art approaches.

<a href="https://arxiv.org/abs/2102.02754"><img src="https://img.shields.io/badge/arXiv-2102.02754-b31b1b.svg" height=22.5></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" height=22.5></a>

<a href="https://www.youtube.com/watch?v=zDTUbtmUbG8"><img src="https://img.shields.io/static/v1?label=Two Minute Papers&message=SAM Video&color=red" height=22.5></a>
<a href="https://youtu.be/X_pYC_LtBFw"><img src="https://img.shields.io/static/v1?label=SIGGRAPH 2021&message=5 Minute Video&color=red" height=22.5></a>
<a href="https://replicate.ai/yuval-alaluf/sam"><img src="https://img.shields.io/static/v1?label=Replicate&message=Demo and Docker Image&color=darkgreen" height=22.5></a>

Inference Notebook: <a href="http://colab.research.google.com/github/yuval-alaluf/SAM/blob/master/notebooks/inference_playground.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" height=22.5></a>
Animation Notebook: <a href="http://colab.research.google.com/github/yuval-alaluf/SAM/blob/master/notebooks/animation_inference_playground.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" height=22.5></a>

<p align="center">
<img src="docs/teaser.jpeg" width="800px"/>
</p>

## Description
Official Implementation of our Style-based Age Manipulation (SAM) paper for both training and evaluation. SAM allows modeling fine-grained age transformation using a single input facial image.

<p align="center">
<img src="docs/2195.jpg" width="800px"/>
<img src="docs/1936.jpg" width="800px"/>
</p>

## Table of Contents
* [Getting Started](#getting-started)
  + [Prerequisites](#prerequisites)
  + [Installation](#installation)
* [Pretrained Models](#pretrained-models)
* [Training](#training)
  + [Preparing your Data](#preparing-your-data)
  + [Training SAM](#training-sam)
  + [Additional Notes](#additional-notes)
* [Notebooks](#notebooks)
  + [Inference Notebook](#inference-notebook)
  + [MP4 Notebook](#mp4-notebook)
* [Testing](#testing)
  + [Inference](#inference)
  + [Side-by-Side Inference](#side-by-side-inference)
  + [Reference-Guided Inference](#reference-guided-inference)
  + [Style Mixing](#style-mixing)
* [Repository structure](#repository-structure)
* [Credits](#credits)
* [Acknowledgments](#acknowledgments)
* [Citation](#citation)

## Getting Started
### Prerequisites
- Linux or macOS
- NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
- Python 3

### Installation
- Dependencies:
We recommend running this repository using [Anaconda](https://docs.anaconda.com/anaconda/install/).
All dependencies for defining the environment are provided in `environment/sam_env.yaml`.
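
For example, a minimal sketch of setting up the environment with conda (this assumes the environment defined in `environment/sam_env.yaml` is named `sam_env`; adjust the name to match your copy of the file):

```
conda env create -f environment/sam_env.yaml
conda activate sam_env
```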

## Pretrained Models
Please download the pretrained aging model from the following links.

| Path | Description
| :--- | :----------
|[SAM](https://drive.google.com/file/d/1XyumF6_fdAxFmxpFcmPf-q84LU_22EMC/view?usp=sharing) | SAM trained on the FFHQ dataset for age transformation.

You can run the following to download the pretrained SAM model and the face landmarks file to the expected locations:

```
mkdir pretrained_models
pip install gdown
gdown "https://drive.google.com/u/0/uc?id=1XyumF6_fdAxFmxpFcmPf-q84LU_22EMC&export=download" -O pretrained_models/sam_ffhq_aging.pt
wget "https://github.com/italojs/facial-landmarks-recognition/raw/master/shape_predictor_68_face_landmarks.dat"
```

In addition, we provide various auxiliary models needed for training your own SAM model from scratch.
This includes the pretrained pSp encoder model for generating the encodings of the input image and the aging classifier
used to compute the aging loss during training.

| Path | Description
| :--- | :----------
|[pSp Encoder](https://drive.google.com/file/d/1bMTNWkh5LArlaWSc_wa8VKyq2V42T2z0/view?usp=sharing) | pSp taken from [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel) trained on the FFHQ dataset for StyleGAN inversion.
|[FFHQ StyleGAN](https://drive.google.com/file/d/1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT/view?usp=sharing) | StyleGAN model pretrained on FFHQ taken from [rosinality](https://github.com/rosinality/stylegan2-pytorch) with 1024x1024 output resolution.
|[IR-SE50 Model](https://drive.google.com/file/d/1KW7bjndL3QG3sxBbZxreGHigcCCpsDgn/view?usp=sharing) | Pretrained IR-SE50 model taken from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) for use in our ID loss during training.
|[VGG Age Classifier](https://drive.google.com/file/d/1atzjZm_dJrCmFWCqWlyspSpr3nI6Evsh/view?usp=sharing) | VGG age classifier from DEX, fine-tuned on the FFHQ-Aging dataset, for use in our aging loss.

By default, we assume that all auxiliary models are downloaded and saved to the directory `pretrained_models`.
However, you may use your own paths by changing the necessary values in `configs/paths_config.py`.
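
As a sketch, the auxiliary models can be fetched with `gdown` in the same way as the SAM model above. The Google Drive IDs are taken from the links in the table; the output filenames are only suggestions and should match whatever paths you set in `configs/paths_config.py`:

```
# Output filenames below are placeholders; point configs/paths_config.py at the names you choose.
gdown "https://drive.google.com/u/0/uc?id=1bMTNWkh5LArlaWSc_wa8VKyq2V42T2z0&export=download" -O pretrained_models/psp_ffhq_encode.pt       # pSp encoder
gdown "https://drive.google.com/u/0/uc?id=1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT&export=download" -O pretrained_models/stylegan2_ffhq_1024.pt   # FFHQ StyleGAN
gdown "https://drive.google.com/u/0/uc?id=1KW7bjndL3QG3sxBbZxreGHigcCCpsDgn&export=download" -O pretrained_models/model_ir_se50.pth        # IR-SE50 (ID loss)
gdown "https://drive.google.com/u/0/uc?id=1atzjZm_dJrCmFWCqWlyspSpr3nI6Evsh&export=download" -O pretrained_models/dex_age_classifier.pth   # VGG age classifier (aging loss)
```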

## Training
### Preparing your Data
Please refer to `configs/paths_config.py` to define the necessary data paths and model paths for training and inference.
Then, refer to `configs/data_configs.py` to define the source/target data paths for the train and test sets as well as the
transforms to be used for training and inference.

As an example, we can first go to `configs/paths_config.py` and define:
```
dataset_paths = {
    'ffhq': '/path/to/ffhq/images256x256',
    'celeba_test': '/path/to/CelebAMask-HQ/test_img',
}
```
Then, in `configs/data_configs.py`, we define:
```
DATASETS = {
    'ffhq_aging': {
        'transforms': transforms_config.AgingTransforms,
        'train_source_root': dataset_paths['ffhq'],
        'train_target_root': dataset_paths['ffhq'],
        'test_source_root': dataset_paths['celeba_test'],
        'test_target_root': dataset_paths['celeba_test'],
    }
}
```
When defining the datasets for training and inference, we will use the values defined in the above dictionary.

### Training SAM
The main training script can be found in `scripts/train.py`.
Intermediate training results are saved to `opts.exp_dir`. This includes checkpoints, train outputs, and test outputs.
Additionally, if you have tensorboard installed, you can visualize tensorboard logs in `opts.exp_dir/logs`.

Training SAM with the settings used in the paper can be done by running the following command:
```
python scripts/train.py \
--dataset_type=ffhq_aging \
--exp_dir=/path/to/experiment \
--workers=6 \
--batch_size=6 \
--test_batch_size=6 \
--test_workers=6 \
--val_interval=2500 \
--save_interval=10000 \
--start_from_encoded_w_plus \
--id_lambda=0.1 \
--lpips_lambda=0.1 \
--lpips_lambda_aging=0.1 \
--lpips_lambda_crop=0.6 \
--l2_lambda=0.25 \
--l2_lambda_aging=0.25 \
--l2_lambda_crop=1 \
--w_norm_lambda=0.005 \
--aging_lambda=5 \
--cycle_lambda=1 \
--input_nc=4 \
--target_age=uniform_random \
--use_weighted_id_loss
```

### Additional Notes
- See `options/train_options.py` for all training-specific flags.
- Note that using the flag `--start_from_encoded_w_plus` requires you to specify the path to the pretrained pSp encoder.
By default, this path is taken from `configs.paths_config.model_paths['pretrained_psp']`.
- If you wish to resume training from a specific checkpoint (e.g., a pretrained SAM model), you may do so using `--checkpoint_path`, as sketched below.
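
The following is a minimal sketch of such a resumed run; the checkpoint path and experiment directory are placeholders, and the remaining loss-weight flags from the full training command above can be appended as needed:

```
# Resume training from a previously saved SAM checkpoint (paths are placeholders).
python scripts/train.py \
--dataset_type=ffhq_aging \
--exp_dir=/path/to/resumed_experiment \
--checkpoint_path=pretrained_models/sam_ffhq_aging.pt \
--start_from_encoded_w_plus \
--input_nc=4 \
--target_age=uniform_random \
--use_weighted_id_loss
```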

## Notebooks
### Inference Notebook
To help visualize the results of SAM we provide a Jupyter notebook found in `notebooks/inference_playground.ipynb`.
The notebook will download the pretrained aging model and run inference on the images found in `notebooks/images`.

In addition, [Replicate](https://replicate.ai/) has created a demo for SAM where you can easily upload an image and run SAM on a desired set of ages! Check
out the demo [here](https://replicate.ai/yuval-alaluf/sam).

### MP4 Notebook
To show full lifespan results using SAM, we provide an additional notebook, `notebooks/animation_inference_playground.ipynb`, that will
run aging on multiple ages between 0 and 100 and interpolate between the results to display full aging.
The results will be saved as MP4 files in `notebooks/animations` showing the aging and de-aging results.
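
If you prefer to run the notebooks locally rather than in Colab, a minimal sketch (assuming Jupyter is installed alongside the repository dependencies) is:

```
pip install notebook
jupyter notebook notebooks/animation_inference_playground.ipynb
```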

## Testing
### Inference
Having trained your model, or if you are using a pretrained SAM model, you can use `scripts/inference.py` to run inference
on a set of images.
For example,
```
python scripts/inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--couple_outputs \
--target_age=0,10,20,30,40,50,60,70,80
```
Additional notes to consider:
- During inference, the options used during training are loaded from the saved checkpoint and are then updated using the
test options passed to the inference script.
- Adding the flag `--couple_outputs` will save an additional image containing the input and output images side-by-side in the sub-directory
`inference_coupled`. Otherwise, only the output image is saved to the sub-directory `inference_results`.
- In the above example, we will run age transformation with target ages 0,10,...,80.
- The results for each target age are saved to the sub-directories `inference_results/TARGET_AGE` and `inference_coupled/TARGET_AGE`.
- By default, the images will be saved at a resolution of 1024x1024, the original output size of StyleGAN.
- If you wish to save outputs resized to a resolution of 256x256, you can do so by adding the flag `--resize_outputs`.

### Side-by-Side Inference
The above inference script will save each aging result in a different sub-directory for each target age. Sometimes,
however, it is more convenient to save all aging results of a given input side-by-side like the following:

<p align="center">
<img src="docs/866.jpg" width="800px"/>
</p>

To do so, we provide the script `inference_side_by_side.py`, which works in a similar manner to the regular inference script:
```
python scripts/inference_side_by_side.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--target_age=0,10,20,30,40,50,60,70,80
```
Here, all aging results 0,10,...,80 will be saved side-by-side with the original input image.

### Reference-Guided Inference
In the paper, we demonstrated how one can perform style mixing on the fine-level style inputs with a reference image
to control global features such as hair color. For example,

<p align="center">
<img src="docs/1005_style_mixing.jpg" width="800px"/>
</p>

To perform style mixing using reference images, we provide the script `reference_guided_inference.py`. Here,
we first perform aging using the specified target age(s). Then, style mixing is performed using the specified
reference images and the specified layers. For example, one can run:
```
python scripts/reference_guided_inference.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--ref_images_paths_file=/path/to/ref_list.txt \
--latent_mask=8,9 \
--target_age=50,60,70,80
```
Here, the reference images should be specified in the file defined by `--ref_images_paths_file` and should have the
following format:
```
/path/to/reference/1.jpg
/path/to/reference/2.jpg
/path/to/reference/3.jpg
/path/to/reference/4.jpg
/path/to/reference/5.jpg
```
In the above example, we will perform aging using 4 different target ages. For each target age, we first transform the
test samples defined by `--data_path` and then perform style mixing on layers 8,9 as defined by `--latent_mask`.
The results of each target age are saved in its own sub-directory.

### Style Mixing
Instead of performing style mixing using a reference image, you can perform style mixing using randomly generated
w latent vectors by running the script `style_mixing.py`. This script works in a similar manner to the reference-guided
inference, except that you do not need to specify the `--ref_images_paths_file` flag.
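
For example, such a call might look like the following sketch, which mirrors the reference-guided command above without the reference list (additional flags may apply depending on your checkpoint):

```
python scripts/style_mixing.py \
--exp_dir=/path/to/experiment \
--checkpoint_path=experiment/checkpoints/best_model.pt \
--data_path=/path/to/test_data \
--test_batch_size=4 \
--test_workers=4 \
--latent_mask=8,9 \
--target_age=50,60,70,80
```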

## Repository structure
| Path | Description <img width=200>
| :--- | :---
| SAM | Repository root folder
| ├ configs | Folder containing configs defining model/data paths and data transforms
| ├ criteria | Folder containing various loss criteria for training
| ├ datasets | Folder with various dataset objects and augmentations
| ├ docs | Folder containing images displayed in the README
| ├ environment | Folder containing the Anaconda environment used in our experiments
| ├ models | Folder containing all the models and training objects
| │ ├ encoders | Folder containing various architecture implementations
| │ ├ stylegan2 | StyleGAN2 model from [rosinality](https://github.com/rosinality/stylegan2-pytorch)
| │ ├ psp.py | Implementation of the pSp encoder
| │ └ dex_vgg.py | Implementation of the DEX VGG classifier used in computing the aging loss
| ├ notebooks | Folder with the Jupyter notebooks containing the SAM inference playgrounds
| ├ options | Folder with training and test command-line options
| ├ scripts | Folder with running scripts for training and inference
| ├ training | Folder with main training logic and the Ranger implementation from [lessw2020](https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer)
| ├ utils | Folder with various utility functions
| <img width=300> | <img>

## Credits
**StyleGAN2 model and implementation:**
https://github.com/rosinality/stylegan2-pytorch
Copyright (c) 2019 Kim Seonghyeon
License (MIT) https://github.com/rosinality/stylegan2-pytorch/blob/master/LICENSE

**IR-SE50 model and implementation:**
https://github.com/TreB1eN/InsightFace_Pytorch
Copyright (c) 2018 TreB1eN
License (MIT) https://github.com/TreB1eN/InsightFace_Pytorch/blob/master/LICENSE

**Ranger optimizer implementation:**
https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
License (Apache License 2.0) https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer/blob/master/LICENSE

**LPIPS model and implementation:**
https://github.com/S-aiueo32/lpips-pytorch
Copyright (c) 2020, Sou Uchida
License (BSD 2-Clause) https://github.com/S-aiueo32/lpips-pytorch/blob/master/LICENSE

**DEX VGG model and implementation:**
https://github.com/InterDigitalInc/HRFAE
Copyright (c) 2020, InterDigital R&D France
License https://github.com/InterDigitalInc/HRFAE/blob/master/LICENSE.txt

**pSp model and implementation:**
https://github.com/eladrich/pixel2style2pixel
Copyright (c) 2020 Elad Richardson, Yuval Alaluf
License https://github.com/eladrich/pixel2style2pixel/blob/master/LICENSE

## Acknowledgments
This code borrows heavily from [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel).

## Citation
If you use this code for your research, please cite our paper <a href="https://arxiv.org/abs/2102.02754">Only a Matter of Style: Age Transformation Using a Style-Based Regression Model</a>:

```
@article{alaluf2021matter,
author = {Alaluf, Yuval and Patashnik, Or and Cohen-Or, Daniel},
title = {Only a Matter of Style: Age Transformation Using a Style-Based Regression Model},
journal = {ACM Trans. Graph.},
issue_date = {August 2021},
volume = {40},
number = {4},
year = {2021},
articleno = {45},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3450626.3459805}
}
```