
# :milky_way: Training Procedures

English | ็ฎ€ไฝ“ไธญๆ–‡

## Preparing Dataset

- Download training dataset: FFHQ. The original FFHQ images are 1024×1024; see the resizing sketch below if you need a 512-resolution copy.
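
The stage configs here reference a 512 resolution (e.g. `options/VQGAN_512_ds32_nearest_stage1.yml`), so you may need a downscaled copy of the dataset. A minimal sketch, with illustrative paths (`datasets/ffhq/...` is an assumption, not a path this project prescribes):

    # Resize a 1024x1024 FFHQ download to 512x512 (paths are illustrative).
    from pathlib import Path
    from PIL import Image

    src = Path("datasets/ffhq/images1024")   # assumed download location
    dst = Path("datasets/ffhq/images512")    # assumed output location
    dst.mkdir(parents=True, exist_ok=True)
    for p in sorted(src.glob("*.png")):
        Image.open(p).resize((512, 512), Image.LANCZOS).save(dst / p.name)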

## Training

For PyTorch versions >= 1.10, please replace `python -m torch.distributed.launch` in the commands below with `torchrun`.
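
For example, with 8 GPUs (`gpu_num` in the commands below is a placeholder for the number of GPUs on your machine), the Stage I command becomes:

    torchrun --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/VQGAN_512_ds32_nearest_stage1.yml --launcher pytorch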

๐Ÿ‘พ Stage I - VQGAN

- Training VQGAN:

    python -m torch.distributed.launch --nproc_per_node=gpu_num --master_port=4321 basicsr/train.py -opt options/VQGAN_512_ds32_nearest_stage1.yml --launcher pytorch

- After VQGAN training, you can pre-calculate the code sequences of the training dataset to speed up the later training stages (a conceptual sketch of this step follows the list):

    python scripts/generate_latent_gt.py

- If you don't need to train your own VQGAN, you can find the pre-trained VQGAN (`vqgan_code1024.pth`) and the corresponding code sequence (`latent_gt_code1024.pth`) under Releases v0.1.0: https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0
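
To make the pre-calculation step above concrete, here is a conceptual sketch of what generating a code sequence involves: encode each training image with the frozen VQGAN encoder and record the index of the nearest codebook entry at every spatial position. All names here (`encoder`, `codebook`, `compute_code_sequence`) are illustrative assumptions, not the actual API of `scripts/generate_latent_gt.py`:

    # Conceptual sketch only; scripts/generate_latent_gt.py is the real implementation.
    import torch

    @torch.no_grad()
    def compute_code_sequence(encoder, codebook, image):
        # encoder:  frozen VQGAN encoder (images -> latent feature maps)
        # codebook: (num_codes, code_dim) embedding matrix of the trained VQGAN
        # image:    (B, 3, H, W) batch of training images
        z = encoder(image)                           # (B, C, h, w) latents
        b, c, h, w = z.shape
        z = z.permute(0, 2, 3, 1).reshape(-1, c)     # (B*h*w, C)
        # squared L2 distance from each latent vector to every codebook entry
        dist = (z.pow(2).sum(1, keepdim=True)
                - 2.0 * z @ codebook.t()
                + codebook.pow(2).sum(1))
        return dist.argmin(dim=1).reshape(b, h * w)  # one code index per position

Storing these indices once lets the later stages read ground-truth code sequences from disk instead of running the VQGAN encoder on every iteration.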

๐Ÿš€ Stage II - CodeFormer (w=0)

- Training the Code Sequence Prediction Module (a sketch of the training objective follows this list):

    python -m torch.distributed.launch --nproc_per_node=gpu_num --master_port=4322 basicsr/train.py -opt options/CodeFormer_stage2.yml --launcher pytorch

- The pre-trained Stage II CodeFormer (`codeformer_stage2.pth`) can be found under Releases v0.1.0: https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0
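
For orientation: the CodeFormer paper trains this stage as a Transformer that predicts the pre-computed code sequence from low-quality inputs, supervised with token-level cross-entropy. A hedged sketch of such a loss (names and shapes are assumptions, not the project's training code):

    import torch.nn.functional as F

    def code_prediction_loss(logits, gt_indices):
        # logits:     (B, seq_len, num_codes) Transformer predictions
        # gt_indices: (B, seq_len) code sequence from generate_latent_gt.py
        return F.cross_entropy(logits.reshape(-1, logits.shape[-1]),
                               gt_indices.reshape(-1))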

๐Ÿ›ธ Stage III - CodeFormer (w=1)

- Training the Controllable Module (a sketch of the controllable feature transformation follows this list):

    python -m torch.distributed.launch --nproc_per_node=gpu_num --master_port=4323 basicsr/train.py -opt options/CodeFormer_stage3.yml --launcher pytorch

- The pre-trained CodeFormer (`codeformer.pth`) can be found under Releases v0.1.0: https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0
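
The controllable module corresponds to the controllable feature transformation (CFT) described in the CodeFormer paper: encoder features predict a per-pixel scale and shift that modulate the decoder features, weighted by the fidelity parameter w (fixed to 1 during this stage). A minimal sketch under those assumptions; the layer shapes and names are illustrative, not the project's architecture code:

    import torch
    import torch.nn as nn

    class CFT(nn.Module):
        # Controllable feature transformation: dec_feat + w * (alpha * dec_feat + beta)
        def __init__(self, channels):
            super().__init__()
            # predict per-pixel scale (alpha) and shift (beta) from the
            # concatenated encoder and decoder features
            self.to_scale_shift = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)

        def forward(self, dec_feat, enc_feat, w=1.0):
            alpha, beta = self.to_scale_shift(
                torch.cat([enc_feat, dec_feat], dim=1)).chunk(2, dim=1)
            # w = 0 ignores the degraded-input features (as in Stage II);
            # w = 1 applies the full modulation for maximum input fidelity
            return dec_feat + w * (alpha * dec_feat + beta)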


:whale: This project is built on the BasicSR framework. For detailed information on training, resuming, and other related topics, please refer to the BasicSR documentation: https://github.com/XPixelGroup/BasicSR/blob/master/docs/TrainTest.md