---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/4xRealWebPhoto_v4_drct-l)

## 4xRealWebPhoto_v4_drct-l

**Scale:** 4
**Architecture:** [DRCT](https://github.com/ming053l/DRCT)
**Architecture Option:** [DRCT-L](https://github.com/ming053l/DRCT/blob/8c13e63135f494913cd504c073e41ef52250d1d4/options/train/train_DRCT-L_SRx4_finetune_from_ImageNet_pretrain.yml#L56)

**Author:** Philip Hofmann
**License:** CC-BY-4.0
**Purpose:** Restoration
**Subject:** Realistic, Photography
**Input Type:** Images
**Release Date:** 02.05.2024

**Dataset:** 4xRealWebPhoto_v4
**Dataset Size:** 8492
**OTF (on the fly augmentations):** No
**Pretrained Model:** 4xmssim_drct-l_pretrain
**Iterations:** 260'000
**Batch Size:** 6, 4
**GT Size:** 128, 192

**Description:**

This is the first real-world DRCT model, or at least my attempt at one, so I am releasing it; others may be able to get better results than me. If you want a real-world model for upscaling photos downloaded from the web, I would recommend my [4xRealWebPhoto_v3_atd](https://github.com/Phhofm/models/releases/tag/4xRealWebPhoto_v3_atd) model over this one. This model is based on my previously released DRCT pretrain (4xmssim_drct-l_pretrain). Training used mixup, cutmix, and resizemix augmentations, together with MS-SSIM, perceptual, GAN, DISTS, LDL, focal frequency, gradient variance, color, and luma losses.

**Showcase:**

[Slow.pics](https://slow.pics/s/VOKVChT9)

![Example 1](https://github.com/Phhofm/models/assets/14755670/03960e32-93a6-4f37-9ecc-8d3f6244836e)
![Example 2](https://github.com/Phhofm/models/assets/14755670/58ffccdb-bd1c-4d25-81d1-c849b91356c5)
![Example 3](https://github.com/Phhofm/models/assets/14755670/35b3f3d8-3b8c-4005-b545-e332e4c5d3c3)
![Example 4](https://github.com/Phhofm/models/assets/14755670/17e7ac07-608a-46bf-b115-3896b7b3c666)
![Example 5](https://github.com/Phhofm/models/assets/14755670/9f81b422-33aa-4b69-91bf-85b591fbf8a9)
![Example 6](https://github.com/Phhofm/models/assets/14755670/5274703f-cd23-4b4f-93f1-5edc166e355c)
![Example 7](https://github.com/Phhofm/models/assets/14755670/52d1f8d0-4504-4bdc-8e4a-ca2849efb12b)
![Example 8](https://github.com/Phhofm/models/assets/14755670/491bf5fe-71f9-40d0-a77f-eaaa36f47b33)
![Example 9](https://github.com/Phhofm/models/assets/14755670/7a9cec4b-1b6c-4a1c-94b0-1b586f6c3fea)
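
**Usage:**

For reference, below is a minimal inference sketch using the community [spandrel](https://github.com/chaiNNer-org/spandrel) model loader, which supports the DRCT architecture. This is an assumption on my part, not an official part of this release: the checkpoint and image file names are placeholders, and you can equally run the model through tools like chaiNNer.

```python
# Minimal 4x upscaling sketch (assumption: spandrel can load this DRCT-L
# checkpoint; file names below are hypothetical placeholders).
import numpy as np
import torch
from PIL import Image
from spandrel import ImageModelDescriptor, ModelLoader

# Load the released checkpoint.
model = ModelLoader().load_from_file("4xRealWebPhoto_v4_drct-l.pth")
assert isinstance(model, ImageModelDescriptor)  # single-image SR model
model.cuda().eval()

# Read an RGB image and convert it to a BCHW float tensor in [0, 1].
img = Image.open("input.png").convert("RGB")
lr = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().div(255.0)
lr = lr.unsqueeze(0).cuda()

# Run the model; the output is a BCHW tensor at 4x the input resolution.
with torch.no_grad():
    sr = model(lr)

# Convert back to an 8-bit image and save.
out = sr.squeeze(0).clamp(0, 1).mul(255.0).byte().permute(1, 2, 0).cpu().numpy()
Image.fromarray(out).save("output.png")
```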