|
--- |
|
license: cc-by-4.0 |
|
pipeline_tag: image-to-image |
|
tags: |
|
- pytorch |
|
- super-resolution |
|
--- |
|
|
|
[Link to GitHub Release](https://github.com/Phhofm/models/releases/tag/2xLexicaRRDBNet)
|
|
|
# 2xLexicaRRDBNet_Sharp |
|
|
|
Name: 2xLexicaRRDBNet_Sharp |
|
Author: Philip Hofmann |
|
Release Date: 01.06.2023 |
|
License: CC BY 4.0 |
|
Network: RRDBNet |
|
Scale: 2 |
|
Purpose: Upscaling AI-generated images - a bit sharper than the 2xLexicaRRDBNet model
|
Iterations: 220'000 |
|
batch_size: 4 |
|
HR_size: 128 |
|
Epoch: 18 (10964 iterations per epoch)
|
Dataset: lexica-aperture-v3-small |
|
Number of train images: 43856 |
|
OTF Training: No |
|
Pretrained_Model_G: None |
|
|
|
Description: This model is like 2xLexicaRRDBNet, but trained for additional iterations with l1_gt_usm and percep_gt_usm set to true, resulting in sharper outputs. I provide both models so users can choose based on their preference.
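
Below is a minimal PyTorch inference sketch, not an official usage script. It assumes the checkpoint follows the standard BasicSR/Real-ESRGAN RRDBNet layout (23 blocks, 64 features, 2x with pixel-unshuffled input) and that the weights are stored either directly or under a `params`/`params_ema` key; the checkpoint file name and image paths are placeholders.

```python
# Hypothetical inference sketch for an ESRGAN-style 2x RRDBNet checkpoint.
import cv2
import numpy as np
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard RRDBNet configuration for a 2x ESRGAN-style model (assumption).
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23,
                num_grow_ch=32, scale=2)

# Load the weights; some checkpoints nest them under 'params' or 'params_ema'.
state = torch.load("2xLexicaRRDBNet_Sharp.pth", map_location=device)
if "params_ema" in state:
    state = state["params_ema"]
elif "params" in state:
    state = state["params"]
model.load_state_dict(state, strict=True)
model.eval().to(device)

# Read a BGR image and convert it to a normalized RGB tensor of shape (1, 3, H, W).
# For scale-2 RRDBNet the height and width should typically be even.
img = cv2.imread("input.png", cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
lr = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device)

with torch.no_grad():
    sr = model(lr).clamp_(0, 1)

# Back to an 8-bit BGR image and save the 2x result.
out = (sr.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255.0).round().astype(np.uint8)
cv2.imwrite("output.png", cv2.cvtColor(out, cv2.COLOR_RGB2BGR))
```

Alternatively, the model can be used through chaiNNer or any other tool that auto-detects ESRGAN/RRDBNet checkpoints.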
|
|
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
 |
|
|