This model is the MaskGIT tokenizer with a 10-bit vocabulary (1024 codebook entries), adapted for use in the MaskBit codebase. It uses a downsampling factor of 16 and is trained on ImageNet at a resolution of 256.
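The numbers above fix the shape of the tokenizer's output. As a minimal sketch (plain arithmetic, not the model's actual API), the 256-pixel input and 16x downsampling give the token grid, and the 10-bit vocabulary gives the codebook size:

```python
# Token-grid arithmetic implied by the model card (values taken from the card).
resolution = 256   # input image resolution
downsample = 16    # spatial downsampling factor
vocab_bits = 10    # vocabulary size in bits

grid = resolution // downsample  # tokens per side of the latent grid
num_tokens = grid * grid         # discrete tokens per image
vocab_size = 2 ** vocab_bits     # number of codebook entries

print(grid, num_tokens, vocab_size)  # 16 256 1024
```

So each 256x256 image is represented as a 16x16 grid of 256 tokens, each drawn from a 1024-entry codebook.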
You can find more details in the original repository and in the paper. All credit for this model goes to Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T. Freeman.
Training dataset: ILSVRC/imagenet-1k
Evaluation results (self-reported, on ILSVRC/imagenet-1k):

| Metric | Value |
| --- | --- |
| rFID | 1.96 |
| Inception Score | 178.3 |
| LPIPS | 0.331 |
| PSNR | 18.6 dB |
| SSIM | 0.470 |
| Codebook Usage | 1.000 |