Attention, Please! Revisiting Attentive Probing for Masked Image Modeling Paper • 2506.10178 • Published Jun 11 • 8
Boosting Generative Image Modeling via Joint Image-Feature Synthesis Paper • 2504.16064 • Published Apr 22 • 14
Advancing Semantic Future Prediction through Multimodal Visual Sequence Transformers Paper • 2501.08303 • Published Jan 14
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling Paper • 2502.09509 • Published Feb 13 • 8
Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit? Paper • 2309.06891 • Published Sep 13, 2023 • 2
What to Hide from Your Students: Attention-Guided Masked Image Modeling Paper • 2203.12719 • Published Mar 23, 2022
Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing Paper • 2104.01375 • Published Apr 3, 2021
SPOT: Self-Training with Patch-Order Permutation for Object-Centric Learning with Autoregressive Transformers Paper • 2312.00648 • Published Dec 1, 2023