MARLIN: Masked Autoencoder for facial video Representation LearnINg
Abstract
This paper proposes a self-supervised approach to learning universal facial representations from videos that can transfer across a variety of facial analysis tasks such as Facial Attribute Recognition (FAR), Facial Expression Recognition (FER), DeepFake Detection (DFD), and Lip Synchronization (LS). Our proposed framework, named MARLIN, is a facial video masked autoencoder that learns highly robust and generic facial embeddings from abundantly available, non-annotated, web-crawled facial videos. As a challenging auxiliary task, MARLIN reconstructs the spatio-temporal details of the face from densely masked facial regions, which mainly include the eyes, nose, mouth, lips, and skin, to capture local and global aspects that in turn help in encoding generic and transferable features. Through a variety of experiments on diverse downstream tasks, we demonstrate that MARLIN is an excellent facial video encoder and feature extractor that performs consistently well across downstream tasks including FAR (1.13% gain over the supervised benchmark), FER (2.64% gain over the unsupervised benchmark), DFD (1.86% gain over the unsupervised benchmark), and LS (29.36% gain in Fréchet Inception Distance), even in low-data regimes. Our code and models are available at https://github.com/ControlNet/MARLIN .
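The mask-and-reconstruct objective described in the abstract can be sketched minimally as follows. This is a simplified illustration, not MARLIN's implementation: the uniform random masking stands in for MARLIN's facial-region-guided masking, and the function names and the scalar patch representation are hypothetical.

```python
import random


def mask_patches(num_patches, mask_ratio=0.9, seed=0):
    """Randomly split patch indices into masked and visible sets.

    Video masked autoencoders typically mask a large fraction of
    spatio-temporal patches (e.g. 90%); MARLIN additionally guides the
    masking toward facial regions, which this uniform sampler omits.
    """
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    indices = list(range(num_patches))
    rng.shuffle(indices)
    return sorted(indices[:num_masked]), sorted(indices[num_masked:])


def reconstruction_loss(pred, target, masked_idx):
    """Mean squared error computed only over the masked patches,
    the standard masked-autoencoder reconstruction objective."""
    errs = [(pred[i] - target[i]) ** 2 for i in masked_idx]
    return sum(errs) / len(errs)
```

In practice the encoder sees only the visible patches and a lightweight decoder predicts the masked ones; restricting the loss to masked positions is what makes the reconstruction task non-trivial.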