---
language: en
tags:
- Computer Vision
- Machine Learning
- Deep Learning
---

# FExGAN-Meta: Facial Expression Generation with Meta-Humans

![FExGAN-Meta GIF Demo](https://github.com/azadlab/FExGAN-Meta/blob/master/FExGAN-Meta.gif?raw=true)

This is a demo of FExGAN-Meta, proposed in the following article:

[FExGAN-Meta: Facial Expression Generation with Meta-Humans](https://www.arxiv.com)

FExGAN-Meta is an extension of [FExGAN](http://arxiv.org/abs/2201.09061). It takes as input an image of a Meta-Human and a vector of the desired affect (i.e. angry, disgust, sad, surprise, joy, neutral, or fear) and converts the input image to the desired emotion while preserving the identity of the original image.

![FExGAN-Meta Results](https://github.com/azadlab/FExGAN-Meta/blob/master/results.png?raw=true)

# Requirements

In order to run this you need the following:

* Python >= 3.7
* Tensorflow >= 2.6
* CUDA-enabled GPU with memory >= 8 GB (e.g. GTX1070/GTX1080)

# Usage

Code: https://www.github.com/azadlab/FExGAN-Meta

A hedged inference sketch is provided at the end of this README.

# Citation

If you use any part of this code or the ideas mentioned in the paper, please cite the following article.

```
@article{Siddiqui_FExGAN-Meta_2022,
  author        = {{Siddiqui}, J. Rafid},
  title         = {{FExGAN-Meta: Facial Expression Generation with Meta-Humans}},
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  keywords      = {Deep Learning, GAN, Facial Expressions},
  year          = {2022},
  url           = {http://arxiv.org/abs/2201.09061},
}
```
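
# Example (sketch)

The repository above contains the full training and inference code. As a rough orientation only, the following is a minimal sketch of how a generator of this kind could be applied at inference time. The model file name, input resolution, affect ordering, and the (image, affect-vector) call signature are assumptions for illustration, not the repository's actual API.

```python
# Hypothetical sketch of applying a FExGAN-Meta-style generator at inference time.
# Assumptions (not taken from the repository): the saved model file name, the
# 7-dimensional one-hot affect ordering, and the 128x128 input resolution.

import numpy as np
import tensorflow as tf

AFFECTS = ["angry", "disgust", "sad", "surprise", "joy", "neutral", "fear"]
IMG_SIZE = 128  # assumed input resolution

def load_image(path):
    """Read an image, resize it, and scale pixels to [-1, 1]."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, expand_animations=False)
    img = tf.image.resize(img, (IMG_SIZE, IMG_SIZE))
    return tf.cast(img, tf.float32) / 127.5 - 1.0

def affect_vector(name):
    """One-hot encode the desired affect."""
    vec = np.zeros((1, len(AFFECTS)), dtype=np.float32)
    vec[0, AFFECTS.index(name)] = 1.0
    return tf.constant(vec)

# Load a trained generator saved as a Keras model (hypothetical file name).
generator = tf.keras.models.load_model("fexgan_meta_generator.h5", compile=False)

src = tf.expand_dims(load_image("metahuman_neutral.png"), 0)  # shape (1, H, W, 3)
target = affect_vector("joy")

# The generator is assumed to take (image, affect vector) and return an image
# of the same identity showing the target expression, with pixels in [-1, 1].
out = generator([src, target], training=False)
out = tf.cast((out[0] + 1.0) * 127.5, tf.uint8)
tf.io.write_file("metahuman_joy.png", tf.io.encode_png(out))
```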