---
license: apache-2.0
language:
- en
---


# MP_Blendshapes

## Model Description
MP_Blendshapes has been ported to PyTorch from Google's MediaPipe library using Liam Schoneveld's GitHub [repository](https://github.com/nlml/deconstruct-mediapipe). The model takes as input a subset of 146 of the 478 landmarks produced by MediaPipe's FaceMeshV2 high-density landmark model. It outputs 52 blendshape scores, which are similar to ARKit's blendshapes and loosely correspond to the Facial Action Coding System.
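The preprocessing this implies, selecting the landmark subset and converting normalized coordinates to pixel coordinates, can be sketched with NumPy. The subset indices below are placeholders for illustration only, not the real `MP_BLENDSHAPE_MODEL_LANDMARKS_SUBSET`:

```python
import numpy as np

# Placeholder subset indices; the real model uses a fixed set of 146 indices
subset_idx = np.array([0, 1, 4, 5, 13])

# Landmarks come out of the mesh model as normalized (x, y, z) in [0, 1]
landmarks = np.random.rand(478, 3)
img_w, img_h = 256, 256

# Keep only (x, y), select the subset, and scale to pixel coordinates
xy = landmarks[subset_idx, :2] * np.array([img_w, img_h])
print(xy.shape)  # (5, 2)
```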

See the MediaPipe [model card](https://storage.googleapis.com/mediapipe-assets/Model%20Card%20Blendshape%20V2.pdf) for more details.

## Model Details
- **Model Type**: MLP-Mixer (ported from Keras)
- **Framework**: PyTorch

## Model Sources
- **Repository**: [GitHub Repository](https://github.com/cosanlab/py-feat)
- **Paper**: [MediaPipe blendshape model card](https://storage.googleapis.com/mediapipe-assets/Model%20Card%20Blendshape%20V2.pdf)

## Citation
If you use the mp_blendshapes model in your research or application, please cite the following paper:

Grishchenko, I., Ablavatski, A., Kartynnik, Y., Raveendran, K., & Grundmann, M. (2020). Attention mesh: High-fidelity face mesh prediction in real-time. arXiv preprint arXiv:2006.10962.

```bibtex
@article{grishchenko2020attention,
  title={Attention mesh: High-fidelity face mesh prediction in real-time},
  author={Grishchenko, Ivan and Ablavatski, Artsiom and Kartynnik, Yury and Raveendran, Karthik and Grundmann, Matthias},
  journal={arXiv preprint arXiv:2006.10962},
  year={2020}
}
```

## Example Usage

```python
import torch
import pandas as pd
from torchvision.io import read_image
from huggingface_hub import hf_hub_download
from feat.au_detectors.MP_Blendshapes.MP_Blendshapes_test import MediaPipeBlendshapesMLPMixer
from feat.utils import MP_BLENDSHAPE_MODEL_LANDMARKS_SUBSET, MP_BLENDSHAPE_NAMES

device = 'cpu'

# Load model and weights
blendshape_detector = MediaPipeBlendshapesMLPMixer()
model_path = hf_hub_download(repo_id="py-feat/mp_blendshapes", filename="face_blendshapes.pth")
blendshape_checkpoint = torch.load(model_path, map_location=device)
blendshape_detector.load_state_dict(blendshape_checkpoint)
blendshape_detector.eval()
blendshape_detector.to(device)

# Load an extracted face image as a tensor
face_image = read_image("path/to/your/test_image.jpg")  # Replace with your extracted face image
face_image_height, face_image_width = face_image.shape[-2:]

# Extract landmarks (replace the path with your local copy of the landmark model)
landmark_detector = torch.load("path/to/face_landmarks_detector_Nx3x256x256_onnx.pth", weights_only=False)
landmark_detector.eval()
landmark_detector.to(device)
landmark_results = landmark_detector(face_image.unsqueeze(0).float().to(device))

# Blendshape classification: keep (x, y) and rescale to pixel coordinates
landmarks = landmark_results[0].reshape(1, 478, 3)[:, :, :2]
img_size = torch.tensor((face_image_width, face_image_height)).unsqueeze(0).unsqueeze(0)
landmarks = landmarks * img_size
blendshapes = blendshape_detector(landmarks)
blendshape_results = pd.Series(blendshapes.squeeze().detach().numpy(), index=MP_BLENDSHAPE_NAMES)
```
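Once the scores are in a pandas Series, the most active blendshapes are easy to inspect. A minimal sketch with synthetic scores and a few example names; the real output has 52 entries indexed by `MP_BLENDSHAPE_NAMES`:

```python
import pandas as pd

# Synthetic blendshape scores for illustration only
scores = pd.Series(
    [0.05, 0.81, 0.12, 0.64],
    index=["browDownLeft", "jawOpen", "eyeBlinkRight", "mouthSmileLeft"],
)

# Largest scores indicate the most active blendshapes for this face
top2 = scores.nlargest(2)
print(top2.index.tolist())  # ['jawOpen', 'mouthSmileLeft']
```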