Update README.md

README.md CHANGED
@@ -1,178 +1,10 @@
***If use of this code violates the legal or ethical requirements of the user's country or region, this code repository is exempt from liability.***
## Preparation

### Installation

```bash
# clone project
git clone https://github.com/mike9251/simswap-inference-pytorch
cd simswap-inference-pytorch

# [OPTIONAL] create conda environment
conda create -n myenv python=3.9
conda activate myenv

# install pytorch and torchvision according to instructions
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt
```
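The PyTorch step is left open above because the right wheel depends on your OS and CUDA version. As one concrete possibility (the CUDA 11.3 index URL below is an assumption; copy the exact command for your setup from https://pytorch.org/get-started/):

```bash
# example only: CUDA 11.3 wheels; pick the command matching your platform
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```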
### Important

Face detection will be performed on the CPU. To run it on a GPU, you need to install the ONNX GPU runtime:

```pip install onnxruntime-gpu==1.11.1```
and modify one line of code in ```...Anaconda3\envs\myenv\Lib\site-packages\insightface\model_zoo\model_zoo.py```.

Here, instead of passing **None** as the second argument to the ONNX inference session:
```python
class ModelRouter:
    def __init__(self, onnx_file):
        self.onnx_file = onnx_file

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, None)
        input_cfg = session.get_inputs()[0]
```
pass a list of providers:

```python
class ModelRouter:
    def __init__(self, onnx_file):
        self.onnx_file = onnx_file

    def get_model(self):
        session = onnxruntime.InferenceSession(
            self.onnx_file,
            providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
        )
        input_cfg = session.get_inputs()[0]
```

Otherwise, simply use the CPU ONNX runtime with only a minor performance drop.
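To confirm that the GPU provider is actually available before editing `model_zoo.py`, you can query the installed runtime (a quick sanity check via onnxruntime's public API):

```python
import onnxruntime

# Lists the execution providers compiled into the installed package.
# With onnxruntime-gpu, 'CUDAExecutionProvider' should appear in this
# list alongside 'CPUExecutionProvider'.
print(onnxruntime.get_available_providers())
```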
### Weights

#### Weights for all models are downloaded automatically.

You can also download the weights manually and put them inside the `weights` folder:
- weights/<a href="https://github.com/mike9251/simswap-inference-pytorch/releases/download/weights/face_detector_scrfd_10g_bnkps.onnx">face_detector_scrfd_10g_bnkps.onnx</a>
- weights/<a href="https://github.com/mike9251/simswap-inference-pytorch/releases/download/weights/arcface_net.jit">arcface_net.jit</a>
- weights/<a href="https://github.com/mike9251/simswap-inference-pytorch/releases/download/weights/parsing_model_79999_iter.pth">79999_iter.pth</a>
- weights/<a href="https://github.com/mike9251/simswap-inference-pytorch/releases/download/weights/simswap_224_latest_net_G.pth">simswap_224_latest_net_G.pth</a> - official 224x224 model
- weights/<a href="https://github.com/mike9251/simswap-inference-pytorch/releases/download/weights/simswap_512_390000_net_G.pth">simswap_512_390000_net_G.pth</a> - unofficial 512x512 model (I took it from <a href="https://github.com/neuralchen/SimSwap/issues/255">here</a>).
- weights/<a href="https://github.com/mike9251/simswap-inference-pytorch/releases/download/v1.1/GFPGANv1.4_ema.pth">GFPGANv1.4_ema.pth</a>
- weights/<a href="https://github.com/mike9251/simswap-inference-pytorch/releases/download/v1.2/blend_module.jit">blend_module.jit</a>
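For example, to fetch one of these files manually (assuming `wget` is available; the URLs are the release links above):

```bash
mkdir -p weights
wget -P weights https://github.com/mike9251/simswap-inference-pytorch/releases/download/weights/arcface_net.jit
```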
## Inference

### Web App

```bash
streamlit run app_web.py
```
### Command line App

This repository supports inference in several modes, which can be easily configured with the config files in the **configs** folder.

- **replace all faces in a target image / folder of images**

```bash
python app.py --config-name=run_image.yaml
```

- **replace all faces in a video**

```bash
python app.py --config-name=run_video.yaml
```

- **replace a specific face in a target image / folder of images**

```bash
python app.py --config-name=run_image_specific.yaml
```

- **replace a specific face in a video**

```bash
python app.py --config-name=run_video_specific.yaml
```
Config files contain two main parts (a sketch of a full config follows the parameter list below):

- **data**
  - *id_image* - source image; the identity of this person will be transferred.
  - *att_image* - target image; the attributes of the person in this image will be mixed with the identity from the source image. You can also specify a folder with multiple images here - identity translation will be applied to all images in the folder.
  - *specific_id_image* - an image of the specific person in *att_image* you would like to replace, leaving any other people untouched.
  - *att_video* - the same as *att_image*, but for video.
  - *clean_work_dir* - whether to remove the temporary folder with extracted frames (video configs only).
- **pipeline**
  - *face_detector_weights* - path to the weights file OR an empty string ("") for automatic weights downloading.
  - *face_id_weights* - path to the weights file OR an empty string ("") for automatic weights downloading.
  - *parsing_model_weights* - path to the weights file OR an empty string ("") for automatic weights downloading.
  - *simswap_weights* - path to the weights file OR an empty string ("") for automatic weights downloading.
  - *gfpgan_weights* - path to the weights file OR an empty string ("") for automatic weights downloading.
  - *device* - whether to run the application on the GPU or the CPU.
  - *crop_size* - size of the face crops the SimSwap model works with.
  - *checkpoint_type* - the official model works with 224x224 crops and uses different pre/post-processing (ImageNet-like). The latest official repository lets you train your own models, but the architecture and pre/post-processing differ slightly (1. Tanh removed from the last layer; 2. normalization to the [0...1] range). **If you run the official 224x224 model, set this parameter to "official_224"; otherwise set it to "none".**
  - *face_alignment_type* - affects the reference face keypoint coordinates. **Possible values are "ffhq" and "none". Try both of them to see which one works better for your data.**
  - *smooth_mask_kernel_size* - a non-zero value used to attenuate the post-processing mask size. You might want to play with this parameter.
  - *smooth_mask_iter* - a non-zero value; the number of times the face mask is smoothed.
  - *smooth_mask_threshold* - controls the face mask saturation. Valid values are in the range [0.0...1.0]. Tune this parameter if there are artifacts around swapped faces.
  - *face_detector_threshold* - values in the range [0.0...1.0]. A higher value reduces the probability of false-positive detections but increases the probability of false negatives.
  - *specific_latent_match_threshold* - values in the range [0.0...inf]. Usually takes small values around 0.05.
  - *enhance_output* - whether to apply the GFPGAN model as a post-processing step.
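Putting the two parts together, a config such as `configs/run_image.yaml` might look roughly like the sketch below. The field names follow the parameter list above, but the values are illustrative assumptions rather than the repository's shipped defaults:

```yaml
data:
  id_image: "demo/id.jpg"        # hypothetical example path
  att_image: "demo/att.jpg"      # hypothetical example path
  specific_id_image: ""

pipeline:
  face_detector_weights: ""      # "" -> weights are downloaded automatically
  face_id_weights: ""
  parsing_model_weights: ""
  simswap_weights: ""
  gfpgan_weights: ""
  device: "cuda"
  crop_size: 224
  checkpoint_type: "official_224"
  face_alignment_type: "none"
  smooth_mask_kernel_size: 17
  smooth_mask_iter: 7
  smooth_mask_threshold: 0.9
  face_detector_threshold: 0.6
  specific_latent_match_threshold: 0.05
  enhance_output: true
```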
### Overriding parameters with CMD

Every parameter in a config file can be overridden by specifying it directly on the command line. For example:

```bash
python app.py --config-name=run_image.yaml data.specific_id_image="path/to/the/image" pipeline.smooth_mask_kernel_size=17
```
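Multiple overrides compose in one call, so you can change runtime behaviour without editing any config. For instance, to force CPU execution and skip the GFPGAN post-processing (parameter names are from the list above; the exact accepted value strings are assumptions):

```bash
python app.py --config-name=run_video.yaml pipeline.device=cpu pipeline.enhance_output=false
```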
## Video

<details>
<summary><b>Official 224x224 model, face alignment "none"</b></summary>

[Watch on Vimeo](https://vimeo.com/728346715)

</details>

<details>
<summary><b>Official 224x224 model, face alignment "ffhq"</b></summary>

[Watch on Vimeo](https://vimeo.com/728348520)

</details>

<details>
<summary><b>Unofficial 512x512 model, face alignment "none"</b></summary>

[Watch on Vimeo](https://vimeo.com/728346542)

</details>

<details>
<summary><b>Unofficial 512x512 model, face alignment "ffhq"</b></summary>

[Watch on Vimeo](https://vimeo.com/728349219)

</details>
## License

For academic and non-commercial use only. The whole project is under the CC-BY-NC 4.0 license. See [LICENSE](https://github.com/neuralchen/SimSwap/blob/main/LICENSE) for additional details.
## Acknowledgements

<!--ts-->
* [SimSwap](https://github.com/neuralchen/SimSwap)
* [Insightface](https://github.com/deepinsight/insightface)
* [Face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch)
* [BiSeNet](https://github.com/CoinCheung/BiSeNet)
* [GFPGAN](https://github.com/TencentARC/GFPGAN)
<!--te-->
---
title: My Awesome Model
emoji: 🚀
colorFrom: blue
colorTo: indigo
sdk: streamlit
sdk_version: 1.0.0
app_file: my_app.py
pinned: true
---