---
license: apache-2.0
---
# EgoBlur

![video](https://video-sjc3-1.xx.fbcdn.net/o1/v/t2/f2/m69/AQPbmALjENAtAB1RHFnpKxX5CA7OZ9MOyOiCmipt2Rco6369KRQ5Lz3xId-FuNne0MYU2JLb4y1VLQ0EBV-nXUd9.mp4?efg=eyJ2ZW5jb2RlX3RhZyI6Im9lcF9oZCJ9&_nc_ht=video-sjc3-1.xx.fbcdn.net&_nc_cat=107&strext=1&vs=1c68365c099d8976&_nc_vs=HBkcFQIYOnBhc3N0aHJvdWdoX2V2ZXJzdG9yZS9HR21HT0J1M1ExX2lLd1VIQUtMWEZmcG42Q2hlYm1kakFBQUYVAALIAQBLB4gScHJvZ3Jlc3NpdmVfcmVjaXBlATENc3Vic2FtcGxlX2ZwcwAQdm1hZl9lbmFibGVfbnN1YgAgbWVhc3VyZV9vcmlnaW5hbF9yZXNvbHV0aW9uX3NzaW0AKGNvbXB1dGVfc3NpbV9vbmx5X2F0X29yaWdpbmFsX3Jlc29sdXRpb24AHXVzZV9sYW5jem9zX2Zvcl92cW1fdXBzY2FsaW5nABFkaXNhYmxlX3Bvc3RfcHZxcwAVACUAHIwXQAAAAAAAAAAREQAAACai_vaUjP2fARUCKAJDMxgLdnRzX3ByZXZpZXccF0A5Sn752yLRGBlkYXNoX2gyNjQtYmFzaWMtZ2VuMl83MjBwEgAYGHZpZGVvcy52dHMuY2FsbGJhY2sucHJvZDgSVklERU9fVklFV19SRVFVRVNUGwqIFW9lbV90YXJnZXRfZW5jb2RlX3RhZwZvZXBfaGQTb2VtX3JlcXVlc3RfdGltZV9tcwEwDG9lbV9jZmdfcnVsZQd1bm11dGVkE29lbV9yb2lfcmVhY2hfY291bnQBMBFvZW1faXNfZXhwZXJpbWVudAAMb2VtX3ZpZGVvX2lkDzIxOTYzNzUzNDA5MjA0MRJvZW1fdmlkZW9fYXNzZXRfaWQQMTA1Mjc2MDUwOTQyNTY4NhVvZW1fdmlkZW9fcmVzb3VyY2VfaWQPMzUxNzkzODEzODM5NzYxHG9lbV9zb3VyY2VfdmlkZW9fZW5jb2RpbmdfaWQPNDc0NTg3ODkyMDQyMjUzDnZ0c19yZXF1ZXN0X2lkACUCHAAlvgEbB4gBcwQzNTE4AmNkCjIwMjMtMDgtMzADcmNiATADYXBwBVZpZGVvAmN0EUNNU19NRURJQV9NQU5BR0VSE29yaWdpbmFsX2R1cmF0aW9uX3MJMjUuMjkxOTMzAnRzFXByb2dyZXNzaXZlX2VuY29kaW5ncwA&ccb=9-4&oh=00_AYBqUXQt3jxOc3aVQMGUUVXoKK5DUzqwXgpq0O9c0NRegA&oe=66EBBAA2&_nc_sid=1d576d&_nc_rid=317103633779380&_nc_store_type=1)

## Introduction
EgoBlur is a state-of-the-art obfuscation system developed by Meta as part of its commitment to responsible innovation. It is designed to protect privacy while accelerating AI and ML research, and is available under an open-source license (Apache 2.0) for both commercial and non-commercial use.
EgoBlur was first developed for Meta's Project Aria program and has been used internally since 2020. The system uses advanced face and license plate obfuscation techniques to help maintain privacy while still allowing researchers to access valuable data.

With EgoBlur, researchers can work on AI and ML projects without compromising the privacy of those around them. This is especially important in applications such as computer vision, where large amounts of data are often required to train models effectively.
We believe that EgoBlur will be a valuable tool for the external research community, and we are excited to make it available under an open-source license. By using EgoBlur, researchers can focus on advancing AI and ML technology while also maintaining the trust of the public.

## Demo

This repository contains a demo of the [EgoBlur models](https://www.projectaria.com/tools/egoblur) with visualizations.

## Installation

This code requires `conda>=23.1.0` to install dependencies and create a virtual environment in which to execute the code. Please follow the instructions [here](https://docs.anaconda.com/free/anaconda/install/index.html) to install Anaconda on your machine.
The dependencies are listed in the `environment.yaml` file. To install them and create the environment, run:

```
conda env create --file=environment.yaml

# After installation, check PyTorch.
conda activate ego_blur
python
>>> import torch
>>> torch.__version__
'1.12.1'
>>> torch.cuda.is_available()
True
```

Please note that this code can run on both CPU and GPU, but installing both PyTorch and TorchVision with CUDA support is strongly recommended.

## Getting Started

First, download the zipped models from the links below. The extracted models can then be passed as inputs to the CLI.

| Model | Download link |
| ----- | ----- |
| ego\_blur\_face | [ego\_blur\_website](https://www.projectaria.com/tools/egoblur) |
| ego\_blur\_lp | [ego\_blur\_website](https://www.projectaria.com/tools/egoblur) |

### CLI options

A brief description of the CLI arguments:

- `--face_model_path`: absolute path to the EgoBlur face model file. You MUST provide `--face_model_path`, `--lp_model_path`, or both; if neither is provided, the code throws a `ValueError`.
- `--face_model_score_threshold`: score threshold used to filter out low-confidence face detections. Values must be between 0.0 and 1.0; defaults to 0.1.
- `--lp_model_path`: absolute path to the EgoBlur license plate model file. You MUST provide `--face_model_path`, `--lp_model_path`, or both; if neither is provided, the code throws a `ValueError`.
- `--lp_model_score_threshold`: score threshold used to filter out low-confidence license plate detections. Values must be between 0.0 and 1.0; defaults to 0.1.
- `--nms_iou_threshold`: NMS IoU threshold used to filter out overlapping boxes. Values must be between 0.0 and 1.0; defaults to 0.3.
- `--scale_factor_detections`: scales detections by the given factor so that a larger or smaller area is blurred. Values must be positive real numbers, e.g. 0.9 (values < 1) scales the predicted blur region DOWN by 10%, whereas 1.1 (values > 1) scales it UP by 10%.
- `--input_image_path`: absolute path of the image on which to run detection and blurring. You MUST provide `--input_image_path`, `--input_video_path`, or both; if neither is provided, the code throws a `ValueError`.
- `--output_image_path`: absolute path where the blurred image is stored. You MUST provide `--output_image_path` together with `--input_image_path`; otherwise the code throws a `ValueError`.
- `--input_video_path`: absolute path of the video on which to run detection and blurring. You MUST provide `--input_image_path`, `--input_video_path`, or both; if neither is provided, the code throws a `ValueError`.
- `--output_video_path`: absolute path where the blurred video is stored. You MUST provide `--output_video_path` together with `--input_video_path`; otherwise the code throws a `ValueError`.
- `--output_video_fps`: FPS of the output video. Values must be positive integers; defaults to 30.

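The mutual-requirement rules described above can be sketched as a small validation function. This is a hypothetical sketch with attribute names matching the CLI flags; the actual checks live in `script/demo_ego_blur.py` and may differ:

```python
import argparse


def validate_args(args: argparse.Namespace) -> None:
    """Enforce the CLI's mutual-requirement rules (illustrative sketch)."""
    # At least one model path is required.
    if args.face_model_path is None and args.lp_model_path is None:
        raise ValueError("Provide --face_model_path, --lp_model_path, or both")
    # At least one input is required.
    if args.input_image_path is None and args.input_video_path is None:
        raise ValueError("Provide --input_image_path, --input_video_path, or both")
    # Each input must be paired with its corresponding output path.
    if args.input_image_path is not None and args.output_image_path is None:
        raise ValueError("--output_image_path is required with --input_image_path")
    if args.input_video_path is not None and args.output_video_path is None:
        raise ValueError("--output_video_path is required with --input_video_path")
```
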
### CLI command example

Download the git repo locally and run the following commands. Please note that these commands assume you have created a folder `/home/${USER}/ego_blur_assets/` into which you have extracted the zipped models, and that it contains a test image `test_image.jpg` and a test video `test_video.mp4`.

```
conda activate ego_blur
```

#### demo command for face blurring on the demo\_assets image

```
python script/demo_ego_blur.py --face_model_path /home/${USER}/ego_blur_assets/ego_blur_face.jit --input_image_path demo_assets/test_image.jpg --output_image_path /home/${USER}/ego_blur_assets/test_image_output.jpg
```

#### demo command for face blurring on an image using default arguments

```
python script/demo_ego_blur.py --face_model_path /home/${USER}/ego_blur_assets/ego_blur_face.jit --input_image_path /home/${USER}/ego_blur_assets/test_image.jpg --output_image_path /home/${USER}/ego_blur_assets/test_image_output.jpg
```

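With default arguments, detections scoring below the 0.1 threshold are discarded before blurring. A minimal sketch of that filtering step, assuming simple `(box, score)` pairs rather than the repo's actual detection objects:

```python
def filter_detections(detections, score_threshold=0.1):
    """Keep only detections whose confidence meets the threshold.

    detections: list of (box, score) pairs, score in [0.0, 1.0].
    """
    return [(box, score) for box, score in detections if score >= score_threshold]
```
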
#### demo command for face blurring on an image with custom arguments

```
python script/demo_ego_blur.py --face_model_path /home/${USER}/ego_blur_assets/ego_blur_face.jit --input_image_path /home/${USER}/ego_blur_assets/test_image.jpg --output_image_path /home/${USER}/ego_blur_assets/test_image_output.jpg --face_model_score_threshold 0.9 --nms_iou_threshold 0.3 --scale_factor_detections 1.15
```

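The `--scale_factor_detections 1.15` flag above enlarges each detected box by 15% before blurring. One common way to do this kind of scaling is around the box center; the sketch below assumes `(x1, y1, x2, y2)` boxes and that center-based convention, which may differ from the repo's implementation:

```python
def scale_box(x1, y1, x2, y2, scale):
    """Scale a box around its center by the given positive factor."""
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2      # box center
    w, h = (x2 - x1) * scale, (y2 - y1) * scale  # scaled width/height
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```
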
#### demo command for license plate blurring on an image

```
python script/demo_ego_blur.py --lp_model_path /home/${USER}/ego_blur_assets/ego_blur_lp.jit --input_image_path /home/${USER}/ego_blur_assets/test_image.jpg --output_image_path /home/${USER}/ego_blur_assets/test_image_output.jpg --lp_model_score_threshold 0.9 --nms_iou_threshold 0.3 --scale_factor_detections 1
```

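`--nms_iou_threshold 0.3` suppresses overlapping boxes whose intersection-over-union exceeds 0.3. The standard IoU computation that NMS relies on can be sketched as follows (illustrative only, not the repo's exact code):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # Coordinates of the intersection rectangle (empty if boxes are disjoint).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```
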
#### demo command for face blurring and license plate blurring on an input image and video

```
python script/demo_ego_blur.py --face_model_path /home/${USER}/ego_blur_assets/ego_blur_face.jit --lp_model_path /home/${USER}/ego_blur_assets/ego_blur_lp.jit --input_image_path /home/${USER}/ego_blur_assets/test_image.jpg --output_image_path /home/${USER}/ego_blur_assets/test_image_output.jpg --input_video_path /home/${USER}/ego_blur_assets/test_video.mp4 --output_video_path /home/${USER}/ego_blur_assets/test_video_output.mp4 --face_model_score_threshold 0.9 --lp_model_score_threshold 0.9 --nms_iou_threshold 0.3 --scale_factor_detections 1 --output_video_fps 20
```

## License

The model is licensed under the [Apache 2.0 license](https://github.com/facebookresearch/EgoBlur/blob/main/LICENSE).

## Contributing

See [contributing](https://github.com/facebookresearch/EgoBlur/blob/main/CONTRIBUTING.md) and the [code of conduct](https://github.com/facebookresearch/EgoBlur/blob/main/CODE_OF_CONDUCT.md).

## Citing EgoBlur

If you use EgoBlur in your research, please use the following BibTeX entry.

```
@misc{raina2023egoblur,
      title={EgoBlur: Responsible Innovation in Aria},
      author={Nikhil Raina and Guruprasad Somasundaram and Kang Zheng and Sagar Miglani and Steve Saarinen and Jeff Meissner and Mark Schwesinger and Luis Pesqueira and Ishita Prasad and Edward Miller and Prince Gupta and Mingfei Yan and Richard Newcombe and Carl Ren and Omkar M Parkhi},
      year={2023},
      eprint={2308.13093},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```