Fix some mistaken model names in the README.
README.md
---
library_name: BiRefNet
tags:
- background-removal
- mask-generation
- Image Matting
- pytorch_model_hub_mixin
- model_hub_mixin
repo_url: https://github.com/ZhengPeng7/BiRefNet
pipeline_tag: image-segmentation
---
<h1 align="center">Bilateral Reference for High-Resolution Dichotomous Image Segmentation</h1>

<div align='center'>
<a href='https://scholar.google.com/citations?user=TZRzWOsAAAAJ' target='_blank'><strong>Peng Zheng</strong></a><sup> 1,4,5,6</sup>,&nbsp;
<a href='https://scholar.google.com/citations?user=0uPb8MMAAAAJ' target='_blank'><strong>Dehong Gao</strong></a><sup> 2</sup>,&nbsp;
<a href='https://scholar.google.com/citations?user=kakwJ5QAAAAJ' target='_blank'><strong>Deng-Ping Fan</strong></a><sup> 1*</sup>,&nbsp;
<a href='https://scholar.google.com/citations?user=9cMQrVsAAAAJ' target='_blank'><strong>Li Liu</strong></a><sup> 3</sup>,&nbsp;
<a href='https://scholar.google.com/citations?user=qQP6WXIAAAAJ' target='_blank'><strong>Jorma Laaksonen</strong></a><sup> 4</sup>,&nbsp;
<a href='https://scholar.google.com/citations?user=pw_0Z_UAAAAJ' target='_blank'><strong>Wanli Ouyang</strong></a><sup> 5</sup>,&nbsp;
<a href='https://scholar.google.com/citations?user=stFCYOAAAAAJ' target='_blank'><strong>Nicu Sebe</strong></a><sup> 6</sup>
</div>

<div align='center'>
<sup>1 </sup>Nankai University&nbsp; <sup>2 </sup>Northwestern Polytechnical University&nbsp; <sup>3 </sup>National University of Defense Technology&nbsp; <sup>4 </sup>Aalto University&nbsp; <sup>5 </sup>Shanghai AI Laboratory&nbsp; <sup>6 </sup>University of Trento
</div>

<div align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
<a href='https://arxiv.org/pdf/2401.03407'><img src='https://img.shields.io/badge/arXiv-BiRefNet-red'></a>&nbsp;
<a href='https://drive.google.com/file/d/1aBnJ_R9lbnC2dm8dqD0-pzP2Cu-U1Xpt/view?usp=drive_link'><img src='https://img.shields.io/badge/中文版-BiRefNet-red'></a>&nbsp;
<a href='https://www.birefnet.top'><img src='https://img.shields.io/badge/Page-BiRefNet-red'></a>&nbsp;
<a href='https://drive.google.com/drive/folders/1s2Xe0cjq-2ctnJBR24563yMSCOu4CcxM'><img src='https://img.shields.io/badge/Drive-Stuff-green'></a>&nbsp;
<a href='LICENSE'><img src='https://img.shields.io/badge/License-MIT-yellow'></a>&nbsp;
<a href='https://huggingface.co/spaces/ZhengPeng7/BiRefNet_demo'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HF%20Spaces-BiRefNet-blue'></a>&nbsp;
<a href='https://huggingface.co/ZhengPeng7/BiRefNet'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20HF%20Models-BiRefNet-blue'></a>&nbsp;
<a href='https://colab.research.google.com/drive/14Dqg7oeBkFEtchaHLNpig2BcdkZEogba?usp=drive_link'><img src='https://img.shields.io/badge/Single_Image_Inference-F9AB00?style=for-the-badge&logo=googlecolab&color=525252'></a>&nbsp;
<a href='https://colab.research.google.com/drive/1MaEiBfJ4xIaZZn0DqKrhydHB8X97hNXl#scrollTo=DJ4meUYjia6S'><img src='https://img.shields.io/badge/Inference_&_Evaluation-F9AB00?style=for-the-badge&logo=googlecolab&color=525252'></a>&nbsp;
</div>

## This repo holds the official weights of BiRefNet for general matting.

### Training Sets:
+ P3M-10k (except TE-P3M-500-NP)
+ TR-humans
+ AM-2k
+ AIM-500
+ Human-2k (synthesized with BG-20k; see the compositing sketch after this list)
+ Distinctions-646 (synthesized with BG-20k)
+ HIM2K
+ PPM-100

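For the sets marked "synthesized with BG-20k", synthesis typically means alpha-compositing each matting foreground onto background images from BG-20k via the standard equation I = αF + (1 − α)B. Below is a minimal sketch of that idea under stated assumptions: the file names are placeholders, and the exact pipeline used to build these sets may differ.

```python
# Minimal sketch of alpha compositing (I = alpha * F + (1 - alpha) * B),
# the standard way matting datasets are synthesized onto new backgrounds.
# File names are placeholders; the actual synthesis pipeline may differ.
import numpy as np
from PIL import Image

fg_img = Image.open('foreground.png').convert('RGB')
bg_img = Image.open('bg20k_sample.jpg').convert('RGB').resize(fg_img.size)
alpha_img = Image.open('alpha.png').convert('L')  # single-channel matte

fg = np.asarray(fg_img, dtype=np.float32)
bg = np.asarray(bg_img, dtype=np.float32)
# Scale the matte to [0, 1] and add a channel axis to broadcast over RGB.
alpha = np.asarray(alpha_img, dtype=np.float32)[..., None] / 255.0

composite = alpha * fg + (1.0 - alpha) * bg
Image.fromarray(composite.astype(np.uint8)).save('composite.jpg')
```
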
### Validation Sets:
+ TE-P3M-500-NP

### Performance:
| Dataset | Method | maxFm | wFmeasure | MSE | Smeasure | meanEm | HCE | maxEm | meanFm | adpEm | adpFm | mBA | maxBIoU | meanBIoU |
| :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: |
| TE-P3M-500-NP | BiRefNet-matting--epoch_100 | .979 | .996 | .988 | .003 | .997 | .986 | .988 | .864 | .885 | .000 | .830 | .940 | .888 |

**Check the main BiRefNet model repo for more info and how to use it:**
https://huggingface.co/ZhengPeng7/BiRefNet/blob/main/README.md

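As a quick reference, here is a minimal loading-and-inference sketch in the spirit of the usage shown in that README. The repo id `ZhengPeng7/BiRefNet-matting`, the 1024×1024 input size, and the ImageNet normalization are assumptions for this matting variant; check the model page and the main README above for the exact values.

```python
# A minimal sketch, not the authoritative pipeline: load the weights via
# Hugging Face and run one image through the model.
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

model = AutoModelForImageSegmentation.from_pretrained(
    'ZhengPeng7/BiRefNet-matting',  # assumed repo id; use the one on this page
    trust_remote_code=True,         # the architecture is defined in the repo
)
model.eval()

# 1024x1024 inputs with ImageNet statistics are assumed here; see the main
# BiRefNet README for the exact preprocessing.
preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open('example.jpg').convert('RGB')
inputs = preprocess(image).unsqueeze(0)

with torch.no_grad():
    # The model returns multi-scale maps; take the last (finest) prediction.
    pred = model(inputs)[-1].sigmoid().cpu()[0].squeeze()

# Use the predicted alpha matte to cut out the foreground.
matte = transforms.ToPILImage()(pred).resize(image.size)
image.putalpha(matte)
image.save('example_foreground.png')
```
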
**Also check the BiRefNet GitHub repo for everything else you may need:**
https://github.com/ZhengPeng7/BiRefNet

## Acknowledgement:

+ Many thanks to @freepik for generously providing the GPU resources used to train this model!

## Citation

```bibtex
@article{zheng2024birefnet,
  title={Bilateral Reference for High-Resolution Dichotomous Image Segmentation},
  author={Zheng, Peng and Gao, Dehong and Fan, Deng-Ping and Liu, Li and Laaksonen, Jorma and Ouyang, Wanli and Sebe, Nicu},
  journal={CAAI Artificial Intelligence Research},
  volume={3},
  pages={9150038},
  year={2024}
}
```