OriLib committed
Commit 30a4470
1 Parent(s): 01df834

Update README.md

Files changed (1):
  1. README.md +86 -183
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  license: other
- license_name: bria-rmbg-1.4
+ license_name: bria-rmbg-2.0
  license_link: https://bria.ai/bria-huggingface-model-license-agreement/
  pipeline_tag: image-segmentation
  tags:
@@ -27,6 +27,62 @@ Developed by BRIA AI, RMBG v2.0 is available as a source-available model for non-commercial use
  ![examples](t4.png)

  ## Model Details
+ #####
+ ### Model Description
+
+ - **Developed by:** [BRIA AI](https://bria.ai/)
+ - **Model type:** Background Removal
+ - **License:** [bria-rmbg-2.0](https://bria.ai/bria-huggingface-model-license-agreement/)
+   - The model is released under a Creative Commons license for non-commercial use.
+   - Commercial use is subject to a commercial agreement with BRIA. [Contact Us](https://bria.ai/contact-us) for more information.
+
+ - **Model Description:** BRIA RMBG-2.0 is a segmentation model trained exclusively on a professional-grade dataset.
+ - **More information:** [BRIA AI](https://bria.ai/)
+
+ ## Training Data
+ The Bria-RMBG model was trained on over 15,000 high-quality, high-resolution, manually labeled (pixel-wise accuracy), fully licensed images.
+ Our benchmark included balanced gender, balanced ethnicity, and people with different types of disabilities.
+ For clarity, we provide our data distribution across several categories, demonstrating the model’s versatility.
+
+ ### Distribution of images
+
+ | Category                           | Distribution |
+ | ---------------------------------- | ------------:|
+ | Objects only                       |       45.11% |
+ | People with objects/animals        |       25.24% |
+ | People only                        |       17.35% |
+ | People/objects/animals with text   |        8.52% |
+ | Text only                          |        2.52% |
+ | Animals only                       |        1.89% |
+
+ | Category            | Distribution |
+ | ------------------- | ------------:|
+ | Photorealistic      |       87.70% |
+ | Non-photorealistic  |       12.30% |
+
+ | Category              | Distribution |
+ | --------------------- | ------------:|
+ | Non-solid background  |       52.05% |
+ | Solid background      |       47.95% |
+
+ | Category                            | Distribution |
+ | ----------------------------------- | ------------:|
+ | Single main foreground object       |       51.42% |
+ | Multiple objects in the foreground  |       48.58% |
+
+ ## Qualitative Evaluation
+
+ ![examples](results.png)
+
+ ## Architecture
+
+ RMBG-2.0 is built on the BiRefNet architecture and enhanced with our proprietary dataset. This training data significantly improves the model’s accuracy and effectiveness for the background-removal task.
+
+ #####

  ### Model Description
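The architecture note above states that RMBG-2.0 builds on BiRefNet, and the usage snippet in the next hunk loads the checkpoint through `BiRefNet.from_pretrained`. If the repository also ships `transformers`-compatible remote code (an assumption here, not something this diff confirms), a minimal loading sketch would look like this:

```python
# Minimal sketch, assuming briaai/RMBG-2.0 ships transformers-compatible
# remote modeling code; if it does not, use BiRefNet.from_pretrained as
# shown in the README snippet below.
import torch
from transformers import AutoModelForImageSegmentation

model = AutoModelForImageSegmentation.from_pretrained(
    "briaai/RMBG-2.0",
    trust_remote_code=True,  # executes model code hosted in the repo
)
model.eval()
if torch.cuda.is_available():
    model.to("cuda")  # optional; the README snippet assumes a CUDA device
```

Both paths should resolve to the same weights; the `BiRefNet` route is the one the README documents verbatim.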
 
@@ -59,200 +115,47 @@ Developed by BRIA AI, RMBG v2.0 is available as a source-available model for non-commercial use
  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

  ```python
- # Imports
  from PIL import Image
  import matplotlib.pyplot as plt
  import torch
  from torchvision import transforms
  from models.birefnet import BiRefNet

- birefnet = BiRefNet.from_pretrained('ZhengPeng7/BiRefNet')
+ birefnet = BiRefNet.from_pretrained('briaai/RMBG-2.0')
  torch.set_float32_matmul_precision(['high', 'highest'][0])
  birefnet.to('cuda')
  birefnet.eval()

- def extract_object(birefnet, imagepath):
-     # Data settings
-     image_size = (1024, 1024)
-     transform_image = transforms.Compose([
-         transforms.Resize(image_size),
-         transforms.ToTensor(),
-         transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-     ])
-
-     image = Image.open(imagepath)
-     input_images = transform_image(image).unsqueeze(0).to('cuda')
-
-     # Prediction
-     with torch.no_grad():
-         preds = birefnet(input_images)[-1].sigmoid().cpu()
-     pred = preds[0].squeeze()
-     pred_pil = transforms.ToPILImage()(pred)
-     mask = pred_pil.resize(image.size)
-     image.putalpha(mask)
-     return image, mask
-
- # Visualization
- plt.axis("off")
- plt.imshow(extract_object(birefnet, imagepath='PATH-TO-YOUR_IMAGE.jpg')[0])
- plt.show()
-
+ # Data settings
+ image_size = (1024, 1024)
+ transform_image = transforms.Compose([
+     transforms.Resize(image_size),
+     transforms.ToTensor(),
+     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+ ])
+
+ input_image_path = 'PATH-TO-YOUR_IMAGE.jpg'
+ image = Image.open(input_image_path)
+ input_images = transform_image(image).unsqueeze(0).to('cuda')
+
+ # Prediction
+ with torch.no_grad():
+     preds = birefnet(input_images)[-1].sigmoid().cpu()
+ pred = preds[0].squeeze()
+ pred_pil = transforms.ToPILImage()(pred)
+ mask = pred_pil.resize(image.size)
+ image.putalpha(mask)
+
+ image.save("no_bg_image.png")
  ```


- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
+ ## Citation
  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
+ ```
+ @article{BiRefNet,
+   title={Bilateral Reference for High-Resolution Dichotomous Image Segmentation},
+   author={Zheng, Peng and Gao, Dehong and Fan, Deng-Ping and Liu, Li and Laaksonen, Jorma and Ouyang, Wanli and Sebe, Nicu},
+   journal={CAAI Artificial Intelligence Research},
+   year={2024}
+ }
+ ```
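The updated snippet above hard-codes `'cuda'` and leaves the input path as a placeholder. A device-agnostic variant is sketched below; it reuses the README's preprocessing unchanged and only adds a `convert("RGB")` guard so RGBA or palette inputs do not break `Normalize`. The `models.birefnet` import is taken from the README as-is and assumes the repository code is importable.

```python
# Device-agnostic sketch of the README snippet above (not an official variant).
from PIL import Image
import torch
from torchvision import transforms
from models.birefnet import BiRefNet  # assumes the repo code is on PYTHONPATH

device = "cuda" if torch.cuda.is_available() else "cpu"

birefnet = BiRefNet.from_pretrained('briaai/RMBG-2.0').to(device).eval()

transform_image = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open('PATH-TO-YOUR_IMAGE.jpg').convert("RGB")  # guard non-RGB inputs
input_images = transform_image(image).unsqueeze(0).to(device)

with torch.no_grad():
    preds = birefnet(input_images)[-1].sigmoid().cpu()

# The predicted matte becomes the alpha channel of the original image.
mask = transforms.ToPILImage()(preds[0].squeeze()).resize(image.size)
image.putalpha(mask)
image.save("no_bg_image.png")
```

On CPU this runs the same pipeline, just noticeably slower at the 1024×1024 working resolution.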
 