drjpepper44 committed
Commit 30a81d6
1 Parent(s): 92c2917

Update README.md

Files changed (1): README.md +20 -48

README.md CHANGED
@@ -54,9 +54,9 @@ See the [Glossary](#Glossary) below for a detailed list of the properties genera
 
 
 - **Developed by:** Joel Pepper and Kevin Karnani
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
+<!--- **Shared by [optional]:** [More Information Needed]-->
+- **Model type:** Pytorch pickle file (.pth)
+<!--- **Language(s) (NLP):** [More Information Needed]-->
 - **License:** MIT <!-- As listed on the repo -->
 - **Finetuned from model:** [detectron2 v0.6](https://github.com/facebookresearch/detectron2)
 
@@ -89,30 +89,31 @@ Object detection is currently being performed on 5 detection classes (fish, fish
 8. These do not need to be adhered to if properly set up/modified for a specific use case.
 
 
-### Downstream Use [optional]
+<!--### Downstream Use [optional]
+
 
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 
 [More Information Needed]
 
 ### Out-of-Scope Use
 
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
-[More Information Needed]
+[More Information Needed]-->
 
 ## Bias, Risks, and Limitations
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-This model was trained solely for use on fish specimens.
+- This model was trained solely for use on fish specimens.
+- The model can detect and process multiple fish within a single image, although the capability is not extensively tested.
+- The model was only trained on rectangular, machine printed tags that are aligned with the image (i.e. tags placed at an angle may not be handled correctly).
 
 <!-- [More Information Needed] -->
 
 The authors have declared that no conflict of interest exists.
 
 
-### Recommendations
+<!--### Recommendations-->
 
 <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
@@ -213,21 +214,15 @@ pipenv run python3 train_model.py
 
 #### Preprocessing [optional]
 
-[More Information Needed]
-
+- Manual image preprocessing is not necessary. Some versions of the code do however contrast enhance the images internally (see [Citation](#Citation))
 
 #### Training Hyperparameters
 
-- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
-#### Speeds, Sizes, Times [optional]
-
-<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
-[More Information Needed]
+- See [Citation](#Citation)/source code.
 
 ## Evaluation
 
+- See [Citation](#Citation)
 <!-- This section describes the evaluation protocols and provides the results. -->
 
 ### Testing Data, Factors & Metrics
@@ -235,33 +230,25 @@ pipenv run python3 train_model.py
 #### Testing Data
 
 <!-- This should link to a Data Card if possible. -->
-
-[More Information Needed]
+- See [Citation](#Citation)
 
 #### Factors
 
+- See [Citation](#Citation)
 <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
 
-[More Information Needed]
-
 #### Metrics
 
+- See [Citation](#Citation)
 <!-- These are the evaluation metrics being used, ideally with a description of why. -->
 
-[More Information Needed]
 
 ### Results
 
-[More Information Needed]
+- See [Citation](#Citation)
 
 #### Summary
 
-
-
-## Model Examination
-
-<!-- Relevant interpretability work for the model goes here -->
-
 ### Goal
 
 To develop a tool to check the validity of metadata associated with an image, and generate things that are missing. Also includes various geometric and statistical properties on the mask generated over the biological specimen presented.
@@ -279,33 +266,18 @@ The metadata generated produces various statistical and geometric properties of
 
 ## Environmental Impact
 
-<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
-Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://doi.org/10.48550/arXiv.1910.09700).
-
-- **Hardware Type:** [More Information Needed]
-- **Hours used:** [More Information Needed]
-- **Cloud Provider:** [More Information Needed]
-- **Compute Region:** [More Information Needed]
-- **Carbon Emitted:** [More Information Needed]
+Extremely minimal as a regular workstation computer was used for this paper.
 
 ## Technical Specifications [optional]
 
 ### Model Architecture and Objective
 
-[More Information Needed]
+- See [Citation](#Citation)
 
 ### Compute Infrastructure
 
-[More Information Needed]
-
-#### Hardware
-
-[More Information Needed]
-
-#### Software
+- Desktop computer with an Intel(R) Xeon(R) W-2175 CPU and an Nvidia Quadro RTX 4000 GPU.
 
-[More Information Needed]
 
 ## Citation
 