Add links to paper and demo, add the OpenRAIL-M license to the README YAML, and rename the license file for proper formatting.
- LICENSE → LICENSE.md +0 -0
- README.md +47 -34
LICENSE → LICENSE.md
RENAMED
File without changes
README.md
CHANGED
@@ -1,5 +1,7 @@
 ---
-license: mit
+license:
+- mit
+- bigscience-openrail-m
 language:
 - en
 library_name: open_clip
@@ -21,6 +23,7 @@ datasets:
 - TreeOfLife-10M
 - iNat21
 - BIOSCAN-1M
+- EOL
 ---
 
 
@@ -46,14 +49,16 @@ In this way, BioCLIP offers potential to aid biologists in discovery of new and
 
 - **Developed by:** Samuel Stevens, Jiaman Wu, Matthew J. Thompson, Elizabeth G. Campolongo, Chan Hee Song, David Edward Carlyn, Li Dong, Wasila M. Dahdul, Charles Stewart, Tanya Berger-Wolf, Wei-Lun Chao, and Yu Su
 - **Model type:** Vision Transformer (ViT-B/16)
-- **License:** MIT
+- **License:** MIT, OpenRAIL-M
 - **Fine-tuned from model:** OpenAI CLIP, ViT-B/16
 
+This model was developed for the benefit of the community as an open-source product, thus we request that any derivative products are also open-source.
+
 ### Model Sources
 
 - **Repository:** [BioCLIP](https://github.com/Imageomics/BioCLIP)
-- **Paper:**
-- **Demo:** [BioCLIP Demo](https://huggingface.co/spaces/imageomics/
+- **Paper:** BioCLIP: A Vision Foundation Model for the Tree of Life ([arXiv](https://doi.org/10.48550/arXiv.2311.18803))
+- **Demo:** [BioCLIP Demo](https://huggingface.co/spaces/imageomics/bioclip-demo)
 
 ## Uses
 
@@ -63,8 +68,8 @@ The ViT-B/16 vision encoder is recommended as a base model for any computer visi
 
 ### Direct Use
 
-See the demo [here](https://huggingface.co/spaces/imageomics/
-It can also be used in a few-shot setting with a KNN; please see [our paper]() for details for both few-shot and zero-shot settings without fine-tuning.
+See the demo [here](https://huggingface.co/spaces/imageomics/bioclip-demo) for examples of zero-shot classification.
+It can also be used in a few-shot setting with a KNN; please see [our paper](https://doi.org/10.48550/arXiv.2311.18803) for details for both few-shot and zero-shot settings without fine-tuning.
 
 
 ## Bias, Risks, and Limitations
@@ -119,7 +124,7 @@ We tested BioCLIP on the following collection of 10 biologically-relevant tasks.
 - [Birds 525](https://www.kaggle.com/datasets/gpiosenka/100-bird-species): We evaluated on the 2,625 test images provided with the dataset.
 - [Rare Species](https://huggingface.co/datasets/imageomics/rare-species): A new dataset we curated for the purpose of testing this model and to contribute to the ML for Conservation community. It consists of 400 species labeled Near Threatened through Extinct in the Wild by the [IUCN Red List](https://www.iucnredlist.org/), with 30 images per species. For more information, see our dataset, [Rare Species](https://huggingface.co/datasets/imageomics/rare-species).
 
-For more information about the contents of these datasets, see Table 2 and associated sections of [our paper]().
+For more information about the contents of these datasets, see Table 2 and associated sections of [our paper](https://doi.org/10.48550/arXiv.2311.18803).
 
 ### Metrics
 
@@ -129,7 +134,7 @@ We use top-1 and top-5 accuracy to evaluate models, and validation loss to choos
 
 We compare BioCLIP to OpenAI's CLIP and OpenCLIP's LAION-2B checkpoint.
 Here are the zero-shot classification results on our benchmark tasks.
-Please see [our paper]() for few-shot results.
+Please see [our paper](https://doi.org/10.48550/arXiv.2311.18803) for few-shot results.
 
 <table cellpadding="0" cellspacing="0">
 <thead>
@@ -219,7 +224,7 @@ BioCLIP outperforms general-domain baselines by 18% on average.
 
 ### Model Examination
 
-We encourage readers to see Section 4.6 of [our paper]().
+We encourage readers to see Section 4.6 of [our paper](https://doi.org/10.48550/arXiv.2311.18803).
 In short, BioCLIP forms representations that more closely align to the taxonomic hierarchy compared to general-domain baselines like CLIP or OpenCLIP.
 
 
@@ -238,39 +243,47 @@ In short, BioCLIP forms representations that more closely align to the taxonomic
 }
 ```
 
-Please also cite our paper:
-
+Please also cite our paper:
+
 ```
 @article{stevens2023bioclip,
-  title
-  author
-
-
-
-
+  title = {BIOCLIP: A Vision Foundation Model for the Tree of Life},
+  author = {Samuel Stevens and Jiaman Wu and Matthew J Thompson and Elizabeth G Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
+  year = {2023},
+  eprint = {2311.18803},
+  archivePrefix = {arXiv},
+  primaryClass = {cs.CV}
 }
+
 ```
--->
 
-
+
+Please also consider citing OpenCLIP, iNat21 and BIOSCAN-1M:
 ```
-@software{
-  author
-
-
-
-
-
+@software{ilharco_gabriel_2021_5143773,
+  author={Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
+  title={OpenCLIP},
+  year={2021},
+  doi={10.5281/zenodo.5143773},
+}
+```
+```
+@misc{inat2021,
+  author={Van Horn, Grant and Mac Aodha, Oisin},
+  title={iNat Challenge 2021 - FGVC8},
+  publisher={Kaggle},
+  year={2021},
+  url={https://kaggle.com/competitions/inaturalist-2021}
+}
+```
+```
+@inproceedings{gharaee2023step,
+  author={Gharaee, Z. and Gong, Z. and Pellegrino, N. and Zarubiieva, I. and Haurum, J. B. and Lowe, S. C. and McKeown, J. T. A. and Ho, C. Y. and McLeod, J. and Wei, Y. C. and Agda, J. and Ratnasingham, S. and Steinke, D. and Chang, A. X. and Taylor, G. W. and Fieguth, P.},
+  title={A Step Towards Worldwide Biodiversity Assessment: The {BIOSCAN-1M} Insect Dataset},
+  booktitle={Advances in Neural Information Processing Systems ({NeurIPS}) Datasets \& Benchmarks Track},
+  year={2023},
 }
 ```
-
-**APA:**
-
-Stevens, S, Wu, J., Thompson, M. J., Campolongo, E. G., Song, C. H., Carlyn, D. E., Dong, L., Dahdul, W. M., Stewart, C., Berger-Wolf, T., Chao, W.L., & Su, Y. (2023) BioCLIP (Version v0.1) [Computer software]. https://doi.org/
-
-Please also cite OpenCLIP:
-
-Ilharco, G., Wortsman, M., Wightman, R., Gordon, C., Carlini, N., Taori, R., Dave, A., Shankar, V., Namkoong, H., Miller, J., Hajishirzi, H., Farhadi, A., & Schmidt, L. (2021). OpenCLIP (Version v0.1) [Computer software]. https://doi.org/10.5281/zenodo.5143773
 
 ## Acknowledgements
 
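The Direct Use text in the updated card points to the demo for zero-shot classification through `open_clip`, the library declared in the YAML header. Below is a minimal sketch of that workflow; the Hub identifier `hf-hub:imageomics/bioclip`, the image path, and the candidate taxon names are illustrative assumptions, so check the model page for the exact identifier and recommended prompt format.

```python
# Zero-shot classification sketch with open_clip (identifier and file names are placeholders).
import torch
import open_clip
from PIL import Image

# Assumed Hub identifier; confirm against the model card before use.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip")
model.eval()

labels = ["Danaus plexippus", "Vanessa cardui", "Papilio machaon"]  # candidate taxa (examples)
image = preprocess(Image.open("butterfly.jpg")).unsqueeze(0)        # placeholder image
text = tokenizer([f"a photo of {name}" for name in labels])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then rank labels by cosine similarity to the image.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs.squeeze(0).tolist())))
```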
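The same section mentions few-shot use with a KNN over the frozen image embeddings; the paper describes those experiments, and the sketch below is one possible setup rather than the paper's exact protocol. It uses scikit-learn's `KNeighborsClassifier`, and the support and query file lists and labels are placeholders.

```python
# Few-shot KNN sketch over frozen BioCLIP image embeddings (all paths/labels are placeholders).
import torch
import open_clip
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")  # assumed identifier
model.eval()

def embed(paths):
    """Return L2-normalized image embeddings for a list of image paths."""
    batch = torch.stack([preprocess(Image.open(p)) for p in paths])
    with torch.no_grad():
        feats = model.encode_image(batch)
        feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.cpu().numpy()

# A handful of labeled support images per species (hypothetical files).
support_items = [
    ("monarch_01.jpg", "Danaus plexippus"),
    ("monarch_02.jpg", "Danaus plexippus"),
    ("painted_lady_01.jpg", "Vanessa cardui"),
    ("painted_lady_02.jpg", "Vanessa cardui"),
]
support_paths, support_labels = zip(*support_items)

knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
knn.fit(embed(list(support_paths)), list(support_labels))

query_paths = ["unknown_01.jpg", "unknown_02.jpg"]  # images to identify
print(knn.predict(embed(query_paths)))
```

Cosine distance on normalized embeddings keeps the comparison consistent with how CLIP-style encoders are trained; with more support images per class, a larger `n_neighbors` is a reasonable choice.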