---
license: cc-by-4.0
size_categories:
- 100K<n<1M
task_categories:
- object-detection
- image-segmentation
language:
- en
pretty_name: COCO Detection
---
# Dataset Card for "COCO Detection"
## Quick Start
### Usage
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('whyen-wang/coco_detection')
>>> example = dataset['train'][500]
>>> print(example)
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x426>,
'bboxes': [
[192.4199981689453, 220.17999267578125,
129.22999572753906, 148.3800048828125],
[76.94000244140625, 146.6300048828125,
104.55000305175781, 109.33000183105469],
[302.8800048828125, 115.2699966430664,
99.11000061035156, 119.2699966430664],
[0.0, 0.800000011920929,
592.5700073242188, 420.25]],
'categories': [46, 46, 46, 55],
'inst.rles': {
'size': [[426, 640], [426, 640], [426, 640], [426, 640]],
'counts': [
'gU`2b0d;...', 'RXP16m<=...', ']Xn34S=4...', 'n:U2o8W2...'
]}}
```
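Note that `bboxes` uses COCO's `[x, y, width, height]` convention with absolute pixel coordinates. Many tools expect corner coordinates instead; a minimal sketch of the conversion (the helper name `xywh_to_xyxy` is illustrative, not part of the dataset):

```python
import numpy as np

def xywh_to_xyxy(bboxes):
    """Convert COCO [x, y, width, height] boxes to [x1, y1, x2, y2] corners."""
    b = np.asarray(bboxes, dtype=np.float64).copy()
    b[:, 2:] += b[:, :2]  # add width/height to the top-left corner
    return b

corners = xywh_to_xyxy([[192.42, 220.18, 129.23, 148.38]])
```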
### Visualization
```python
>>> import cv2
>>> import numpy as np
>>> from PIL import Image
>>> from pycocotools import mask as maskUtils
>>> def transforms(examples):
...     inst_rles = examples.pop('inst.rles')
...     annotation = []
...     for i in inst_rles:
...         rles = [
...             {'size': size, 'counts': counts}
...             for size, counts in zip(i['size'], i['counts'])
...         ]
...         annotation.append(maskUtils.decode(rles))
...     examples['annotation'] = annotation
...     return examples
>>> def visualize(example, names, colors):
...     image = np.array(example['image'])
...     bboxes = np.array(example['bboxes']).round().astype(int)
...     bboxes[:, 2:] += bboxes[:, :2]  # [x, y, w, h] -> [x1, y1, x2, y2]
...     categories = example['categories']
...     masks = example['annotation']
...     n = len(bboxes)
...     for i in range(n):
...         c = categories[i]
...         color, name = colors[c], names[c]
...         cv2.rectangle(
...             image, tuple(bboxes[i, :2]), tuple(bboxes[i, 2:]),
...             color.tolist(), 2
...         )
...         cv2.putText(
...             image, name, tuple(bboxes[i, :2]), cv2.FONT_HERSHEY_SIMPLEX,
...             1, color.tolist(), 2, cv2.LINE_AA, False
...         )
...         # Blend each instance mask into the image at half opacity.
...         image[masks[..., i] == 1] = image[masks[..., i] == 1] // 2 + color // 2
...     return image
>>> dataset.set_transform(transforms)
>>> names = dataset['train'].features['categories'].feature.names
>>> colors = np.ones((80, 3), np.uint8) * 255
>>> colors[:, 0] = np.linspace(0, 255, 80)
>>> colors = cv2.cvtColor(colors[None], cv2.COLOR_HSV2RGB)[0]
>>> example = dataset['train'][500]
>>> Image.fromarray(visualize(example, names, colors))
```
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://cocodataset.org/
- **Repository:** None
- **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
- **Leaderboard:** [Papers with Code](https://paperswithcode.com/dataset/coco)
- **Point of Contact:** None
### Dataset Summary
COCO is a large-scale object detection, segmentation, and captioning dataset.
### Supported Tasks and Leaderboards
[Object Detection](https://huggingface.co/tasks/object-detection)
[Image Segmentation](https://huggingface.co/tasks/image-segmentation)
### Languages
en
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"image": PIL.Image(mode="RGB"),
"bboxes": [
[192.4199981689453, 220.17999267578125,
129.22999572753906, 148.3800048828125],
[76.94000244140625, 146.6300048828125,
104.55000305175781, 109.33000183105469],
[302.8800048828125, 115.2699966430664,
99.11000061035156, 119.2699966430664],
[0.0, 0.800000011920929,
592.5700073242188, 420.25]],
"categories": [46, 46, 46, 55],
"inst.rles": {
"size": [[426, 640], [426, 640], [426, 640], [426, 640]],
"counts": ["gU`2b0d;...", "RXP16m<=...", "]Xn34S=4...", "n:U2o8W2..."]
}
}
```
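The `counts` strings above are COCO's compressed RLE format, which is normally decoded with `pycocotools.mask.decode`. To illustrate the underlying scheme, here is a minimal decoder for the *uncompressed* RLE variant, where `counts` is a list of integers giving alternating run lengths of 0s and 1s in column-major order (the tiny 2x2 example is hypothetical, not taken from the dataset):

```python
import numpy as np

def decode_uncompressed_rle(rle):
    """Decode an uncompressed COCO RLE dict into a binary (H, W) mask.

    Runs alternate between 0 and 1, starting with 0, and are laid out
    in column-major (Fortran) order, following the COCO convention.
    """
    h, w = rle['size']
    flat = np.zeros(h * w, dtype=np.uint8)
    pos, val = 0, 0
    for run in rle['counts']:
        flat[pos:pos + run] = val
        pos += run
        val = 1 - val
    return flat.reshape((h, w), order='F')

# Runs [1, 2, 1] -> column-major pixels [0, 1, 1, 0] -> a 2x2 mask.
mask = decode_uncompressed_rle({'size': [2, 2], 'counts': [1, 2, 1]})
```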
### Data Fields
- `image`: a `PIL.Image` RGB image.
- `bboxes`: a list of bounding boxes in COCO `[x, y, width, height]` format, in absolute pixel coordinates.
- `categories`: a list of class indices, one per box, over the 80 COCO object categories.
- `inst.rles`: per-instance segmentation masks as COCO run-length encodings, with `size` (`[height, width]`) and compressed `counts` strings, decodable with `pycocotools.mask.decode`.
### Data Splits
| name | train | validation |
| ------- | ------: | ---------: |
| default | 118,287 | 5,000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 4.0 License
### Citation Information
```
@article{cocodataset,
  author = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and Lubomir D. Bourdev and Ross B. Girshick and James Hays and Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and C. Lawrence Zitnick},
title = {Microsoft {COCO:} Common Objects in Context},
journal = {CoRR},
volume = {abs/1405.0312},
year = {2014},
url = {http://arxiv.org/abs/1405.0312},
archivePrefix = {arXiv},
eprint = {1405.0312},
timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@whyen-wang](https://github.com/whyen-wang) for adding this dataset.