---
language:
- en
license: mit
task_categories:
- sentence-similarity
task_ids:
- semantic-similarity-classification
paperswithcode_id: embedding-data/coco_captions
pretty_name: coco_captions
tags:
- paraphrase-mining
---
# Dataset Card for "coco_captions"
## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://cocodataset.org/#home
- Repository: https://github.com/cocodataset/cocodataset.github.io
- Paper: [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
- Point of Contact: [email protected]
- Size of downloaded dataset files:
- Size of the generated dataset:
- Total amount of disk used: 6.32 MB
### Dataset Summary
COCO is a large-scale object detection, segmentation, and captioning dataset. This repo contains five captions per image, which makes it useful for sentence-similarity tasks.
Disclaimer: The team releasing COCO did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
### Supported Tasks
- Sentence Transformers training; useful for semantic search and sentence similarity (see the sketch below).
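For instance, a pretrained model can score two captions of the same image as near-paraphrases. A minimal sketch, assuming the `sentence-transformers` package; the checkpoint name and example captions are illustrative, not taken from this card:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint, not named by the card
captions = [
    "A man riding a wave on top of a surfboard.",  # illustrative captions,
    "A surfer rides a large wave in the ocean.",   # not drawn from the dataset
]
embeddings = model.encode(captions, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]).item())  # high score for paraphrases
```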
### Languages
- English.
## Dataset Structure
Each example contains a quintet of similar sentences and is formatted as a dictionary with a single key, `"set"`, whose value is the list of five captions:

```
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
...
{"set": [sentence_1, sentence_2, sentence_3, sentence_4, sentence_5]}
```
This dataset is useful for training Sentence Transformers models. Refer to the Sentence Transformers documentation for how to train models using similar pairs of sentences; a minimal training sketch also follows the usage example below.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/coco_captions")
```
The dataset is loaded as a `DatasetDict` and has the format:

```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 82783
    })
})
```
Review an example at index `i` with:

```python
dataset["train"][i]["set"]
```
### Data Instances
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
The annotations in this dataset, along with the COCO website, belong to the COCO Consortium and are licensed under a [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode).
### Citation Information
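A standard citation for the COCO paper linked above:

```bibtex
@article{lin2014microsoft,
  title   = {Microsoft {COCO}: Common Objects in Context},
  author  = {Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C. Lawrence},
  journal = {arXiv preprint arXiv:1405.0312},
  year    = {2014}
}
```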
### Contributions
Thanks to:
- Tsung-Yi Lin - Google Brain
- Genevieve Patterson - MSR, Trash TV
- Matteo R. Ronchi - Caltech
- Yin Cui - Google
- Michael Maire - TTI-Chicago
- Serge Belongie - Cornell Tech
- Lubomir Bourdev - WaveOne, Inc.
- Ross Girshick - FAIR
- James Hays - Georgia Tech
- Pietro Perona - Caltech
- Deva Ramanan - CMU
- Larry Zitnick - FAIR
- Piotr Dollár - FAIR
for adding this dataset.