---
language:
- pt
size_categories:
- 10K<n<100K
task_categories:
- text-to-image
- image-to-text
- text-generation
pretty_name: nocaps Portuguese Translation
dataset_info:
  features:
  - name: image
    dtype: image
  - name: image_coco_url
    dtype: string
  - name: image_date_captured
    dtype: string
  - name: image_file_name
    dtype: string
  - name: image_height
    dtype: int32
  - name: image_width
    dtype: int32
  - name: image_id
    dtype: int32
  - name: image_license
    dtype: int8
  - name: image_open_images_id
    dtype: string
  - name: annotations_ids
    sequence: int32
  - name: annotations_captions
    sequence: string
  - name: image_domain
    dtype: string
  splits:
  - name: test
    num_bytes: 3342886710
    num_examples: 10600
  - name: validation
    num_bytes: 1422203749
    num_examples: 4500
  download_size: 4761190122
  dataset_size: 4765090459
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
---
# 🎉 nocaps Dataset Translation for Portuguese Image Captioning
## 💾 Dataset Summary
nocaps Portuguese Translation is a multimodal dataset for benchmarking Portuguese image captioning. Each image is accompanied by ten descriptive captions written by human annotators. The original English captions were translated into Portuguese using the Google Translator API.
## 🧑💻 How to Get Started with the Dataset
```python
from datasets import load_dataset

dataset = load_dataset('laicsiifes/nocaps-pt-br')
```
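For reference, here is a minimal sketch (assuming the `datasets` library is installed and the Hugging Face Hub is reachable) of inspecting the splits of the loaded `DatasetDict`:

```python
from datasets import load_dataset

# load both splits declared in the metadata (test and validation)
dataset = load_dataset('laicsiifes/nocaps-pt-br')

# print the name and the number of rows of each split
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```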
## ✍️ Languages
The image descriptions in the dataset are in Portuguese.
## 🧱 Dataset Structure
### 📝 Data Instances
An example looks like the following:
```
{
    'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=732x1024>,
    'image_coco_url': 'https://s3.amazonaws.com/nocaps/val/0013ea2087020901.jpg',
    'image_date_captured': '2018-11-06 11:04:33',
    'image_file_name': '0013ea2087020901.jpg',
    'image_height': 1024,
    'image_width': 732,
    'image_id': 0,
    'image_license': 0,
    'image_open_images_id': '0013ea2087020901',
    'annotations_ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    'annotations_captions': [
        'Um bebê está parado na frente de uma casa.',
        'Uma menina com uma jaqueta branca e sandálias.',
        'Uma criança está parada na frente de uma casa.',
        'Uma criança está vestindo uma camisa branca e parada em uma calçada lateral.',
        'Um garotinho está de fralda e com uma camisa branca.',
        'Uma criança usando fralda e sapatos está na calçada.',
        'Uma criança veste uma camisa de cor clara durante o dia.',
        'Uma criança parada na calçada com uma camisa.',
        'Foto em preto e branco de uma menina sorrindo.',
        'um bebê fofo está sozinho com camisa branca'
    ],
    'image_domain': 'in-domain'
}
```
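As a small illustrative sketch of how such an instance can be accessed with the `datasets` API (the `validation` split and index `0` are only examples):

```python
from datasets import load_dataset

dataset = load_dataset('laicsiifes/nocaps-pt-br')

# take the first example of the validation split
example = dataset['validation'][0]

# the image field is decoded into a PIL.Image.Image object on access
print(example['image'].size)    # (width, height), e.g. (732, 1024)
print(example['image_domain'])  # 'in-domain', 'near-domain' or 'out-of-domain'

# each image comes with ten Portuguese captions
for caption in example['annotations_captions']:
    print(caption)
```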
### 🗃️ Data Fields
The data instances have the following fields:
- `image`: a `PIL.Image.Image` object containing the image.
- `image_coco_url`: a `str` containing the URL to the original image.
- `image_date_captured`: a `str` representing the date and time when the image was captured.
- `image_file_name`: a `str` containing the name of the image file.
- `image_height`: an `int` representing the height of the image in pixels.
- `image_width`: an `int` representing the width of the image in pixels.
- `image_id`: an `int` containing the unique identifier of the image.
- `image_license`: an `int` representing the license type of the image.
- `image_open_images_id`: a `str` containing the Open Images identifier of the image.
- `annotations_ids`: a `list` of `int` containing the unique identifiers of the annotations related to the image.
- `annotations_captions`: a `list` of `str` containing the captions describing the image.
- `image_domain`: a `str` indicating the domain of the image. It can be `in-domain`, `near-domain` or `out-of-domain` (see the sketch after this list).
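To make the `image_domain` field concrete, below is a small sketch (assuming the `datasets` library) that counts validation images per domain and keeps only the out-of-domain subset:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset('laicsiifes/nocaps-pt-br')
validation = dataset['validation']

# count validation images per domain by reading only the image_domain column
print(Counter(validation['image_domain']))

# keep only the out-of-domain examples; input_columns avoids decoding the images
out_of_domain = validation.filter(lambda domain: domain == 'out-of-domain',
                                  input_columns='image_domain')
print(out_of_domain.num_rows)
```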
## 📋 BibTeX entry and citation info
```bibtex
@misc{bromonschenkel2024nocapspt,
    title        = {nocaps Dataset Translation for Portuguese Image Captioning},
    author       = {Bromonschenkel, Gabriel and Oliveira, Hil{\'a}rio and Paix{\~a}o, Thiago M.},
    howpublished = {\url{https://huggingface.co/datasets/laicsiifes/nocaps-pt-br}},
    publisher    = {Hugging Face},
    year         = {2024}
}
```