---
dataset_info:
  config_name: parquet
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: caption
    dtype: string
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: mime_type
    dtype: string
  - name: hash
    dtype: string
  - name: license
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 8655889565
    num_examples: 12249454
  download_size: 3647461171
  dataset_size: 8655889565
configs:
- config_name: parquet
  data_files:
  - split: train
    path: parquet/train-*
task_categories:
- question-answering
language:
- en
- tr
pretty_name: PD12M Turkish
size_categories:
- 10M<n<100M
license: cdla-permissive-2.0
---
# PD12M Turkish

Translated from English to Turkish, based on https://huggingface.co/datasets/Spawning/PD12M. One of the largest text-to-image datasets in the Turkish language.
## Metadata
The metadata is made available through a series of parquet files with the following schema:
- `text`: The translated (Turkish) caption for the image.
- `id`: A unique identifier for the image.
- `url`: The URL of the image.
- `caption`: The original English caption for the image.
- `width`: The width of the image in pixels.
- `height`: The height of the image in pixels.
- `mime_type`: The MIME type of the image file.
- `hash`: The MD5 hash of the image file.
- `license`: The URL of the image license.
- `source`: The source organization of the image.
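Before downloading anything, the metadata can be inspected by streaming a few rows. A minimal sketch, assuming this dataset's Hub repository id in place of the `<repo-id>` placeholder:

```python
from itertools import islice

from datasets import load_dataset

# Stream rows so nothing is downloaded beyond what we look at.
# "<repo-id>" is a placeholder for this dataset's Hub repository id.
ds = load_dataset("<repo-id>", "parquet", split="train", streaming=True)

# Peek at a few rows: the translated caption alongside the original one.
for row in islice(ds, 3):
    print(row["width"], "x", row["height"], "|", row["text"], "|", row["caption"])
```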
## Download Images

The dataset ships metadata only; the images themselves live at the listed URLs. The snippet below fetches them in parallel while mapping over the dataset:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    # Try the URL up to retries + 1 times; return None if every attempt fails.
    image = None
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Fetch a whole batch of URLs concurrently with a thread pool.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["url"]))
    return batch


# "<repo-id>" is a placeholder for this dataset's Hub repository id.
dataset = load_dataset("<repo-id>", "parquet", split="train")

num_threads = 20
dataset = dataset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
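Rows whose download failed end up with `image` set to `None`, so a filter pass removes them. As a hedged follow-up sketch, the `hash` column can also be used to reject corrupted downloads; this assumes the column holds the hex-encoded MD5 digest of the original file bytes, and it reuses `USER_AGENT`, `urllib.request`, `io`, and `PIL.Image` from the block above:

```python
import hashlib

# Drop rows where every retry failed.
dataset = dataset.filter(lambda example: example["image"] is not None)


def fetch_single_image_verified(image_url, expected_md5, timeout=None):
    # Like fetch_single_image, but hash the raw bytes before decoding,
    # so a truncated or replaced file is rejected instead of kept.
    request = urllib.request.Request(image_url, data=None, headers={"user-agent": USER_AGENT})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as req:
            data = req.read()
    except Exception:
        return None
    if hashlib.md5(data).hexdigest() != expected_md5:
        return None  # bytes do not match the metadata hash
    return PIL.Image.open(io.BytesIO(data))
```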