---
dataset_info:
  config_name: parquet
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: caption
    dtype: string
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: mime_type
    dtype: string
  - name: hash
    dtype: string
  - name: license
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 8655889565
    num_examples: 12249454
  download_size: 3647461171
  dataset_size: 8655889565
configs:
- config_name: parquet
  data_files:
  - split: train
    path: parquet/train-*
task_categories:
- question-answering
language:
- en
- tr
pretty_name: PD12M Turkish
size_categories:
- 10M<n<100M
license: cdla-permissive-2.0
---
Translated from English to Turkish from: https://huggingface.co/datasets/Spawning/PD12M

One of the largest text-to-image datasets in the Turkish language.
## Metadata
The metadata is made available through a series of parquet files with the following schema (a loading sketch follows the list):
- `text`: Translated caption for the image.
- `id`: A unique identifier for the image.
- `url`: The URL of the image.
- `caption`: The original English caption for the image.
- `width`: The width of the image in pixels.
- `height`: The height of the image in pixels.
- `mime_type`: The MIME type of the image file.
- `hash`: The MD5 hash of the image file.
- `license`: The URL of the image license.
- `source`: The source organization of the image.
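As a quick check, the schema can be inspected directly from one of the parquet shards. This is a minimal sketch that assumes the repository's `parquet/` folder has been downloaded locally; the glob pattern mirrors the `parquet/train-*` path in the dataset config:

```python
import glob

import pandas as pd

# Read the first locally downloaded metadata shard (the path pattern follows
# the `parquet/train-*` data_files entry in the dataset config).
shard = sorted(glob.glob("parquet/train-*"))[0]
df = pd.read_parquet(shard)

print(df.columns.tolist())
# Expected: ['text', 'id', 'url', 'caption', 'width', 'height',
#            'mime_type', 'hash', 'license', 'source']
print(df.loc[0, ["text", "caption", "url"]])
```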
## Download Images
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    """Download one image, returning a PIL image or None if all attempts fail."""
    image = None
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    """Fetch the images of a batch in parallel from the `url` column."""
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["url"]))
    return batch


# Load the metadata; replace <repo-id> with this dataset's Hugging Face repository id.
dataset = load_dataset("<repo-id>", "parquet", split="train")

num_threads = 20
dataset = dataset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
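Some URLs will inevitably fail to download, in which case `fetch_single_image` returns `None`. A minimal follow-up sketch that drops those rows afterwards:

```python
# Keep only the rows whose image was fetched successfully.
dataset = dataset.filter(lambda example: example["image"] is not None)
print(f"{dataset.num_rows} images downloaded successfully")
```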