---
language:
- fa
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- image-to-text
pretty_name: Flickr30K Fa
tags:
- hezar
dataset_info:
features:
- name: image_path
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 3417564667.896
num_examples: 29146
- name: test
num_bytes: 376609317.44
num_examples: 3236
download_size: 3780108327
dataset_size: 3794173985.336
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
The Flickr30K dataset, filtered and translated to Persian.
This dataset was originally created by **Sajjad Ayoubi** and uploaded to Kaggle at [https://www.kaggle.com/datasets/sajjadayobi360/flickrfa](https://www.kaggle.com/datasets/sajjadayobi360/flickrfa).
This repo contains the same dataset split into train/test sets using a custom sampling criterion, and it can be loaded directly with Hugging Face Datasets or with Hezar.
### Usage
#### Hugging Face Datasets
```
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("hezarai/flickr30k-fa")
```
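Per the dataset card above, each example has an `image_path` column decoded as an image and a `label` column holding the Persian caption. A minimal sketch of inspecting one sample after loading:

```python
# `load_dataset` without a `split` argument returns a DatasetDict
# with "train" and "test" splits; grab the first training example.
sample = dataset["train"][0]

# `image_path` is decoded to a PIL image; `label` is the Persian caption.
image = sample["image_path"]
caption = sample["label"]

print(image.size)  # (width, height) of the image
print(caption)     # Persian caption text
```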
#### Hezar
```
pip install hezar
```
```python
from hezar.data import Dataset
dataset = Dataset.load("hezarai/flickr30k-fa", split="train")
```
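A loaded Hezar dataset can be indexed like a regular PyTorch dataset or wrapped in a `DataLoader`. A minimal sketch, assuming the dataset exposes Hezar's usual `data_collator` attribute for batching (check the Hezar docs if your version differs):

```python
from torch.utils.data import DataLoader

from hezar.data import Dataset

dataset = Dataset.load("hezarai/flickr30k-fa", split="train")

# Wrap in a standard PyTorch DataLoader; `data_collator` batches the
# image/caption pairs (an assumption based on Hezar's typical datasets).
loader = DataLoader(dataset, batch_size=16, shuffle=True, collate_fn=dataset.data_collator)

for batch in loader:
    ...  # batched images and Persian captions
    break
```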