---
language:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: Flickr30k Captions
tags:
- sentence-transformers
dataset_info:
  config_name: pair
  features:
  - name: caption1
    dtype: string
  - name: caption2
    dtype: string
  splits:
  - name: train
    num_bytes: 21319922
    num_examples: 158881
  download_size: 11450890
  dataset_size: 21319922
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train-*
---

# Dataset Card for Flickr30k Captions

This dataset is a collection of pairs of captions written for the same image, collected from Flickr30k. See [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/) for additional information.
This dataset can be used directly with Sentence Transformers to train embedding models.

Note that two captions of the same image are not strict paraphrases: they describe the same scene, but may focus on different details.

## Dataset Subsets

### `pair` subset

* Columns: "caption1", "caption2"
* Column types: `str`, `str`
* Examples:
    ```python
    {
      'caption1': 'A large structure has broken and is laying in a roadway.',
      'caption2': 'A man stands on wooden supports and surveys damage.',
    }
    ```
* Collection strategy: Reading the Flickr30k Captions data from [embedding-training-data](https://huggingface.co/datasets/sentence-transformers/embedding-training-data), which contains lists of captions describing the same image. Each pair of adjacent captions is treated as a positive pair, plus the pair of the last and first caption. So, e.g., 5 captions for one image yield 5 positive pairs.
* Deduplicated: No
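The pairing strategy described above can be sketched as follows; `make_pairs` is a hypothetical helper name used for illustration, not part of the dataset tooling:

```python
def make_pairs(captions):
    """Turn a list of captions for one image into positive pairs:
    each adjacent pair, plus a (last, first) wrap-around pair."""
    if len(captions) < 2:
        return []
    pairs = [
        {"caption1": a, "caption2": b}
        for a, b in zip(captions, captions[1:])
    ]
    pairs.append({"caption1": captions[-1], "caption2": captions[0]})
    return pairs

# Five captions of the same image yield five pairs.
example = make_pairs(["c1", "c2", "c3", "c4", "c5"])
print(len(example))  # 5
```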