---
license: cc-by-4.0
language:
- en
pretty_name: PhotoChat++
size_categories:
- n<1K
multilinguality:
- monolingual
annotation_creators:
- crowdsourced
tags:
- multi-modal dialogue
source_datasets:
- PhotoChat
task_ids:
- conversational
task_categories:
- text-to-image
- image-to-text
splits:
- name: train
  num_examples: 968
dataset_size: 968
---
# Dataset Card for PhotoChat++

> 🚨 Disclaimer: All models and datasets are intended for research purposes only.

## Dataset Description
- **Repository:** [Code](https://github.com/passing2961/DribeR)
- **Paper:** [Large Language Models can Share Images, Too!](https://arxiv.org/abs/2310.14804)
- **Point of Contact:** [Young-Jun Lee](mailto:[email protected])

## Dataset Summary
PhotoChat++ is a publicly available multi-modal dialogue dataset that extends [PhotoChat](https://arxiv.org/abs/2108.01453). For each dialogue, PhotoChat++ provides intent labels drawn from a set of six, triggering sentences that invoke the image-sharing behavior, image descriptions, and salient information (e.g., words or phrases). The dataset is designed to thoroughly assess the image-sharing capability of LLMs, modeled on how humans internally decide when to share an image.

## Languages
English

## Dataset Structure

Field | Type | Description
--- | --- | ---
`dialogue_id` | str | identifier for the dialogue, carrying over the original dialogue identifier from PhotoChat
`dialogue` | list of dict | the dialogue, where each entry is a dict of {message, share_photo, user_id} (from PhotoChat)
`photo_id` | str | identifier for the photo (from PhotoChat)
`photo_url` | str | URL for the photo (from PhotoChat)
`photo_description` | str | description of the photo (from PhotoChat)
`intents` | list of str | all intent labels collected via crowd-sourcing
`trigger_sentences` | list of str | all triggering sentences that invoke the image-sharing behavior, collected via crowd-sourcing
`image_descriptions` | list of str | all image descriptions collected via crowd-sourcing; these differ from the `photo_description` field
`salient_information` | list of str | all salient information (e.g., words or phrases) collected via crowd-sourcing
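
For convenience, here is a minimal loading sketch using the 🤗 `datasets` library. The Hub repository ID below is an assumption for illustration; replace it with this dataset's actual repository ID.

```python
from datasets import load_dataset

# Hypothetical repository ID -- replace with the actual Hub ID of PhotoChat++.
ds = load_dataset("passing2961/photochat_plus", split="train")

example = ds[0]
print(example["dialogue_id"], example["photo_url"])

# Each dialogue turn is a dict of {message, share_photo, user_id} (from PhotoChat).
for turn in example["dialogue"]:
    shared = " [shares photo]" if turn["share_photo"] else ""
    print(f'user {turn["user_id"]}: {turn["message"]}{shared}')

# Crowd-sourced annotations added in PhotoChat++:
print("intents:", example["intents"])
print("trigger sentences:", example["trigger_sentences"])
print("image descriptions:", example["image_descriptions"])
print("salient information:", example["salient_information"])
```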


## Dataset Creation

We created the PhotoChat++ dataset via crowd-sourcing.

## Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2310.14804), in particular its Limitations section.

## Recommendations

PhotoChat++ is constructed via crowd-sourcing on top of dialogues from the PhotoChat dataset, which is released under the CC BY 4.0 International license, so PhotoChat++ is shared under the same license. This license permits commercial use; however, we strongly recommend using our dataset for academic and research purposes.

## Acknowledgements

This work was supported by a grant from the KAIST-KT joint research project through AI Tech Lab, Institute of Convergence Technology, funded by KT [Project No. G01230605, Development of Task-oriented Persona-based Dialogue Generation Combining Multi-modal Interaction and Knowledge Modeling].

## Citation

Please cite our work if you find the resources in this repository useful:
```bibtex
@article{lee2023large,
  title={Large Language Models can Share Images, Too!},
  author={Lee, Young-Jun and Hyeon, Jonghwan and Choi, Ho-Jin},
  journal={arXiv preprint arXiv:2310.14804},
  year={2023}
}
```