---
license: cc-by-4.0
language:
- en
pretty_name: PhotoChat++
size_categories:
- n<1K
multilinguality:
- monolingual
annotation_creators:
- crowd-sourced
tags:
- multi-modal dialogue
source_datasets:
- PhotoChat
task_ids:
- conversational
task_categories:
- text-to-image
- image-to-text
splits:
- name: train
  num_examples: 968
dataset_size: 968
---
# Dataset Card for PhotoChat++

> 🚨 Disclaimer: All models and datasets are intended for research purposes only.

## Dataset Description

- **Repository:** [Code](https://github.com/passing2961/DribeR)
- **Paper:** [Large Language Models can Share Images, Too!](https://arxiv.org/abs/2310.14804)
- **Point of Contact:** [Young-Jun Lee](mailto:[email protected])
## Dataset Summary

PhotoChat++ is a publicly available multi-modal dialogue dataset that extends [PhotoChat](https://arxiv.org/abs/2108.01453). It annotates each dialogue with intent labels (six intent types in total), triggering sentences, image descriptions, and salient information (e.g., words or phrases) that invoke the image-sharing behavior. The purpose of this dataset is to thoroughly assess the image-sharing capability of LLMs based on humans' internal operating systems.
## Languages

English
## Dataset Structure

| field | type | description |
| --- | --- | --- |
| `dialogue_id` | str | the identifier for the dialogue, containing the original dialogue identifier from PhotoChat |
| `dialogue` | list of dict | the dialogue, where each dict entry includes {message, share_photo, user_id} (from PhotoChat) |
| `photo_id` | str | the identifier for the photo (from PhotoChat) |
| `photo_url` | str | the URL for the photo (from PhotoChat) |
| `photo_description` | str | the description of the photo (from PhotoChat) |
| `intents` | list of str | all intents annotated via crowd-sourcing |
| `trigger_sentences` | list of str | all triggering sentences that invoke the image-sharing behavior, annotated via crowd-sourcing |
| `image_descriptions` | list of str | all image descriptions annotated via crowd-sourcing, which differ from the `photo_description` field |
| `salient_information` | list of str | all salient information (e.g., words or phrases) annotated via crowd-sourcing |
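Below is a minimal usage sketch, assuming the data is stored as a local JSON file named `photochat_plus.json` (a hypothetical file name; point it at your own copy of the data). It loads the file with the 🤗 `datasets` library and prints the fields described above.

```python
# Minimal sketch: load PhotoChat++ from a local JSON file and inspect one example.
# The file name "photochat_plus.json" is an assumption; adjust it to your copy of the data.
from datasets import load_dataset

dataset = load_dataset("json", data_files="photochat_plus.json", split="train")

example = dataset[0]
print(example["dialogue_id"])           # original PhotoChat dialogue identifier
for turn in example["dialogue"]:        # each turn: {message, share_photo, user_id}
    print(turn["user_id"], turn["share_photo"], turn["message"])
print(example["intents"])               # crowd-sourced intent labels
print(example["trigger_sentences"])     # sentences that invoke the image-sharing behavior
print(example["image_descriptions"])    # crowd-sourced image descriptions
print(example["salient_information"])   # salient words or phrases
```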
## Dataset Creation

We created the PhotoChat++ dataset via crowd-sourcing.
## Further Details, Social Impacts, and Limitations

For further details, social impacts, and limitations, please refer to our [paper](https://arxiv.org/abs/2310.14804), in particular its Limitations section.
## Recommendations

PhotoChat++ is constructed via crowd-sourcing on top of dialogues from the PhotoChat dataset, which is distributed under the CC BY 4.0 International license, so PhotoChat++ is shared under the same license. This license permits commercial use; however, we strongly recommend using our dataset for academic and research purposes.
## Acknowledgements

This work was supported by a grant from the KAIST-KT joint research project through AI Tech Lab, Institute of Convergence Technology, funded by KT [Project No. G01230605, Development of Task-oriented Persona-based Dialogue Generation Combining Multi-modal Interaction and Knowledge Modeling].
## Citation

Please cite our work if you find the resources in this repository useful:

```
@article{lee2023large,
  title={Large Language Models can Share Images, Too!},
  author={Lee, Young-Jun and Hyeon, Jonghwan and Choi, Ho-Jin},
  journal={arXiv preprint arXiv:2310.14804},
  year={2023}
}
```