---
configs:
- config_name: idiom-detection-task
data_files:
- split: test
path: "idiom_detection_task.csv"
- config_name: metaphor-detection-task
data_files:
- split: test
path: "metaphor_detection_task.csv"
- config_name: simile-detection-task
data_files:
- split: test
path: "simile_detection_task.csv"
- config_name: open-simile-detection-task
data_files:
- split: test
path: "open_simile_detection_task.csv"
- config_name: idiom-retrieval-task
data_files:
- split: test
path: "idiom_retrieval_task.csv"
- config_name: metaphor-retrieval-task
data_files:
- split: test
path: "metaphor_retrieval_task.csv"
- config_name: simile-retrieval-task
data_files:
- split: test
path: "simile_retrieval_task.csv"
- config_name: open-simile-retrieval-task
data_files:
- split: test
path: "open_simile_retrieval_task.csv"
- config_name: idioms-dataset
data_files:
- split: dataset
path: "idioms_dataset.csv"
- config_name: similes-dataset
data_files:
- split: dataset
path: "similes_dataset.csv"
- config_name: metaphors-dataset
data_files:
- split: dataset
path: "metaphors_dataset.csv"
license: cc-by-4.0
language:
- en
tags:
- figurative-language
- multimodal-figurative-language
- commonsense-reasoning
- visual-reasoning
size_categories:
- 1K<n<10K
---
# Dataset Card for IRFL
- [Dataset Description](#dataset-description)
- [Leaderboards](#leaderboards)
- [Colab notebook for IRFL evaluation](#colab-notebook-for-irfl-evaluation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Collection](#dataset-collection)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The IRFL dataset consists of idioms, similes, and metaphors paired with matching figurative and literal images, along with two novel tasks of multimodal figurative language detection and retrieval.
Using human annotation and an automatic pipeline we created, we collected figurative and literal images for textual idioms, metaphors, and similes, and annotated the relation between each image and the figurative phrase it originated from. From these images we built the two tasks.
The detection task evaluates the ability of Vision and Language Pre-Trained Models (VL-PTMs) to choose, out of X candidates, the image that best visualizes the meaning of a figurative expression. The retrieval task examines VL-PTMs' preference for figurative images: given a set of figurative and partially literal images, the model must rank the images by its matching score so that the figurative images come first; performance is measured by precision at k, where k is the number of figurative images in the input.
We evaluated state-of-the-art VL-PTMs and found that the best models achieved 22%, 30%, and 66% accuracy on our detection task for idioms, metaphors, and similes respectively, compared to human accuracy of 97%, 99.7%, and 100%. The best model achieved an F1 score of 61 on the retrieval task.
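For concreteness, the precision-at-k metric used for retrieval can be sketched as follows (an illustrative snippet, not the official evaluation code; see the Colab notebook linked below):

```python
def precision_at_k(ranked_is_figurative: list[bool], k: int) -> float:
    """Precision at k for the IRFL retrieval task.

    ranked_is_figurative: one boolean per candidate image (True if the
    image is figurative), ordered by the model's matching score, best first.
    k: the number of figurative images in the input.
    """
    return sum(ranked_is_figurative[:k]) / k

# Example: 3 figurative images among 5 candidates; the model ranked two
# of them in its top 3, so precision@3 = 2/3.
print(precision_at_k([True, False, True, False, True], k=3))
```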
- **Homepage:** https://irfl-dataset.github.io/
- **Repository:** https://github.com/irfl-dataset/IRFL
- **Paper:** https://arxiv.org/abs/2303.15445
- **Leaderboard:** https://irfl-dataset.github.io/leaderboard
- **Point of Contact:** [email protected]; [email protected]
### Leaderboards
https://irfl-dataset.github.io/leaderboard
### Colab notebook for IRFL evaluation
https://colab.research.google.com/drive/1RfcUhBTHvREx5X7TMY5UAgMYX8NMKy7u?usp=sharing
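The notebook above contains the evaluation code for IRFL. As a rough, unofficial illustration of how a VL-PTM can be scored on the detection task, here is a minimal sketch using CLIP through Hugging Face `transformers` (the model checkpoint and helper name are assumptions, not the paper's exact setup):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pick_best_image(phrase: str, image_paths: list[str]) -> int:
    """Return the index of the candidate image CLIP matches best to the phrase."""
    images = [Image.open(path) for path in image_paths]
    inputs = processor(text=[phrase], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        # logits_per_text has shape (1, num_images): one score per candidate.
        logits = model(**inputs).logits_per_text
    return int(logits.argmax(dim=-1).item())
```

A model's detection accuracy is then the fraction of instances where the chosen index matches the annotated answer image.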
### Languages
English.
## Dataset Structure
### Data Fields
★ - refers to idiom-only fields
† - refers to metaphor-only fields
#### Multimodal Figurative Language Detection task
- query (★): the idiom definition that the answer image originated from.
- distractors: the distractor images.
- answer: the correct image.
- figurative_type: idiom | metaphor | simile
- type: the correct image type (Figurative or Figurative+Literal).
- definition (★): a list of all the definitions of the idiom.
- phrase: the figurative phrase.
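The tasks are exposed as the configs listed in the YAML header above. A minimal loading sketch with 🤗 `datasets` (the Hub path `<hub-user>/IRFL` is a placeholder; substitute this dataset's actual repository id):

```python
from datasets import load_dataset

# "<hub-user>/IRFL" is a placeholder -- replace it with this dataset's
# actual Hugging Face Hub path.
detection = load_dataset("<hub-user>/IRFL", "idiom-detection-task", split="test")

example = detection[0]
print(example["phrase"])       # the figurative phrase
print(example["answer"])       # the correct image
print(example["distractors"])  # the distractor images
```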
#### Multimodal Figurative Language Retrieval task
- type: the competing categories, FvsPL (Figurative images vs. Partial Literal) or FLvsPL (Figurative+Literal images vs. Partial Literal).
- figurative_type: idiom | metaphor | simile
- images_metadata: the metadata of the distractor and answer images.
- first_category: the first-category images (Figurative images for FvsPL, Figurative+Literal images for FLvsPL).
- second_category: the second-category images (Partial Literal).
- definition (★): a list of all the definitions of the idiom.
- theme (†): the theme of the partial-literal distractor. For example, for the metaphor "heart of gold", an image of a gold bar and an image of a human heart will have different theme values.
- phrase: the figurative phrase.
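Putting the retrieval fields and the precision-at-k metric together, one hypothetical scoring loop for a single retrieval instance might look like the sketch below. It assumes `first_category` and `second_category` hold lists of images and that `score_fn` is any phrase-image matching function (for example, the CLIP scores sketched earlier); none of these helper names are part of the official tooling.

```python
def retrieval_precision(example: dict, score_fn) -> float:
    """Rank first-category (figurative) images against second-category
    (partial-literal) images by a phrase-image matching score, then
    compute precision at k, where k is the number of figurative images."""
    candidates = [(img, True) for img in example["first_category"]]
    candidates += [(img, False) for img in example["second_category"]]
    ranked = sorted(candidates,
                    key=lambda pair: score_fn(example["phrase"], pair[0]),
                    reverse=True)
    k = len(example["first_category"])
    return sum(is_figurative for _, is_figurative in ranked[:k]) / k
```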
The idioms, metaphors, and similes datasets contain all the figurative phrases, annotated images, and corresponding metadata.
## Dataset Collection
Using an automatic pipeline we created, we collected figurative and literal images for textual idioms, metaphors, and similes, and annotated the relation between each image and the figurative phrase it originated from.
#### Annotation process
We paid Amazon Mechanical Turk Workers to annotate the relation between each image and phrase (Figurative vs. Literal).
## Considerations for Using the Data
- Idioms: Annotated by five crowdworkers with rigorous qualifications and training.
- Metaphors and Similes: Annotated by three expert team members.
- Detection and Ranking Tasks: Annotated by three crowdworkers not involved in prior IRFL annotations.
### Licensing Information
CC BY 4.0
### Citation Information
```bibtex
@misc{yosef2023irfl,
      title={IRFL: Image Recognition of Figurative Language},
      author={Ron Yosef and Yonatan Bitton and Dafna Shahaf},
      year={2023},
      eprint={2303.15445},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```