---
license: cc-by-nc-sa-4.0
task_categories:
  - text-classification
  - translation
language:
  - en
  - es
pretty_name: IdioTS - Idiomatic Language Test Suite
size_categories:
  - n<1K
---

Dataset Card for IdioTS - Idiomatic Language Test Suite

This repository includes the dataset for idiom detection and translation proposed in our paper.

The first version of this evaluation dataset was created as part of a Master's thesis in NLP under the title "Idiom detection and translation with conversational LLMs". The dataset has since been further curated and improved, and is regularly revised by the author.

Dataset Details

Dataset Description

More detailed information about the dataset can be found in our paper.

  • Curated by: Francesca De Luca Fornaciari
  • License: cc-by-nc-sa-4.0

Uses

This dataset is designed for the assessment of conversational LLMs' capabilities to process figurative language, specifically idiomatic expressions at sentence level.

Direct Use

This dataset can be used for the assessment of conversational LLMs on two tasks related to idiomatic language:

Task 1 (monolingual task): idiom detection in an English sentence.

Task 2 (cross-lingual task): sentence translation from English to Spanish.
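
As a rough illustration only, the minimal sketch below shows how the dataset could be loaded and turned into prompts for the two tasks; the repository id "fdelucaf/IdioTS", the use of the Hugging Face datasets library, and the prompt wording are assumptions for illustration, not the exact setup used in the paper.

# Minimal sketch (assumptions: the dataset can be loaded with the Hugging Face
# datasets library under the repo id "fdelucaf/IdioTS"; the prompt wording is
# illustrative, not the prompting used in the paper).
from datasets import load_dataset

ds = load_dataset("fdelucaf/IdioTS", split="test")  # single "test" split

def detection_prompt(example):
    # Task 1 (monolingual): does the English sentence contain an idiom?
    return (
        "Does the following English sentence contain an idiomatic expression? "
        "Answer Yes or No and, if Yes, quote the idiom.\n"
        f"Sentence: {example['en']}"
    )

def translation_prompt(example):
    # Task 2 (cross-lingual): translate the English sentence into Spanish.
    return f"Translate the following English sentence into Spanish:\n{example['en']}"

first = ds[0]
print(detection_prompt(first))
print(translation_prompt(first))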

Out-of-Scope Use

This dataset is not meant to be used for tasks that differ from the ones specified in "Direct Use".

Dataset Structure

Data Instances

[
  {
    "idiom_id": "idi028",
    "idiom": "jump in the deep end",
    "sentence_id": "idi028-sen01-id",
    "sentence_has_idiom": "True",
    "en": "It's great to see you've jumped into the deep end with this new job.",
    "es": "Es genial que te hayas lanzado a la piscina con este nuevo trabajo."
  },
  {
    "idiom_id": "idi028",
    "idiom": "jump in the deep end",
    "sentence_id": "idi028-sen02-di",
    "sentence_has_idiom": "False",
    "en": "After a month of swimming lessons, the children were confident enough to jump into the deep end of the pool.",
    "es": "Después de un mes de clases de natación los niños tenían la confianza suficiente para tirarse a la parte más profunda de la piscina."
  }
]

Data Fields

  • idiom_id (str): Unique ID assigned to the idiomatic expression.
  • idiom (str): Idiomatic expression.
  • sentence_id (str): Unique ID assigned to the sentence. It is composed of the idiom_id, a sentence-specific id, and a suffix indicating whether the sentence is idiomatic ("id") or a distractor ("di"); see the parsing sketch below.
  • sentence_has_idiom (bool): True/False field indicating whether the original English sentence contains an idiom or not.
  • en (str): Original English sentence.
  • es (str): Spanish sentence (translation).
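
To make these conventions concrete, here is a minimal, illustrative sketch that splits a sentence_id into its components and normalises sentence_has_idiom (which appears as the strings "True"/"False" in the JSON example above); the helper names are not part of the dataset.

# Illustrative helpers (not part of the dataset): parse a sentence_id such as
# "idi028-sen02-di" and normalise sentence_has_idiom, which may arrive as a
# bool or as the strings "True"/"False".
def parse_sentence_id(sentence_id: str) -> dict:
    idiom_id, sentence_num, kind = sentence_id.split("-")
    return {
        "idiom_id": idiom_id,          # e.g. "idi028"
        "sentence_num": sentence_num,  # e.g. "sen02"
        "is_idiomatic": kind == "id",  # "id" = idiomatic, "di" = distractor
    }

def has_idiom(value) -> bool:
    return value if isinstance(value, bool) else str(value).strip().lower() == "true"

print(parse_sentence_id("idi028-sen02-di"))  # is_idiomatic: False (distractor)
print(has_idiom("True"))                     # True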

Data Splits

The dataset contains a single split: test.

Dataset Creation

Curation Rationale

This evaluation dataset was designed and curated by human experts with advanced linguistic knowledge, specifically to assess the ability of LLMs to process figurative language at sentence level. With the release of this dataset, we aim to provide a resource for evaluating the capabilities of conversational LLMs to handle the semantic meanings of multi-word expressions and to distinguish between literal and idiomatic meanings of a potentially idiomatic expression (PIE).

Source Data

The sentence dataset is based on an original list of English idioms. This list was curated by the same author as the dataset. The original English idioms are partly derived from real interactions of the author with native English speakers and partly extracted from the following websites: Amigos Ingleses, The idioms, EF English idioms.

Data Collection and Processing

The dataset contains two types of sentences:

  • Idiomatic sentences.
  • Distractor sentences, i.e., plausible, grammatically and syntactically correct sentences containing a set of words that might belong to an idiomatic expression but are in fact used in a less common, literal way.

Who are the source data producers?

The idiomatic sentences in the dataset were crafted by a group of native English speakers as part of a small-scale, voluntary crowdsourcing effort.

In order to ensure the quality of the generated sentences, the selected collaborators had to fulfil the following requirements:

  • Native English speakers, predominantly of British origin.
  • Demonstrated high linguistic proficiency, attaining at least a C1 level.
  • Language professional profile with a linguistic background (English teachers, linguists, translators, and NLP experts).

The task definition was kept as simple as possible. The collaborators were provided with a spreadsheet extracted from the previously compiled list of idioms (containing just the idiom and an empty cell for the sentence, without any additional context) and were instructed to select a few idioms of their choice and to craft one sentence per chosen idiom. They were asked to produce sentences representative of natural, spontaneous language use by native English speakers, allowing for humorous, personal, or improvised content, provided it resonated authentically with their native speaker experience. An example idiom with its corresponding sentence was included as a model in the email body:

Idiom: "to have bigger fish to fry". Sentence: "I don't have time for your silly stories, I have bigger fish to fry: I have a job interview to prepare for tomorrow!".

The complex task of generating the distractor sentences was undertaken by the authors to ensure both their quality and correctness, while also providing a subtle suggestion of idiomaticity.

Annotations [optional]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

The dataset does not contain any kind of personal or sensitive information.

Bias, Risks, and Limitations

A concerted effort was made to mitigate gender bias within our newly developed resource. Whenever possible, gender-specific terms were either eliminated or neutralised, and a large number of sentences were reformulated using the gender-neutral first person plural ("we"/"us"), second person singular or plural ("you"), or third person plural ("they"). Since gender neutralisation is not always possible due to grammatical or syntactic constraints, meticulous attention was devoted to keeping the representation of feminine and masculine gender terms as balanced as possible throughout the dataset.

No specific measures were taken to mitigate other types of bias that may be present in the data.

Recommendations

[More Information Needed]

Citation [optional]

BibTeX:

@inproceedings{de-luca-fornaciari-etal-2024-hard,
    title = "A Hard Nut to Crack: Idiom Detection with Conversational Large Language Models",
    author = "De Luca Fornaciari, Francesca  and
      Altuna, Bego{\~n}a  and
      Gonzalez-Dios, Itziar  and
      Melero, Maite",
    editor = "Ghosh, Debanjan  and
      Muresan, Smaranda  and
      Feldman, Anna  and
      Chakrabarty, Tuhin  and
      Liu, Emmy",
    booktitle = "Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.figlang-1.5",
    doi = "10.18653/v1/2024.figlang-1.5",
    pages = "35--44",
    abstract = "In this work, we explore idiomatic language processing with Large Language Models (LLMs). We introduce the Idiomatic language Test Suite IdioTS, a dataset of difficult examples specifically designed by language experts to assess the capabilities of LLMs to process figurative language at sentence level. We propose a comprehensive evaluation methodology based on an idiom detection task, where LLMs are prompted with detecting an idiomatic expression in a given English sentence. We present a thorough automatic and manual evaluation of the results and a comprehensive error analysis.",
}

ACL:

Francesca De Luca Fornaciari, Begoña Altuna, Itziar Gonzalez-Dios, and Maite Melero. 2024. A Hard Nut to Crack: Idiom Detection with Conversational Large Language Models. In Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024), pages 35–44, Mexico City, Mexico (Hybrid). Association for Computational Linguistics.

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

Francesca De Luca Fornaciari

Dataset Card Contact

[email protected]