---
license: mit
language:
- fr
size_categories:
- 100K<n<1M
task_categories:
- text-classification
tags:
- textual-entailment
- DFP
- french prompts
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- multilingual-NLI-26lang-2mil7
---
# ling_fr_prompt_textual_entailment
## Summary
**ling_fr_prompt_textual_entailment** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **110,000** rows that can be used for a textual entailment task.
The original data (without prompts) comes from the dataset [multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) by Laurer et al., of which only the French `ling` part (`fr_ling`) has been kept.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
22 prompts were created for this dataset. They are phrased in the indicative mood, in the informal register (tutoiement) and in the formal register (vouvoiement).
```
"""Prendre l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
"""Prends l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
"""Prenez l'énoncé suivant comme vrai : " """+premise+""" "\n Alors l'énoncé suivant : " """+hypothesis+""" " est "vrai", "faux", ou "incertain" ?""",
'"'+premise+'"\nQuestion : Cela implique-t-il que "'+hypothesis+'" ? "vrai", "faux", ou "incertain" ?',
'"'+premise+'"\nQuestion : "'+hypothesis+'" est "vrai", "faux", ou "peut-être" ?',
""" " """+premise+""" "\n D'après le passage précédent, est-il vrai que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nSur la base de ces informations, l'énoncé est-il : " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn gardant à l'esprit le texte ci-dessus, considérez : " """+hypothesis+""" "\n Est-ce que c'est "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn gardant à l'esprit le texte ci-dessus, considére : " """+hypothesis+""" "\n Est-ce que c'est "vrai", "faux", ou "peut-être" ?""",
""" " """+premise+""" "\nEn utilisant uniquement la description ci-dessus et ce que vous savez du monde, " """+hypothesis+""" " est-ce "vrai", "faux", ou "incertain" ?""",
""" " """+premise+""" "\nEn utilisant uniquement la description ci-dessus et ce que tu sais du monde, " """+hypothesis+""" " est-ce "vrai", "faux", ou "incertain" ?""",
"""Étant donné que " """+premise+""" ", s'ensuit-il que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Étant donné que " """+premise+""" ", est-il garanti que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
'Étant donné '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Étant donné '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Sachant que '+premise+', doit-on supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Sachant que '+premise+', dois-je supposer que '+hypothesis+' est "vrai", "faux", ou "incertain" ?',
'Étant donné que '+premise+', il doit donc être vrai que '+hypothesis+' ? "vrai", "faux", ou "incertain" ?',
"""Supposons que " """+premise+""" ", pouvons-nous déduire que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons que " """+premise+""" ", puis-je déduire que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons qu'il est vrai que " """+premise+""" ". Alors, est-ce que " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?""",
"""Supposons qu'il soit vrai que " """+premise+""" ",\n Donc, " """+hypothesis+""" " ? "vrai", "faux", ou "incertain" ?"""
```
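Each template above is a plain Python string expression over `premise` and `hypothesis`. Below is a minimal sketch of how one pair is rendered; the `build_prompts` helper is illustrative (not part of the dataset) and only two of the 22 templates are shown.
```
def build_prompts(premise, hypothesis):
    # Two of the 22 templates listed above; the others follow the same pattern
    return [
        'Étant donné ' + premise + ', doit-on supposer que ' + hypothesis + ' est "vrai", "faux", ou "incertain" ?',
        'Sachant que ' + premise + ', dois-je supposer que ' + hypothesis + ' est "vrai", "faux", ou "incertain" ?',
    ]

print(build_prompts("Il pleut à Bordeaux.", "Il fait beau.")[0])
```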
### Features used in the prompts
In the prompt list above, `premise`, `hypothesis` and `targets` were constructed from:
```
from datasets import load_dataset

def clean(text):
    # Normalize spacing artifacts around punctuation in the source sentences
    return str(text).replace(' . ', '. ').replace(' .', '. ').replace('( ', '(').replace(' )', ')').replace(' , ', ', ').replace(' ,', ', ').replace("' ", "'")

moritz = load_dataset('MoritzLaurer/multilingual-NLI-26lang-2mil7')
ling = moritz['fr_ling']
premises = [clean(p) for p in ling['premise']]
hypotheses = [clean(h) for h in ling['hypothesis']]
# NLI labels as French target words: 0 = entailment, 1 = neutral, 2 = contradiction
targets = [str(l).replace('0', 'vrai').replace('1', 'incertain').replace('2', 'faux') for l in ling['label']]
```
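The prompted rows can then be assembled by pairing each cleaned example with the templates. The sketch below reuses the illustrative `build_prompts` helper from above and assumes xP3-style `inputs`/`targets` column names; the exact pairing used to build DFP is not detailed here.
```
rows = []
for premise, hypothesis, target in zip(premises, hypotheses, targets):
    for prompt in build_prompts(premise, hypothesis):
        # One row per (example, prompt) pair, in the xP3 input/target format
        rows.append({"inputs": prompt, "targets": target})
```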
# Splits
- `train` with 110,000 samples
- no `valid` split
- no `test` split
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/ling_fr_prompt_textual_entailment")
```
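Once loaded, the split can be inspected as follows, assuming the xP3-style `inputs` and `targets` columns:
```
print(dataset["train"].num_rows)      # 110000
print(dataset["train"].column_names)  # expected: ['inputs', 'targets']
print(dataset["train"][0])            # one prompted example with its target
```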
# Citation
## Original data
```
@article{laurer_less_2022,
  title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT}-{NLI}},
  url = {https://osf.io/74b8k},
  language = {en-us},
  urldate = {2022-07-28},
  journal = {Preprint},
  author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
  month = jun,
  year = {2022},
  note = {Publisher: Open Science Framework},
}
```
## This Dataset
```
@misc{centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
  author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
  title = { DFP (Revision 1d24c09) },
  year = 2023,
  url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
  doi = { 10.57967/hf/1200 },
  publisher = { Hugging Face }
}
```
## License
MIT