|
--- |
|
task_categories: |
|
- text-classification |
|
language: |
|
- en |
|
pretty_name: The ICL consistency test |
|
size_categories: |
|
- 100K<n<1M |
|
--- |
|
# The ICL consistency test |
|
|
|
This 🤗 dataset provides data for the [GenBench CBT task 'The ICL consistency test'](https://github.com/GenBench/genbench_cbt/tree/main/src/genbench/tasks/icl_consistency_test).
|
The ICL consistency test measures the consistency of LLM predictions on the same data points across many different equivalent prompting setups. |
|
The score in the associated metric (Cohen's kappa) can be understood as a measure of a model's prediction consistency in the face of task-irrelevant information. |
|
|
|
For an easy evaluation of any 🤗 model, we refer to the code provided in the GenBench task. For in-depth information on the task, we refer to the associated
publications ([Weber et al., 2023](https://arxiv.org/abs/2312.04945), [2023](https://aclanthology.org/2023.conll-1.20/)) and the respective GenBench [doc.md](https://github.com/GenBench/genbench_cbt/blob/main/src/genbench/tasks/icl_consistency_test/doc.md).
|
|
|
Evaluation on the relevant metrics can be done via the _example_evaluation.py_ script in the [GenBench repository](https://github.com/GenBench/genbench_cbt/blob/main/src/genbench/tasks/icl_consistency_test/). |
|
|
|
### Dataset Description |
|
|
|
_Abstract_: The ICL consistency test measures the consistency of LLM predictions on the same data points across many different prompting setups. Different setups are defined by "factors".
On the one hand, factors can be specific attributes of the prompt used, e.g. the number of examples the model is presented with ["n_shots"] or the type of instructions used to wrap a specific datapoint ["Instructions"].
On the other hand, the analysis can be augmented by factors related to the way a model is evaluated (e.g. whether a model is calibrated) or to the type of model that is evaluated (e.g. the number of parameters or instruction tuning). These external factors can
be added to the analysis using the task.add_factor() method. The output metric is Cohen's kappa for each factor, computed across all different conditions. A kappa value close to
1 indicates that a factor does not change the model's predictions, while a value close to 0 indicates that the factor strongly changes them. The ICL consistency test has two subtasks:
one evaluating the ANLI dataset ([Nie et al., 2020](https://aclanthology.org/2020.acl-main.441/)), the other the MNLI dataset ([Williams et al., 2018](https://aclanthology.org/N18-1101/)).
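
As a minimal illustration of the metric, the sketch below compares a model's predictions on the same data points under two levels of a single factor using scikit-learn's `cohen_kappa_score` (all prediction values are invented for the example):

```python
# Minimal illustration of the consistency metric: Cohen's kappa between
# predictions for the same data_IDs under two levels of one factor
# (e.g. many vs. few in-context examples). All values are invented.
from sklearn.metrics import cohen_kappa_score

preds_high_nshots = [0, 1, 1, 2, 0, 1, 2, 0]  # predictions with many shots
preds_low_nshots = [0, 1, 2, 2, 0, 1, 2, 0]   # predictions with few shots

kappa = cohen_kappa_score(preds_high_nshots, preds_low_nshots)
print(f"Cohen's kappa for factor 'n_shots': {kappa:.2f}")
# kappa near 1 -> the factor barely affects predictions;
# kappa near 0 -> the factor strongly changes predictions.
```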
|
|
|
_Size_: Each subtask contains 57,600 examples when using the full set of 600 data_IDs (each data_ID is realised in 96 different setups). The user can choose to reduce the number of evaluated data_IDs.
|
|
|
- **Curated by:** |
|
  - resampling and arrangement were done by [Weber et al., 2023](https://arxiv.org/abs/2312.04945), [2023](https://aclanthology.org/2023.conll-1.20/);

  - original data were curated by [Nie et al., 2020](https://aclanthology.org/2020.acl-main.441/) (ANLI) and [Williams et al., 2018](https://aclanthology.org/N18-1101/) (MNLI);

  - templates were curated by [Bach et al., 2022](https://aclanthology.org/2022.acl-demo.9/) (promptsource).
|
- **Language:** English |
|
|
|
### Dataset Sources
|
|
|
- **Repository:** Data files on [github](https://github.com/LucWeber/icl_consistency_data). |
|
- **Paper:** [Weber et al., 2023](https://arxiv.org/abs/2312.04945), [2023](https://aclanthology.org/2023.conll-1.20/).
|
- **Demo:** Find pre-implemented code to evaluate any 🤗 model on [github](https://github.com/GenBench/genbench_cbt/blob/main/src/genbench/tasks/icl_consistency_test/example_evaluation.py).
|
|
|
## Uses |
|
|
|
In prompting, models are sensitive to task-irrelevant information in their prompts. This test can be used to quantify this sensitivity for any 🤗 model. The ICL consistency test does this by measuring a model's prediction consistency across many different semantically equivalent prompting setups.
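
As a rough sketch of the intended workflow (the GenBench `load_task` entry point is documented in the GenBench repository; the `get_prepared_datasets()` accessor and the field names `input`, `data_ID` and `setup_ID` are assumptions here, with `example_evaluation.py` as the authoritative reference):

```python
# Rough sketch of the evaluation workflow, not a definitive implementation.
# Assumptions: the GenBench load_task() entry point, a get_prepared_datasets()
# accessor and per-row fields "input", "data_ID" and "setup_ID"; see
# example_evaluation.py in the GenBench repository for the real version.
from genbench import load_task
from transformers import pipeline

task = load_task("icl_consistency_test:anli")  # or "icl_consistency_test:mnli"
generator = pipeline("text-generation", model="gpt2")  # any 🤗 model

# Collect one prediction per (data_ID, setup_ID) pair; the metric then
# computes Cohen's kappa per factor across all setups.
predictions = {}
for row in task.get_prepared_datasets():  # assumed accessor
    completion = generator(row["input"], max_new_tokens=5)[0]["generated_text"]
    predictions[(row["data_ID"], row["setup_ID"])] = completion
```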
|
|
|
## Dataset Structure |
|
|
|
|
|
|
[_TBA_] |
|
|
|
## Dataset Creation |
|
|
|
The data are a sample from the [MNLI](https://aclanthology.org/N18-1101/) and [ANLI](https://aclanthology.org/2020.acl-main.441/) datasets, combined with prompt templates from [promptsource](https://aclanthology.org/2022.acl-demo.9/).

Please refer to the original publications' documentation for detailed information on dataset creation.
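
To give a feel for how the templates shape a datapoint, here is a small sketch using the promptsource API (the template name "can we infer" is an assumption; the available names can be listed via `DatasetTemplates("anli").all_template_names`):

```python
# Sketch: rendering an ANLI example with a promptsource template.
# The template name "can we infer" is an assumption; list the real names
# via DatasetTemplates("anli").all_template_names.
from promptsource.templates import DatasetTemplates

templates = DatasetTemplates("anli")
template = templates["can we infer"]

example = {
    "premise": "A dog is running through the park.",
    "hypothesis": "An animal is outside.",
    "label": 0,  # 0 = entailment in ANLI
}

prompt_text, target = template.apply(example)
print(prompt_text)   # the wrapped datapoint as it would appear in a prompt
print("->", target)  # the verbalized gold label
```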
|
|
|
## Bias, Risks, and Limitations |
|
|
|
This dataset contains data from the [MNLI](https://aclanthology.org/N18-1101/) and [ANLI](https://aclanthology.org/2020.acl-main.441/) datasets and inherits the same biases, risks, and limitations.
|
|
|
### Recommendations |
|
|
|
We identify the following limitations of the consistency test: |
|
|
|
1. The number of factors is limited and does not cover all possible factors that might influence predictions. We limited ourselves to factors we deem relevant in order to keep evaluation fast.
|
|
|
2. Currently, the test is only implemented for the ANLI and MNLI datasets.
|
|
|
3. Factors that are external to the dataset but should be considered in the analysis (e.g. _instruction tuning_ or _calibration_) have to be added manually by the user via the task.add_factor() method; please use the GenBench implementation of the dataset, available on [github](https://github.com/GenBench/genbench_cbt/tree/main/src/genbench/tasks/icl_consistency_test). A sketch of this follows below.
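
A hedged sketch of what this could look like (check the GenBench doc.md for the exact `add_factor()` signature; the prediction sets here are placeholders):

```python
# Hypothetical sketch of adding an external factor to the analysis.
# Check the GenBench doc.md for the exact add_factor() signature.
from genbench import load_task

task = load_task("icl_consistency_test:anli")

preds_base = {}      # placeholder: predictions from a base model
preds_instruct = {}  # placeholder: predictions from an instruction-tuned model

# Label the two prediction sets with the value of the external factor so
# that Cohen's kappa can also be computed for "instruction_tuning".
task.add_factor(data=(preds_base, preds_instruct), factor="instruction_tuning")
```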
|
|
|
|
|
## Citation |
|
|
|
This dataset was used in the following publications. If you use it, please consider citing them:
|
|
|
**BibTeX:** |
|
|
|
``` |
|
@inproceedings{weber2023mind, |
|
title={Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning}, |
|
author={Weber, Lucas and Bruni, Elia and Hupkes, Dieuwke}, |
|
booktitle={Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)}, |
|
pages={294--313}, |
|
year={2023} |
|
} |
|
``` |
|
``` |
|
@article{weber2023icl, |
|
title={The ICL Consistency Test}, |
|
author={Weber, Lucas and Bruni, Elia and Hupkes, Dieuwke}, |
|
journal={arXiv preprint arXiv:2312.04945}, |
|
year={2023} |
|
} |
|
``` |
|
|
|
## Dataset Card Authors |
|
[Lucas Weber](https://lucweber.github.io/) |
|
|
|
## Dataset Card Contact |
|
[email protected] |
|
|