---
license: mit
configs:
- config_name: static
  data_files:
  - split: test
    path: static.csv
- config_name: temporal
  data_files:
  - split: test
    path: temporal.csv
- config_name: disputable
  data_files:
  - split: test
    path: disputable.csv
task_categories:
- question-answering
language:
- en
pretty_name: DynamicQA
size_categories:
- 10K<n<100K
---
# DynamicQA
This is the repository for the paper **DynamicQA: Tracing Internal Knowledge Conflicts in Language Models**, accepted at Findings of EMNLP 2024.

Our paper investigates language model behaviour when conflicting knowledge exists within the LM's parameters. We present DynamicQA, a novel dataset containing inherently conflicting data. It consists of three partitions: Static, Disputable 🤷‍♀️, and Temporal 🕰️.

We also evaluate several measures on their ability to reflect the presence of intra-memory conflict: Semantic Entropy and a novel Coherent Persuasion score. You can find the full results in our paper!

The implementation of these measures is available in our GitHub repo!
## Dataset
Our dataset consists of three different partitions.
| Partition  | Number of Questions |
|------------|---------------------|
| Static     | 2500                |
| Temporal   | 2495                |
| Disputable | 694                 |
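
Each partition is exposed as its own config (see the YAML header above). As a minimal loading sketch with the 🤗 `datasets` library, note that the repository id below is a placeholder you should replace with this dataset's actual Hub path:

```python
from datasets import load_dataset

# Config names (static, temporal, disputable) follow the YAML header above.
# "<user>/DynamicQA" is a placeholder for the actual Hub id of this repository.
temporal = load_dataset("<user>/DynamicQA", "temporal", split="test")

print(temporal[0]["question"])
```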
## Details
- **Question**: the `question` column.
- **Answers**: two different answers are available, one in the `obj` column and the other in the `replace_name` column.
- **Context**: the context (`context` column) is masked with `[ENTITY]`. Before providing the context to the LM, replace `[ENTITY]` with either `obj` or `replace_name` (see the sketch below).
- **Number of edits**: the `num_edits` column. This denotes temporality for the Temporal partition and disputability for the Disputable partition.
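
A rough illustration of the context handling described above. This is a minimal sketch: the `make_contexts` helper and the example row are hypothetical, but the column names match the schema described in this card.

```python
def make_contexts(row):
    """Build the two conflicting contexts by filling the [ENTITY] mask.

    The masked context is in the "context" column; the two candidate
    answers are in the "obj" and "replace_name" columns.
    """
    original_context = row["context"].replace("[ENTITY]", row["obj"])
    conflicting_context = row["context"].replace("[ENTITY]", row["replace_name"])
    return original_context, conflicting_context


# Hypothetical example row mirroring the column layout described above
# (not an actual record from the dataset).
row = {
    "question": "Who is the monarch of the United Kingdom?",
    "obj": "Elizabeth II",
    "replace_name": "Charles III",
    "context": "[ENTITY] is the monarch of the United Kingdom.",
}

original_context, conflicting_context = make_contexts(row)
```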