sileod committed on
Commit 54550bf · 1 Parent(s): 00c156c

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -6,6 +6,7 @@ language:
 - en
 ---
 https://github.com/IKMLab/arct2
+```bib
 @inproceedings{niven-kao-2019-probing,
     title = "Probing Neural Network Comprehension of Natural Language Arguments",
     author = "Niven, Timothy and
@@ -18,4 +19,5 @@ https://github.com/IKMLab/arct2
     url = "https://www.aclweb.org/anthology/P19-1459",
     pages = "4658--4664",
     abstract = "We are surprised to find that BERT{'}s peak performance of 77{\%} on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them. This analysis informs the construction of an adversarial dataset on which all models achieve random accuracy. Our adversarial dataset provides a more robust assessment of argument comprehension and should be adopted as the standard in future work.",
-}
+}
+```