Dataset schema (one row per BibTeX entry; 38 columns). Most rows populate only the core bibliographic fields; the remaining metric-like columns are null throughout.

  field                type      range / cardinality
  -----                ----      -------------------
  entry_type           string    4 distinct values
  citation_key         string    length 10 to 110
  title                string    length 6 to 276
  editor               string    723 distinct values
  month                string    69 distinct values
  year                 date      1963-01-01 to 2022-01-01
  address              string    202 distinct values
  publisher            string    41 distinct values
  url                  string    length 34 to 62
  author               string    length 6 to 2,070
  booktitle            string    861 distinct values
  pages                string    length 1 to 12
  abstract             string    length 302 to 2,400
  journal              string    5 distinct values
  volume               string    24 distinct values
  doi                  string    length 20 to 39
  n                    string    3 distinct values
  wer                  string    1 distinct value
  uas                  null      always empty
  language             string    3 distinct values
  isbn                 string    34 distinct values
  recall               null      always empty
  number               string    8 distinct values
  a, b, c, k           null      always empty
  f1                   string    4 distinct values
  r                    string    2 distinct values
  mci                  string    1 distinct value
  p                    string    2 distinct values
  sd                   string    1 distinct value
  female               string    0 values (unused)
  m                    string    0 values (unused)
  food                 string    1 distinct value
  f                    string    1 distinct value
  note                 string    20 distinct values
  __index_level_0__    int64     approx. 22,000 to 106,000
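
As a minimal sketch of how rows with this schema could be loaded and inspected with the Hugging Face datasets library; the repository id below is a placeholder, not the dataset's real name:

# Load and inspect a dataset with the schema above (hypothetical repo id).
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bibtex", split="train")  # placeholder id

print(ds.column_names)                         # mirrors the schema table
print(ds[0]["citation_key"], ds[0]["title"])   # core bibliographic fields

# The schema lists 4 distinct entry types; count rows per type.
print(Counter(ds["entry_type"]))

The rows below are reconstructed into the BibTeX entries they were flattened from; null columns are omitted, and the dataset row index is kept as a trailing comment.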
@inproceedings{nejadgholi-etal-2022-towards,
    title = "Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information",
    author = "Nejadgholi, Isar and Balkir, Esma and Fraser, Kathleen and Kiritchenko, Svetlana",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.18/",
    doi = "10.18653/v1/2022.blackboxnlp-1.18",
    pages = "225--237",
    abstract = "Previous works on the fairness of toxic language classifiers compare the output of models with different identity terms as input features but do not consider the impact of other important concepts present in the context. Here, besides identity terms, we take into account high-level latent features learned by the classifier and investigate the interaction between these features and identity terms. For a multi-class toxic language classifier, we leverage a concept-based explanation framework to calculate the sensitivity of the model to the concept of sentiment, which has been used before as a salient feature for toxic language detection. Our results show that although for some classes, the classifier has learned the sentiment information as expected, this information is outweighed by the influence of identity terms as input features. This work is a step towards evaluating procedural fairness, where unfair processes lead to unfair outcomes. The produced knowledge can guide debiasing techniques to ensure that important concepts besides identity terms are well-represented in training datasets."
}
% __index_level_0__ = 29427
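
Entries like the one above can be parsed back into flat records; a minimal sketch assuming the third-party bibtexparser package (v1 API), with a shortened entry for illustration:

# Parse a BibTeX entry into a dict of field -> value.
import bibtexparser

entry = r"""
@inproceedings{nejadgholi-etal-2022-towards,
    title = "Towards Procedural Fairness",
    author = "Nejadgholi, Isar and Balkir, Esma",
    year = "2022",
    pages = "225--237",
}
"""

db = bibtexparser.loads(entry)
record = db.entries[0]
print(record["ID"])          # citation key
print(record["ENTRYTYPE"])   # "inproceedings"
print(record["pages"])       # "225--237"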
@inproceedings{ingle-etal-2022-investigating,
    title = "Investigating the Characteristics of a Transformer in a Few-Shot Setup: Does Freezing Layers in {R}o{BERT}a Help?",
    author = "Ingle, Digvijay and Tripathi, Rishabh and Kumar, Ayush and Patel, Kevin and Vepa, Jithendra",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.19/",
    doi = "10.18653/v1/2022.blackboxnlp-1.19",
    pages = "238--248",
    abstract = "Transformer based language models have been widely adopted by industrial and research organisations in developing machine learning applications in the presence of limited annotated data. While these models show remarkable results, their functioning in few-shot settings is still poorly understood. Hence, we perform an investigative study to understand the characteristics of such models fine-tuned in few-shot setups. Specifically, we compare the intermediate layer representations obtained from a few-shot model and a pre-trained language model. We observe that pre-trained and few-shot models show similar representations over initial layers, whereas the later layers show a stark deviation. Based on these observations, we propose to freeze the initial Transformer layers to fine-tune the model in a constrained text classification setup with $K$ annotated data points per class, where $K$ ranges from 8 to 64. In our experiments across six benchmark sentence classification tasks, we discover that freezing the initial 50{\%} of Transformer layers not only reduces training time but also surprisingly improves Macro F1 (up to 8{\%}) when compared to fully trainable layers in a few-shot setup. We also observe that this idea of layer freezing can very well be generalized to state-of-the-art few-shot text classification techniques, like DNNC and LM-BFF, leading to significant reduction in training time while maintaining comparable performance."
}
% __index_level_0__ = 29428
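
The layer-freezing recipe this abstract describes can be sketched with the Hugging Face transformers library; a minimal sketch, not the authors' implementation, freezing the embeddings and the first half of roberta-base's 12 encoder layers:

# Freeze the first 50% of RoBERTa's encoder layers before few-shot fine-tuning.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=6  # e.g. a six-class sentence classification task
)

# Freeze embeddings plus encoder layers 0-5 (6 of 12 layers).
for param in model.roberta.embeddings.parameters():
    param.requires_grad = False
for layer in model.roberta.encoder.layer[:6]:
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")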
@inproceedings{vahtola-etal-2022-easy,
    title = "It Is Not Easy To Detect Paraphrases: Analysing Semantic Similarity With Antonyms and Negation Using the New {S}em{A}nto{N}eg Benchmark",
    author = "Vahtola, Teemu and Creutz, Mathias and Tiedemann, J{\"o}rg",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.20/",
    doi = "10.18653/v1/2022.blackboxnlp-1.20",
    pages = "249--262",
    abstract = "We investigate to what extent a hundred publicly available, popular neural language models capture meaning systematically. Sentence embeddings obtained from pretrained or fine-tuned language models can be used to perform particular tasks, such as paraphrase detection, semantic textual similarity assessment or natural language inference. Common to all of these tasks is that paraphrastic sentences, that is, sentences that carry (nearly) the same meaning, should have (nearly) the same embeddings regardless of surface form. We demonstrate that performance varies greatly across different language models when a specific type of meaning-preserving transformation is applied: two sentences should be identified as paraphrastic if one of them contains a negated antonym in relation to the other one, such as {\textquotedblleft}I am not guilty{\textquotedblright} versus {\textquotedblleft}I am innocent{\textquotedblright}. We introduce and release SemAntoNeg, a new test suite containing 3152 entries for probing paraphrasticity in sentences incorporating negation and antonyms. Among other things, we show that language models fine-tuned for natural language inference outperform other types of models, especially the ones fine-tuned to produce general-purpose sentence embeddings, on the test suite. Furthermore, we show that most models designed explicitly for paraphrasing are rather mediocre in our task."
}
% __index_level_0__ = 29429
@inproceedings{malik-johansson-2022-controlling,
    title = "Controlling for Stereotypes in Multimodal Language Model Evaluation",
    author = "Malik, Manuj and Johansson, Richard",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.21/",
    doi = "10.18653/v1/2022.blackboxnlp-1.21",
    pages = "263--271",
    abstract = "We propose a methodology and design two benchmark sets for measuring to what extent language-and-vision language models use the visual signal in the presence or absence of stereotypes. The first benchmark is designed to test for stereotypical colors of common objects, while the second benchmark considers gender stereotypes. The key idea is to compare predictions when the image conforms to the stereotype to predictions when it does not. Our results show that there is significant variation among multimodal models: the recent Transformer-based FLAVA seems to be more sensitive to the choice of image and less affected by stereotypes than older CNN-based models such as VisualBERT and LXMERT. This effect is more discernible in this type of controlled setting than in traditional evaluations where we do not know whether the model relied on the stereotype or the visual signal."
}
% __index_level_0__ = 29430
@inproceedings{hosseini-etal-2022-compositional,
    title = "On the Compositional Generalization Gap of In-Context Learning",
    author = "Hosseini, Arian and Vani, Ankit and Bahdanau, Dzmitry and Sordoni, Alessandro and Courville, Aaron",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.22/",
    doi = "10.18653/v1/2022.blackboxnlp-1.22",
    pages = "272--280",
    abstract = "Pretrained large generative language models have shown great performance on many tasks, but exhibit low compositional generalization abilities. Scaling such models has been shown to improve their performance on various NLP tasks even just by conditioning them on a few examples to solve the task without any fine-tuning (also known as in-context learning). In this work, we look at the gap between the in-distribution (ID) and out-of-distribution (OOD) performance of such models in semantic parsing tasks with in-context learning. In the ID settings, the demonstrations are from the same split ($\textit{test}$ or $\textit{train}$) that the model is being evaluated on, and in the OOD settings, they are from the other split. We look at how the relative generalization gap of in-context learning evolves as models are scaled up. We evaluate four model families (OPT, BLOOM, CodeGen and Codex) on three semantic parsing datasets (CFQ, SCAN and GeoQuery) with different numbers of exemplars, and observe a trend of decreasing relative generalization gap as models are scaled up."
}
% __index_level_0__ = 29431
@inproceedings{amponsah-kaakyire-etal-2022-explaining,
    title = "Explaining Translationese: why are Neural Classifiers Better and what do they Learn?",
    author = "Amponsah-Kaakyire, Kwabena and Pylypenko, Daria and Genabith, Josef and Espa{\~n}a-Bonet, Cristina",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.23/",
    doi = "10.18653/v1/2022.blackboxnlp-1.23",
    pages = "281--296",
    abstract = "Recent work has shown that neural feature- and representation-learning, e.g. BERT, achieves superior performance over traditional manual feature engineering based approaches, with e.g. SVMs, in translationese classification tasks. Previous research did not show $(i)$ whether the difference is because of the features, the classifiers or both, and $(ii)$ what the neural classifiers actually learn. To address $(i)$, we carefully design experiments that swap features between BERT- and SVM-based classifiers. We show that an SVM fed with BERT representations performs at the level of the best BERT classifiers, while BERT learning and using handcrafted features performs at the level of an SVM using handcrafted features. This shows that the performance differences are due to the features. To address $(ii)$ we use integrated gradients and find that $(a)$ there is indication that information captured by hand-crafted features is only a subset of what BERT learns, and $(b)$ part of BERT's top performance results are due to BERT learning topic differences and spurious correlations with translationese."
}
% __index_level_0__ = 29432
@inproceedings{zhang-etal-2022-probing,
    title = "Probing {GPT}-3's Linguistic Knowledge on Semantic Tasks",
    author = "Zhang, Lining and Wang, Mengchen and Chen, Liben and Zhang, Wenxin",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.24/",
    doi = "10.18653/v1/2022.blackboxnlp-1.24",
    pages = "297--304",
    abstract = "GPT-3 has attracted much attention from both academia and industry. However, it is still unclear what GPT-3 has understood or learned, especially in terms of linguistic knowledge. Some studies have shown that linguistic phenomena including negation and tense are hard for language models such as BERT to recognize. In this study, we conduct probing tasks focusing on semantic information. Specifically, we investigate GPT-3's linguistic knowledge on semantic tasks to identify tense, the number of subjects, and the number of objects for a given sentence. We also experiment with different prompt designs and temperatures of the decoding method. Our experimental results suggest that GPT-3 has acquired linguistic knowledge to identify certain semantic information in most cases, but still fails when certain types of disturbance occur in the sentence. We also perform error analysis to summarize some common types of mistakes that GPT-3 has made when dealing with certain semantic information."
}
% __index_level_0__ = 29433
@inproceedings{jurayj-etal-2022-garden,
    title = "Garden Path Traversal in {GPT}-2",
    author = "Jurayj, William and Rudman, William and Eickhoff, Carsten",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.25/",
    doi = "10.18653/v1/2022.blackboxnlp-1.25",
    pages = "305--313",
    abstract = "In recent years, large-scale transformer decoders such as the GPT-x family of models have become increasingly popular. Studies examining the behavior of these models tend to focus only on the output of the language modeling head and avoid analysis of the internal states of the transformer decoder. In this study, we present a collection of methods to analyze the hidden states of GPT-2 and use the model's navigation of garden path sentences as a case study. To enable this, we compile the largest currently available dataset of garden path sentences. We show that Manhattan distances and cosine similarities provide more reliable insights compared to established surprisal methods that analyze next-token probabilities computed by a language modeling head. Using these methods, we find that negating tokens have minimal impact on the model's representations for unambiguous forms of sentences whose ambiguity is solely over what the object of a verb is, but have a more substantial impact on representations for unambiguous sentences whose ambiguity would stem from the voice of a verb. Further, we find that analyzing the decoder model's hidden states reveals periods of ambiguity that might conclude in a garden path effect but happen not to, whereas surprisal analyses routinely miss this detail."
}
% __index_level_0__ = 29434
@inproceedings{ban-etal-2022-testing,
    title = "Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis",
    author = "Ban, Pangbo and Jiang, Yifan and Liu, Tianran and Steinert-Threlkeld, Shane",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.26/",
    doi = "10.18653/v1/2022.blackboxnlp-1.26",
    pages = "314--324",
    abstract = "To what extent do pre-trained language models grasp semantic knowledge regarding the phenomenon of distributivity? In this paper, we introduce DistNLI, a new diagnostic dataset for natural language inference that targets the semantic difference arising from distributivity, and employ the causal mediation analysis framework to quantify the model behavior and explore the underlying mechanism in this semantically-related task. We find that the extent of models' understanding is associated with model size and vocabulary size. We also provide insights into how models encode such high-level semantic knowledge."
}
% __index_level_0__ = 29435
@inproceedings{niu-etal-2022-using,
    title = "Using Roark-Hollingshead Distance to Probe {BERT}'s Syntactic Competence",
    author = "Niu, Jingcheng and Lu, Wenjie and Corlett, Eric and Penn, Gerald",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.27/",
    doi = "10.18653/v1/2022.blackboxnlp-1.27",
    pages = "325--334",
    abstract = "Probing BERT's general ability to reason about syntax is no simple endeavour, primarily because of the uncertainty surrounding how large language models represent syntactic structure. Many prior accounts of BERT's agility as a syntactic tool (Clark et al., 2013; Lau et al., 2014; Marvin and Linzen, 2018; Chowdhury and Zamparelli, 2018; Warstadt et al., 2019, 2020; Hu et al., 2020) have therefore confined themselves to studying very specific linguistic phenomena, and there has still been no definitive answer as to whether BERT {\textquotedblleft}knows{\textquotedblright} syntax. The advent of perturbed masking (Wu et al., 2020) would then seem to be significant, because this is a parameter-free probing method that directly samples syntactic trees from BERT's embeddings. These sampled trees outperform a right-branching baseline, thus providing preliminary evidence that BERT's syntactic competence bests a simple baseline. This baseline is underwhelming, however, and our reappraisal below suggests that this result, too, is inconclusive. We propose RH Probe, an encoder-decoder probing architecture that operates on two probing tasks. We find strong empirical evidence confirming the existence of important syntactic information in BERT, but this information alone appears not to be enough to reproduce syntax in its entirety. Our probe makes crucial use of a conjecture made by Roark and Hollingshead (2008) that a particular lexical annotation that we shall call RH distance is a sufficient encoding of unlabelled binary syntactic trees, and we prove this conjecture."
}
% __index_level_0__ = 29436
@inproceedings{rassin-etal-2022-dalle,
    title = "{DALLE}-2 is Seeing Double: Flaws in Word-to-Concept Mapping in {T}ext2{I}mage Models",
    author = "Rassin, Royi and Ravfogel, Shauli and Goldberg, Yoav",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.28/",
    doi = "10.18653/v1/2022.blackboxnlp-1.28",
    pages = "335--345",
    abstract = "We study the way DALLE-2 maps symbols (words) in the prompt to their references (entities or properties of entities in the generated image). We show that, in stark contrast to the way humans process language, DALLE-2 does not follow the constraint that each word has a single role in the interpretation, and sometimes re-uses the same symbol for different purposes. We collect a set of stimuli that reflect the phenomenon: we show that DALLE-2 depicts both senses of nouns with multiple senses at once; and that a given word can modify the properties of two distinct entities in the image, or can be depicted as one object and also modify the properties of another object, creating a semantic leakage of properties between entities. Taken together, our study highlights the differences between DALLE-2 and human language processing and opens an avenue for future study on the inductive biases of text-to-image models."
}
% __index_level_0__ = 29437
@inproceedings{katakkar-etal-2022-practical,
    title = "Practical Benefits of Feature Feedback Under Distribution Shift",
    author = "Katakkar, Anurag and Yoo, Clay H. and Wang, Weiqin and Lipton, Zachary and Kaushik, Divyansh",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.29/",
    doi = "10.18653/v1/2022.blackboxnlp-1.29",
    pages = "346--355",
    abstract = "In attempts to develop sample-efficient and interpretable algorithms, researchers have explored myriad mechanisms for collecting and exploiting feature feedback, auxiliary annotations provided for training (but not test) instances that highlight salient evidence. Examples include bounding boxes around objects and salient spans in text. Despite its intuitive appeal, feature feedback has not delivered significant gains in practical problems as assessed on iid holdout sets. However, recent works on counterfactually augmented data suggest an alternative benefit of supplemental annotations, beyond interpretability: lessening sensitivity to spurious patterns and consequently delivering gains in out-of-domain evaluations. We speculate that while existing methods for incorporating feature feedback have delivered negligible in-sample performance gains, they may nevertheless provide out-of-domain benefits. Our experiments addressing sentiment analysis show that feature feedback methods perform significantly better on various natural out-of-domain datasets despite comparable in-domain evaluations. By contrast, performance on natural language inference remains comparable. Finally, we compare those tasks where feature feedback does (and does not) help."
}
% __index_level_0__ = 29438
@inproceedings{tang-etal-2022-identifying,
    title = "Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification",
    author = "Tang, Ruixuan and Chen, Hanjie and Ji, Yangfeng",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.30/",
    doi = "10.18653/v1/2022.blackboxnlp-1.30",
    pages = "356--370",
    abstract = "Some recent works observed the instability of post-hoc explanations when input side perturbations are applied to the model. This raises interest in, and concern about, the stability of post-hoc explanations. However, the remaining question is: is the instability caused by the neural network model or the post-hoc explanation method? This work explores the potential source that leads to unstable post-hoc explanations. To separate the influence from the model, we propose a simple output probability perturbation method. Compared to prior input side perturbation methods, the output probability perturbation method can circumvent the neural model's potential effect on the explanations and allow analysis of the explanation method itself. We evaluate the proposed method with three widely-used post-hoc explanation methods (LIME (Ribeiro et al., 2016), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley (Strumbelj and Kononenko, 2010)). The results demonstrate that the post-hoc methods are stable, barely producing discrepant explanations under output probability perturbations. The observation suggests that neural network models may be the primary source of fragile explanations."
}
% __index_level_0__ = 29439
@inproceedings{troshin-chirkova-2022-probing,
    title = "Probing Pretrained Models of Source Codes",
    author = "Troshin, Sergey and Chirkova, Nadezhda",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.31/",
    doi = "10.18653/v1/2022.blackboxnlp-1.31",
    pages = "371--383",
    abstract = "Deep learning models are widely used for solving challenging code processing tasks, such as code generation or code summarization. Traditionally, a specific model architecture was carefully built to solve a particular code processing task. However, recently, general pretrained models such as CodeBERT or CodeT5 have been shown to outperform task-specific models in many applications. While pretrained models are known to learn complex patterns from data, they may fail to understand some properties of source code. To test diverse aspects of code understanding, we introduce a set of diagnostic probing tasks. We show that pretrained models of code indeed contain information about code syntactic structure and the notions of identifiers and namespaces, but they may fail to recognize more complex code properties such as semantic equivalence. We also investigate how probing results are affected by using code-specific pretraining objectives, varying the model size, or finetuning."
}
% __index_level_0__ = 29440
@inproceedings{schouten-etal-2022-probing,
    title = "Probing the representations of named entities in Transformer-based Language Models",
    author = "Schouten, Stefan and Bloem, Peter and Vossen, Piek",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.32/",
    doi = "10.18653/v1/2022.blackboxnlp-1.32",
    pages = "384--393",
    abstract = "In this work we analyze the named entity representations learned by Transformer-based language models. We investigate the role entities play in two tasks: a language modeling task, and a sequence classification task. For this purpose we collect a novel news topic classification dataset with 12 topics called RefNews-12. We perform two complementary methods of analysis. First, we use diagnostic models allowing us to quantify to what degree entity information is present in the hidden representations. Second, we perform entity mention substitution to measure how substitute entities with different properties impact model performance. By controlling for model uncertainty we are able to show that entities are identified, and, depending on the task, play a measurable role in the model's predictions. Additionally, we show that the entities' types alone are not enough to account for this. Finally, we find that the frequency with which entities occur is important for the masked language modeling task, and that the entities' distributions over topics are important for topic classification."
}
% __index_level_0__ = 29441
@inproceedings{rozanova-etal-2022-decomposing,
    title = "Decomposing Natural Logic Inferences for Neural {NLI}",
    author = "Rozanova, Julia and Ferreira, Deborah and Thayaparan, Mokanarangan and Valentino, Marco and Freitas, Andre",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.33/",
    doi = "10.18653/v1/2022.blackboxnlp-1.33",
    pages = "394--403",
    abstract = "In the interest of interpreting neural NLI models and their reasoning strategies, we carry out a systematic probing study which investigates whether these models capture the crucial semantic features central to natural logic: monotonicity and concept inclusion. Correctly identifying valid inferences in downward-monotone contexts is a known stumbling block for NLI performance, subsuming linguistic phenomena such as negation scope and generalized quantifiers. To understand this difficulty, we emphasize monotonicity as a property of a context and examine the extent to which models capture relevant monotonicity information in the vector representations which are intermediate to their decision making process. Drawing on the recent advancement of the probing paradigm, we compare the presence of monotonicity features across various models. We find that monotonicity information is notably weak in the representations of popular NLI models which achieve high scores on benchmarks, and observe that previous improvements to these models based on fine-tuning strategies have introduced stronger monotonicity features together with their improved performance on challenge sets."
}
% __index_level_0__ = 29442
@inproceedings{klubicka-kelleher-2022-probing,
    title = "Probing with Noise: Unpicking the Warp and Weft of Embeddings",
    author = "Klubicka, Filip and Kelleher, John",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.34/",
    doi = "10.18653/v1/2022.blackboxnlp-1.34",
    pages = "404--417",
    abstract = "Improving our understanding of how information is encoded in vector space can yield valuable interpretability insights. Alongside vector dimensions, we argue that it is possible for the vector norm to also carry linguistic information. We develop a method to test this: an extension of the probing framework which allows for relative intrinsic interpretations of probing results. It relies on introducing noise that ablates information encoded in embeddings, grounded in random baselines and confidence intervals. We apply the method to well-established probing tasks and find evidence that confirms the existence of separate information containers in English GloVe and BERT embeddings. Our correlation analysis aligns with the experimental findings that different encoders use the norm to encode different kinds of information: GloVe stores syntactic and sentence length information in the vector norm, while BERT uses it to encode contextual incongruity."
}
% __index_level_0__ = 29443
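
The ablation idea in this abstract, treating the vector norm and the dimension values as separate information containers, can be sketched in a few lines of NumPy. This is one reading of the method, not the authors' code:

# Two noise-based ablations: remove norm information by normalising, or
# remove dimension (direction) information while preserving each vector's norm.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))   # stand-in for GloVe/BERT embeddings

norms = np.linalg.norm(X, axis=1, keepdims=True)

X_no_norm = X / norms              # directions kept, norm information ablated

noise = rng.normal(size=X.shape)
noise /= np.linalg.norm(noise, axis=1, keepdims=True)
X_no_dims = noise * norms          # norms kept, direction information ablated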
@inproceedings{shinoda-etal-2022-look,
    title = "Look to the Right: Mitigating Relative Position Bias in Extractive Question Answering",
    author = "Shinoda, Kazutoshi and Sugawara, Saku and Aizawa, Akiko",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.35/",
    doi = "10.18653/v1/2022.blackboxnlp-1.35",
    pages = "418--425",
    abstract = "Extractive question answering (QA) models tend to exploit spurious correlations to make predictions when a training set has unintended biases. This tendency results in models not being generalizable to examples where the correlations do not hold. Determining the spurious correlations QA models can exploit is crucial in building generalizable QA models in real-world applications; moreover, a method needs to be developed that prevents these models from learning the spurious correlations even when a training set is biased. In this study, we discovered that the relative position of an answer, which is defined as the relative distance from an answer span to the closest question-context overlap word, can be exploited by QA models as superficial cues for making predictions. Specifically, we find that when the relative positions in a training set are biased, the performance on examples with relative positions unseen during training is significantly degraded. To mitigate the performance degradation for unseen relative positions, we propose an ensemble-based debiasing method that does not require prior knowledge about the distribution of relative positions. We demonstrate that the proposed method mitigates the models' reliance on relative positions using the biased and full SQuAD dataset. We hope that this study can help enhance the generalization ability of QA models in real-world applications."
}
% __index_level_0__ = 29444
@inproceedings{riley-chiang-2022-continuum,
    title = "A Continuum of Generation Tasks for Investigating Length Bias and Degenerate Repetition",
    author = "Riley, Darcey and Chiang, David",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.36/",
    doi = "10.18653/v1/2022.blackboxnlp-1.36",
    pages = "426--440",
    abstract = "Language models suffer from various degenerate behaviors. These differ between tasks: machine translation (MT) exhibits length bias, while tasks like story generation exhibit excessive repetition. Recent work has attributed the difference to task constrainedness, but evidence for this claim has always involved many confounding variables. To study this question directly, we introduce a new experimental framework that allows us to smoothly vary task constrainedness, from MT at one end to fully open-ended generation at the other, while keeping all other aspects fixed. We find that: (1) repetition decreases smoothly with constrainedness, explaining the difference in repetition across tasks; (2) length bias surprisingly also decreases with constrainedness, suggesting some other cause for the difference in length bias; (3) across the board, these problems affect the mode, not the whole distribution; (4) the differences cannot be attributed to a change in the entropy of the distribution, since another method of changing the entropy, label smoothing, does not produce the same effect."
}
% __index_level_0__ = 29445
@inproceedings{serikov-etal-2022-universal,
    title = "Universal and Independent: Multilingual Probing Framework for Exhaustive Model Interpretation and Evaluation",
    author = "Serikov, Oleg and Protasov, Vitaly and Voloshina, Ekaterina and Knyazkova, Viktoria and Shavrina, Tatiana",
    editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah",
    booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.blackboxnlp-1.37/",
    doi = "10.18653/v1/2022.blackboxnlp-1.37",
    pages = "441--456",
    abstract = "Linguistic analysis of language models is one of the ways to explain and describe their reasoning, weaknesses, and limitations. In the probing part of the model interpretability research, studies concern individual languages as well as individual linguistic structures. The question arises: are the detected regularities linguistically coherent, or on the contrary, do they dissonate at the typological scale? Moreover, the majority of studies address the inherent set of languages and linguistic structures, leaving the actual typological diversity knowledge out of scope. In this paper, we present and apply a GUI-assisted framework that allows us to easily probe a massive number of languages for all the morphosyntactic features present in the Universal Dependencies data. We show that, reflecting the Anglo-centric trend in NLP over the past years, most of the regularities revealed in the mBERT model are typical of Western European languages. Our framework can be integrated with the existing probing toolboxes, model cards, and leaderboards, allowing practitioners to use and share their familiar probing methods to interpret multilingual models. Thus we propose a toolkit to systematize the multilingual flaws in multilingual models, providing a reproducible experimental setup for 104 languages and 80 morphosyntactic features."
}
% __index_level_0__ = 29446
@inproceedings{boissonnet-etal-2022-explainable,
    title = "Explainable Assessment of Healthcare Articles with {QA}",
    author = "Boissonnet, Alodie and Saeidi, Marzieh and Plachouras, Vassilis and Vlachos, Andreas",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.1/",
    doi = "10.18653/v1/2022.bionlp-1.1",
    pages = "1--9",
    abstract = "The healthcare domain suffers from the spread of poor quality articles on the Internet. While manual efforts exist to check the quality of online healthcare articles, they are not sufficient to assess all those in circulation. Such quality assessment can be automated as a text classification task, however, explanations for the labels are necessary for the users to trust the model predictions. While current explainable systems tackle explanation generation as summarization, we propose a new approach based on question answering (QA) that allows us to generate explanations for multiple criteria using a single model. We show that this QA-based approach is competitive with the current state-of-the-art, and complements summarization-based models for explainable quality assessment. We also introduce a human evaluation protocol more appropriate than automatic metrics for the evaluation of explanation generation models."
}
% __index_level_0__ = 29448
@inproceedings{giorgi-etal-2022-sequence,
    title = "A sequence-to-sequence approach for document-level relation extraction",
    author = "Giorgi, John and Bader, Gary and Wang, Bo",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.2/",
    doi = "10.18653/v1/2022.bionlp-1.2",
    pages = "10--25",
    abstract = "Motivated by the fact that many relations cross the sentence boundary, there has been increasing interest in document-level relation extraction (DocRE). DocRE requires integrating information within and across sentences, capturing complex interactions between mentions of entities. Most existing methods are pipeline-based, requiring entities as input. However, jointly learning to extract entities and relations can improve performance and be more efficient due to shared parameters and training steps. In this paper, we develop a sequence-to-sequence approach, seq2rel, that can learn the subtasks of DocRE (entity extraction, coreference resolution and relation extraction) end-to-end, replacing a pipeline of task-specific components. Using a simple strategy we call entity hinting, we compare our approach to existing pipeline-based methods on several popular biomedical datasets, in some cases exceeding their performance. We also report the first end-to-end results on these datasets for future comparison. Finally, we demonstrate that, under our model, an end-to-end approach outperforms a pipeline-based approach. Our code, data and trained models are available at \url{https://github.com/johngiorgi/seq2rel}. An online demo is available at \url{https://share.streamlit.io/johngiorgi/seq2rel/main/demo.py}."
}
% __index_level_0__ = 29449
@inproceedings{abaho-etal-2022-position,
    title = "Position-based Prompting for Health Outcome Generation",
    author = "Abaho, Micheal and Bollegala, Danushka and Williamson, Paula and Dodd, Susanna",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.3/",
    doi = "10.18653/v1/2022.bionlp-1.3",
    pages = "26--36",
    abstract = "Probing factual knowledge in Pre-trained Language Models (PLMs) using prompts has indirectly implied that language models (LMs) can be treated as knowledge bases. This approach has proved effective, especially when these LMs are fine-tuned towards not just the data, but also the style or linguistic pattern of the prompts themselves. We observe that satisfying a particular linguistic pattern in prompts is an unsustainable, time-consuming constraint in the probing task, especially because prompts are often manually designed and the range of possible prompt template patterns can vary depending on the prompting task. To alleviate this constraint, we propose using a position-attention mechanism to capture positional information of each word in a prompt relative to the mask to be filled, hence avoiding the need to re-construct prompts when the prompts' linguistic pattern changes. Using our approach, we demonstrate the ability of eliciting answers (in a case study on health outcome generation) to not only common prompt templates like Cloze and Prefix but also rare ones, such as Postfix and Mixed patterns whose masks are respectively at the start and in multiple random places of the prompt. Moreover, using various biomedical PLMs, our approach consistently outperforms a baseline in which the default PLMs representation is used to predict masked tokens."
}
% __index_level_0__ = 29450
@inproceedings{farzana-etal-2022-say,
    title = "How You Say It Matters: Measuring the Impact of Verbal Disfluency Tags on Automated Dementia Detection",
    author = "Farzana, Shahla and Deshpande, Ashwin and Parde, Natalie",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.4/",
    doi = "10.18653/v1/2022.bionlp-1.4",
    pages = "37--48",
    abstract = "Automatic speech recognition (ASR) systems usually incorporate postprocessing mechanisms to remove disfluencies, facilitating the generation of clear, fluent transcripts that are conducive to many downstream NLP tasks. However, verbal disfluencies have proved to be predictive of dementia status, although little is known about how various types of verbal disfluencies, or automatically detected disfluencies, affect predictive performance. We experiment with an off-the-shelf disfluency annotator to tag disfluencies in speech transcripts for a well-known cognitive health assessment task. We evaluate the performance of this model on detecting repetitions and corrections or retracing, and measure the influence of gold-annotated versus automatically detected verbal disfluencies on dementia detection through a series of experiments. We find that removing both gold and automatically detected disfluencies negatively impacts dementia detection performance, degrading classification accuracy by 5.6{\%} and 3{\%}, respectively."
}
% __index_level_0__ = 29451
@inproceedings{soleimani-etal-2022-zero,
    title = "Zero-Shot Aspect-Based Scientific Document Summarization using Self-Supervised Pre-training",
    author = "Soleimani, Amir and Nikoulina, Vassilina and Favre, Benoit and Ait Mokhtar, Salah",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.5/",
    doi = "10.18653/v1/2022.bionlp-1.5",
    pages = "49--62",
    abstract = "We study the zero-shot setting for the aspect-based scientific document summarization task. Summarizing scientific documents with respect to an aspect can remarkably improve document assistance systems and readers' experience. However, existing large-scale datasets contain a limited variety of aspects, causing summarization models to over-fit to a small set of aspects and a specific domain. We establish baseline results in zero-shot performance (over unseen aspects and the presence of domain shift), paraphrasing, leave-one-out, and limited supervised samples experimental setups. We propose a self-supervised pre-training approach to enhance the zero-shot performance. We leverage the PubMed structured abstracts to create a biomedical aspect-based summarization dataset. Experimental results on the PubMed and FacetSum aspect-based datasets show promising performance when the model is pre-trained using unlabelled in-domain data."
}
% __index_level_0__ = 29452
@inproceedings{pappas-etal-2022-data,
    title = "Data Augmentation for Biomedical Factoid Question Answering",
    author = "Pappas, Dimitris and Malakasiotis, Prodromos and Androutsopoulos, Ion",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.6/",
    doi = "10.18653/v1/2022.bionlp-1.6",
    pages = "63--81",
    abstract = "We study the effect of seven data augmentation (DA) methods in factoid question answering, focusing on the biomedical domain, where obtaining training instances is particularly difficult. We experiment with data from the BIOASQ challenge, which we augment with training instances obtained from an artificial biomedical machine reading comprehension dataset, or via back-translation, information retrieval, word substitution based on WORD2VEC embeddings, masked language modeling, question generation, or extending the given passage with additional context. We show that DA can lead to very significant performance gains, even when using large pre-trained Transformers, contributing to a broader discussion of if/when DA benefits large pre-trained models. One of the simplest DA methods, WORD2VEC-based word substitution, performed best and is recommended. We release our artificial training instances and code."
}
% __index_level_0__ = 29453
@inproceedings{papanikolaou-etal-2022-slot,
    title = "Slot Filling for Biomedical Information Extraction",
    author = "Papanikolaou, Yannis and Staib, Marlene and Grace, Justin Joshua and Bennett, Francine",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.7/",
    doi = "10.18653/v1/2022.bionlp-1.7",
    pages = "82--90",
    abstract = "Information Extraction (IE) from text refers to the task of extracting structured knowledge from unstructured text. The task typically consists of a series of sub-tasks such as Named Entity Recognition and Relation Extraction. Sourcing entity and relation type specific training data is a major bottleneck in domains with limited resources such as biomedicine. In this work we present a slot filling approach to the task of biomedical IE, effectively replacing the need for entity and relation-specific training data, allowing us to deal with zero-shot settings. We follow the recently proposed paradigm of coupling a Transformer-based bi-encoder, Dense Passage Retrieval, with a Transformer-based reading comprehension model to extract relations from biomedical text. We assemble a biomedical slot filling dataset for both retrieval and reading comprehension and conduct a series of experiments demonstrating that our approach outperforms a number of simpler baselines. We also evaluate our approach end-to-end for standard as well as zero-shot settings. Our work provides a fresh perspective on how to solve biomedical IE tasks, in the absence of relevant training data. Our code, models and datasets are available at \url{https://github.com/tba}."
}
% __index_level_0__ = 29454
@inproceedings{zeng-etal-2022-automatic,
    title = "Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations",
    author = "Zeng, Sihang and Yuan, Zheng and Yu, Sheng",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.8/",
    doi = "10.18653/v1/2022.bionlp-1.8",
    pages = "91--96",
    abstract = "Term clustering is important in biomedical knowledge graph construction. Using similarities between term embeddings is helpful for term clustering. State-of-the-art term embeddings leverage pretrained language models to encode terms, and use synonyms and relation knowledge from knowledge graphs to guide contrastive learning. These embeddings provide close embeddings for terms belonging to the same concept. However, our probing experiments show that these embeddings are not sensitive to minor textual differences, which leads to failure in biomedical term clustering. To alleviate this problem, we adjust the sampling strategy in pretraining term embeddings by providing dynamic hard positive and negative samples during contrastive learning to learn fine-grained representations, which results in better biomedical term clustering. We name our proposed method CODER++, and it has been applied in clustering biomedical concepts in the newly released Biomedical Knowledge Graph named BIOS."
}
% __index_level_0__ = 29455
@inproceedings{yuan-etal-2022-biobart,
    title = "{B}io{BART}: Pretraining and Evaluation of A Biomedical Generative Language Model",
    author = "Yuan, Hongyi and Yuan, Zheng and Gan, Ruyi and Zhang, Jiaxing and Xie, Yutao and Yu, Sheng",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.9/",
    doi = "10.18653/v1/2022.bionlp-1.9",
    pages = "97--109",
    abstract = "Pretrained language models have served as important backbones for natural language processing. Recently, in-domain pretraining has been shown to benefit various domain-specific downstream tasks. In the biomedical domain, natural language generation (NLG) tasks are of critical importance, yet understudied. Approaching natural language understanding (NLU) tasks as NLG achieves satisfying performance in the general domain through constrained language generation or language prompting. We emphasize the lack of in-domain generative language models and the unsystematic generative downstream benchmarks in the biomedical domain, hindering the development of the research community. In this work, we introduce the generative language model BioBART that adapts BART to the biomedical domain. We collate various biomedical language generation tasks including dialogue, summarization, entity linking, and named entity recognition. BioBART pretrained on PubMed abstracts has enhanced performance compared to BART and sets strong baselines on several tasks. Furthermore, we conduct ablation studies on the pretraining tasks for BioBART and find that sentence permutation has negative effects on downstream tasks."
}
% __index_level_0__ = 29456
@inproceedings{naseem-etal-2022-incorporating,
    title = "Incorporating Medical Knowledge to Transformer-based Language Models for Medical Dialogue Generation",
    author = "Naseem, Usman and Bandi, Ajay and Raza, Shaina and Rashid, Junaid and Chakravarthi, Bharathi Raja",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.10/",
    doi = "10.18653/v1/2022.bionlp-1.10",
    pages = "110--115",
    abstract = "Medical dialogue systems have the potential to assist doctors in expanding access to medical care, improving the quality of patient experiences, and lowering medical expenses. The computational methods are still in their early stages and are not ready for widespread application despite their great potential. Existing transformer-based language models have shown promising results but lack domain-specific knowledge. However, to diagnose like doctors, an automatic medical diagnosis necessitates more stringent requirements for the rationality of the dialogue in the context of relevant knowledge. In this study, we propose a new method that addresses the challenges of medical dialogue generation by incorporating medical knowledge into transformer-based language models. We present a method that leverages an external medical knowledge graph and injects triples as domain knowledge into the utterances. Automatic and human evaluation on a publicly available dataset demonstrates that incorporating medical knowledge outperforms several state-of-the-art baseline methods."
}
% __index_level_0__ = 29457
@inproceedings{yan-2022-memory,
    title = "Memory-aligned Knowledge Graph for Clinically Accurate Radiology Image Report Generation",
    author = "Yan, Sixing",
    editor = "Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.11/",
    doi = "10.18653/v1/2022.bionlp-1.11",
    pages = "116--122",
    abstract = "Automatically generating clinically accurate radiology reports from X-ray images is important but challenging. The identification of multi-grained abnormal regions in images and the corresponding abnormalities is difficult for data-driven neural models. In this work, we introduce a Memory-aligned Knowledge Graph (MaKG) of clinical abnormalities to better learn the visual patterns of abnormalities and their relationships by integrating it into a deep model architecture for report generation. We carry out extensive experiments and show that the proposed MaKG deep model can improve the clinical accuracy of the generated reports."
}
% __index_level_0__ = 29458
inproceedings
phan-nguyen-2022-simple
Simple Semantic-based Data Augmentation for Named Entity Recognition in Biomedical Texts
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.12/
Phan, Uyen and Nguyen, Nhung
Proceedings of the 21st Workshop on Biomedical Language Processing
123--129
Data augmentation is important for addressing data sparsity and low-resource settings in NLP. Unlike data augmentation for other tasks, such as sentence-level and sentence-pair ones, data augmentation for named entity recognition (NER) requires preserving the semantics of entities. To that end, in this paper we propose a simple semantic-based data augmentation method for biomedical NER. Our method leverages semantic information from pre-trained language models at both the entity and sentence levels. Experimental results on two datasets, i2b2-2010 (English) and VietBioNER (Vietnamese), show that the proposed method can improve NER performance. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bionlp-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,459
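The preceding record describes semantic-based augmentation for biomedical NER that must keep entity semantics intact. Below is a hedged sketch of one plausible instantiation, not the authors' published method: occasionally replace non-entity context tokens with masked-language-model predictions so the entity spans, and hence their IOB labels, remain valid. The model choice and the replacement policy are assumptions.

```python
# Hedged sketch: entity-preserving augmentation via a masked language model.
# Only tokens labelled "O" may be replaced, so the IOB tags stay aligned.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")  # assumed model choice

def augment(tokens, labels, p=0.15, seed=0):
    rng = random.Random(seed)
    out = list(tokens)
    for i, lab in enumerate(labels):
        if lab != "O" or rng.random() > p:
            continue  # never touch entity tokens; replace context tokens sparingly
        masked = out[:i] + [fill_mask.tokenizer.mask_token] + out[i + 1:]
        preds = fill_mask(" ".join(masked))
        out[i] = preds[0]["token_str"].strip()  # take the top MLM prediction
    return out, labels

tokens = ["The", "patient", "received", "metformin", "for", "diabetes", "."]
labels = ["O", "O", "O", "B-CD", "O", "B-DS", "O"]
print(augment(tokens, labels))
```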
inproceedings
watanabe-etal-2022-auxiliary
Auxiliary Learning for Named Entity Recognition with Multiple Auxiliary Biomedical Training Data
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.13/
Watanabe, Taiki and Ichikawa, Tomoya and Tamura, Akihiro and Iwakura, Tomoya and Ma, Chunpeng and Kato, Tsuneo
Proceedings of the 21st Workshop on Biomedical Language Processing
130--139
Named entity recognition (NER) is one of the elemental technologies used for knowledge extraction from biomedical text. One approach to improving NER is multi-task learning, which learns a model from multiple training datasets. Within multi-task learning, auxiliary learning, which uses auxiliary tasks to improve a target task, has shown higher NER performance than conventional multi-task learning, which improves all tasks simultaneously, while using only a single auxiliary task. We propose Multiple Utilization of NER Corpora Helpful for Auxiliary BLESsing (MUNCHABLES). MUNCHABLES utilizes multiple training datasets as auxiliary training data in two ways: the first fine-tunes the NER model of the target task by sequentially performing auxiliary learning on each auxiliary training dataset, and the second uses all training datasets in a single round of auxiliary learning. We evaluate MUNCHABLES on eight biomedical-related domain NER tasks, where seven training datasets are used as auxiliary training data. The experimental results show that MUNCHABLES achieves higher accuracy than conventional multi-task learning methods on average while showing state-of-the-art accuracy.
null
null
10.18653/v1/2022.bionlp-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,460
inproceedings
cahyawijaya-etal-2022-snp2vec
{SNP}2{V}ec: Scalable Self-Supervised Pre-Training for Genome-Wide Association Study
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.14/
Cahyawijaya, Samuel and Yu, Tiezheng and Liu, Zihan and Zhou, Xiaopu and Mak, Tze Wing Tiffany and Ip, Yuk Yu Nancy and Fung, Pascale
Proceedings of the 21st Workshop on Biomedical Language Processing
140--154
Self-supervised pre-training methods have brought remarkable breakthroughs in the understanding of text, image, and speech. Recent developments in genomics have also adopted these pre-training methods for genome understanding. However, they focus only on understanding haploid sequences, which hinders their applicability to understanding genetic variations, also known as single nucleotide polymorphisms (SNPs), which are crucial for genome-wide association study. In this paper, we introduce SNP2Vec, a scalable self-supervised pre-training approach for understanding SNPs. We apply SNP2Vec to perform long-sequence genomics modeling, and we evaluate the effectiveness of our approach on predicting Alzheimer's disease risk in a Chinese cohort. Our approach significantly outperforms existing polygenic risk score methods and all other baselines, including the model that is trained entirely with haploid sequences.
null
null
10.18653/v1/2022.bionlp-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,461
inproceedings
khandelwal-etal-2022-biomedical
Biomedical {NER} using Novel Schema and Distant Supervision
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.15/
Khandelwal, Anshita and Kar, Alok and Chikka, Veera Raghavendra and Karlapalem, Kamalakar
Proceedings of the 21st Workshop on Biomedical Language Processing
155--160
Biomedical Named Entity Recognition (BMNER) is one of the most important tasks in the field of biomedical text mining. Most work so far on this task has not focused on the identification of discontinuous and overlapping entities, even though they are present in significant fractions in real-life biomedical datasets. In this paper, we introduce a novel annotation schema to capture complex entities, and explore the effects of distant supervision on our deep-learning sequence labelling model. For the BMNER task, our annotation schema outperforms other BIO-based annotation schemes on the same model. We also achieve higher F1-scores than state-of-the-art models on multiple corpora without fine-tuning embeddings, highlighting the efficacy of neural feature extraction using our model.
null
null
10.18653/v1/2022.bionlp-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,462
inproceedings
iinuma-etal-2022-improving
Improving Supervised Drug-Protein Relation Extraction with Distantly Supervised Models
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.16/
Iinuma, Naoki and Miwa, Makoto and Sasaki, Yutaka
Proceedings of the 21st Workshop on Biomedical Language Processing
161--170
This paper proposes novel drug-protein relation extraction models that indirectly utilize distant supervision data. Concretely, instead of adding distant supervision data to the manually annotated training data, our models incorporate distantly supervised models, i.e., relation extraction models trained on distant supervision data. Distantly supervised learning has been proposed to generate a large amount of pseudo-training data at low cost. However, prediction performance remains low due to the inclusion of mislabeled data. Several methods have therefore been proposed to suppress the effects of noisy cases by utilizing some manually annotated training data, but their performance is lower than that of supervised learning on manually annotated data, because mislabeled data that cannot be fully suppressed becomes noise when training the model. To overcome this issue, our methods indirectly utilize distant supervision data together with manually annotated training data. Experimental results on the DrugProt corpus in BioCreative VII Track 1 show that our proposed model can consistently improve the supervised models in different settings.
null
null
10.18653/v1/2022.bionlp-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,463
inproceedings
trieu-etal-2022-named
Named Entity Recognition for Cancer Immunology Research Using Distant Supervision
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.17/
Trieu, Hai-Long and Miwa, Makoto and Ananiadou, Sophia
Proceedings of the 21st Workshop on Biomedical Language Processing
171--177
Cancer immunology research involves several important cell and protein factors. Extracting information about such cells and proteins and the interactions between them from text is crucial in text mining for cancer immunology research. However, there are few available datasets for these entities, and the amount of annotated documents is not sufficient compared with other major named entity types. In this work, we introduce our automatically annotated dataset of key named entities, i.e., T-cells, cytokines, and transcription factors, which are central to recent cancer immunotherapy. The entities are annotated based on the UniProtKB knowledge base using dictionary matching. We build a neural named entity recognition (NER) model trained on this dataset and evaluate it on manually annotated data. Experimental results show that we can achieve a promising NER performance even though our data is automatically annotated. Our dataset also enhances the NER performance when combined with existing data, especially improving results for less investigated named entities such as cytokines and transcription factors.
null
null
10.18653/v1/2022.bionlp-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,464
inproceedings
witte-cimiano-2022-intra
Intra-Template Entity Compatibility based Slot-Filling for Clinical Trial Information Extraction
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.18/
Witte, Christian and Cimiano, Philipp
Proceedings of the 21st Workshop on Biomedical Language Processing
178--192
We present a deep learning based information extraction system that can extract the design and results of a published abstract describing a Randomized Controlled Trial (RCT). In contrast to other approaches, our system does not regard the PICO elements as flat objects or labels but as structured objects. We thus model the task as one of filling a set of templates and slots; our two-step approach recognizes relevant slot candidates as a first step and assigns them to a corresponding template as a second step, relying on a learned pairwise scoring function that models the compatibility of the different slot values. We evaluate the approach on a dataset of 211 manually annotated abstracts for type 2 diabetes and glaucoma, showing the positive impact of modelling intra-template entity compatibility. As its main benefit, our approach yields a structured object for every RCT abstract that supports the aggregation and summarization of clinical trial results across published studies and can facilitate the task of creating a systematic review or meta-analysis.
null
null
10.18653/v1/2022.bionlp-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,465
inproceedings
carrino-etal-2022-pretrained
Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.19/
Carrino, Casimiro Pio and Llop, Joan and P{\`a}mies, Marc and Guti{\'e}rrez-Fandi{\~n}o, Asier and Armengol-Estap{\'e}, Jordi and Silveira-Ocampo, Joaqu{\'i}n and Valencia, Alfonso and Gonzalez-Agirre, Aitor and Villegas, Marta
Proceedings of the 21st Workshop on Biomedical Language Processing
193--199
This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.
null
null
10.18653/v1/2022.bionlp-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,466
inproceedings
amin-etal-2022-shot
Few-Shot Cross-lingual Transfer for Coarse-grained De-identification of Code-Mixed Clinical Texts
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.20/
Amin, Saadullah and Pokaratsiri Goldstein, Noon and Wixted, Morgan and Garcia-Rudolph, Alejandro and Mart{\'i}nez-Costa, Catalina and Neumann, Guenter
Proceedings of the 21st Workshop on Biomedical Language Processing
200--211
Despite the advances in digital healthcare systems offering curated structured knowledge, much of the critical information still lies in large volumes of unlabeled and unstructured clinical texts. These texts, which often contain protected health information (PHI), are exposed to information extraction tools for downstream applications, risking patient identification. Existing works in de-identification rely on using large-scale annotated corpora in English, which often are not suitable in real-world multilingual settings. Pre-trained language models (LMs) have shown great potential for cross-lingual transfer in low-resource settings. In this work, we empirically show the few-shot cross-lingual transfer property of LMs for named entity recognition (NER) and apply it to solve a low-resource and real-world challenge of code-mixed (Spanish-Catalan) clinical notes de-identification in the stroke domain. We annotate a gold evaluation dataset to assess few-shot setting performance where we only use a few hundred labeled examples for training. Our model improves the zero-shot F1-score from 73.7{\%} to 91.2{\%} on the gold evaluation set when adapting Multilingual BERT (mBERT) (CITATION) from the MEDDOCAN (CITATION) corpus with our few-shot cross-lingual target corpus. When generalized to an out-of-sample test set, the best model achieves a human-evaluation F1-score of 97.2{\%}.
null
null
10.18653/v1/2022.bionlp-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,467
inproceedings
li-etal-2022-vpai
{VPAI}{\_}{L}ab at {M}ed{V}id{QA} 2022: A Two-Stage Cross-modal Fusion Method for Medical Instructional Video Classification
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.21/
Li, Bin and Weng, Yixuan and Xia, Fei and Sun, Bin and Li, Shutao
Proceedings of the 21st Workshop on Biomedical Language Processing
212--219
This paper introduces the VPAI{\_}Lab team's approach to BioNLP 2022 shared task 1, Medical Video Classification (MedVidCL). Given an input video, the MedVidCL task aims to correctly classify it into one of the following three categories: Medical Instructional, Medical Non-instructional, and Non-medical. Inspired by the dataset construction process, we divide the classification process into two stages. The first stage classifies videos into medical and non-medical videos. In the second stage, samples classified as medical videos are further classified into instructional and non-instructional videos. In addition, we propose a cross-modal fusion method for video classification that fuses text features (question and subtitles) from pre-trained language models with visual features from image frames. Specifically, we use textual information to concatenate and query the visual information to obtain better feature representations. Extensive experiments show that the proposed method significantly outperforms the official baseline method by 15.4{\%} in F1 score, demonstrating its effectiveness. Finally, the online results show that our method ranks first on the unseen online test set. All the experimental code is open-sourced at \url{https://github.com/Lireanstar/MedVidCL}.
null
null
10.18653/v1/2022.bionlp-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,468
inproceedings
bishop-etal-2022-gencomparesum
{G}en{C}ompare{S}um: a hybrid unsupervised summarization method using salience
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.22/
Bishop, Jennifer and Xie, Qianqian and Ananiadou, Sophia
Proceedings of the 21st Workshop on Biomedical Language Processing
220--240
Text summarization (TS) is an important NLP task. Pre-trained Language Models (PLMs) have been used to improve the performance of TS. However, PLMs are limited by their need for labelled training data and by their attention mechanism, which often makes them unsuitable for use on long documents. To this end, we propose a hybrid, unsupervised, abstractive-extractive approach, in which we walk through a document, generating salient textual fragments representing its key points. We then select the most important sentences of the document by choosing those most similar to the generated texts, with similarity calculated using BERTScore. We evaluate the efficacy of generating and using salient textual fragments to guide extractive summarization on documents from the biomedical and general scientific domains. We compare the performance between long and short documents using different generative text models, which are finetuned to generate relevant queries or document titles. We show that our hybrid approach outperforms existing unsupervised methods, as well as state-of-the-art supervised methods, despite not needing a vast amount of labelled training data. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bionlp-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,469
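The GenCompareSum record above selects the document sentences most similar to generated salient fragments, scored with BERTScore. Here is a minimal, hedged sketch of that selection step using the bert-score package; the fragment generator is stubbed out, and the top-k policy is an assumption rather than the paper's exact procedure.

```python
# Hedged sketch: extractive sentence selection guided by generated fragments.
# Each sentence is scored against every fragment with BERTScore F1; the
# sentences with the highest maximum similarity form the summary.
from bert_score import score

def select_sentences(sentences, fragments, k=3):
    sims = []
    for sent in sentences:
        # Compare one sentence against all fragments in a single batched call.
        _, _, f1 = score([sent] * len(fragments), fragments, lang="en", verbose=False)
        sims.append(f1.max().item())
    ranked = sorted(range(len(sentences)), key=lambda i: sims[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(ranked)]  # keep original document order

sentences = ["First document sentence.", "Second one.", "A third sentence."]
fragments = ["generated key point A", "generated key point B"]  # e.g., from T5
print(select_sentences(sentences, fragments, k=2))
```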
inproceedings
singha-roy-mercer-2022-biocite
{B}io{C}ite: A Deep Learning-based Citation Linkage Framework for Biomedical Research Articles
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.23/
Singha Roy, Sudipta and Mercer, Robert E.
Proceedings of the 21st Workshop on Biomedical Language Processing
241--251
Research papers reflect scientific advances. Citations are widely used in research publications to support the new findings and show their benefits, while also regulating the information flow to make the contents clearer for the audience. A citation in a research article refers to the information's source, but not the specific text span from that source article. In biomedical research articles, this task is challenging as the same chemical or biological component can be represented in multiple ways in different papers from various domains. This paper suggests a mechanism for linking citing sentences in a publication with cited sentences in referenced sources. The framework presented here pairs the citing sentence with all of the sentences in the reference text, and then tries to retrieve the semantically equivalent pairs. These semantically related sentences from the reference paper are chosen as the cited statements. This effort involves designing a citation linkage framework utilizing sequential and tree-structured siamese deep learning models. This paper also provides a method to create a synthetic corpus for such a task.
null
null
10.18653/v1/2022.bionlp-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,470
inproceedings
liang-etal-2022-low
Low Resource Causal Event Detection from Biomedical Literature
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.24/
Liang, Zhengzhong and Noriega-Atala, Enrique and Morrison, Clayton and Surdeanu, Mihai
Proceedings of the 21st Workshop on Biomedical Language Processing
252--263
Recognizing causal precedence relations among the chemical interactions in biomedical literature is crucial to understanding the underlying biological mechanisms. However, detecting such causal relations can be hard because: (1) such causal relations among events are often not explicitly expressed by particular phrases but implied by very diverse expressions in the text, and (2) annotating causal relation detection datasets requires considerable expert knowledge and effort. In this paper, we propose a strategy to address both challenges by training neural models with in-domain pre-training and knowledge distillation. We show that, by using a very limited amount of labeled data and a sufficient amount of unlabeled data, the neural models outperform previous baselines on the causal precedence detection task, and are ten times faster at inference compared to the BERT base model.
null
null
10.18653/v1/2022.bionlp-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,471
inproceedings
gupta-demner-fushman-2022-overview
Overview of the {M}ed{V}id{QA} 2022 Shared Task on Medical Video Question-Answering
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.25/
Gupta, Deepak and Demner-Fushman, Dina
Proceedings of the 21st Workshop on Biomedical Language Processing
264--274
In this paper, we present an overview of the MedVidQA 2022 shared task, collocated with the 21st BioNLP workshop at ACL 2022. The shared task addressed two of the challenges faced by medical video question answering: (i) a video classification task that explores new approaches to medical video understanding (labeling), and (ii) a visual answer localization task. Visual answer localization refers to the identification of the relevant temporal segments (start and end timestamps) in the video where the answer to the medical question is being shown or illustrated. A total of thirteen teams participated in the shared task challenges, with eleven system descriptions submitted to the workshop. The descriptions present monomodal and multi-modal approaches developed for medical video classification and visual answer localization. This paper describes the tasks, the datasets, evaluation metrics, and baseline systems for both tasks. Finally, the paper summarizes the techniques and results of the evaluation of the various approaches explored by the participating teams.
null
null
10.18653/v1/2022.bionlp-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,472
inproceedings
richie-etal-2022-inter
Inter-annotator agreement is not the ceiling of machine learning performance: Evidence from a comprehensive set of simulations
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.26/
Richie, Russell and Grover, Sachin and Tsui, Fuchiang (Rich)
Proceedings of the 21st Workshop on Biomedical Language Processing
275--284
It is commonly claimed that inter-annotator agreement (IAA) is the ceiling of machine learning (ML) performance, i.e., that the agreement between an ML system's predictions and an annotator cannot be higher than the agreement between two annotators. Although Boguslav {\&} Cohen (2017) showed that this claim is falsified by many real-world ML systems, the claim has persisted. As a complement to this real-world evidence, we conducted a comprehensive set of simulations, and show that an ML model can beat IAA even if (and especially if) annotators are noisy and differ in their underlying classification functions, as long as the ML model is reasonably well-specified. Although the latter condition has long been elusive, leading ML models to underperform IAA, we anticipate that this condition will be increasingly met in the era of big data and deep learning. Our work has implications for (1) maximizing the value of machine learning, (2) adherence to ethical standards in computing, and (3) economical use of annotated resources, which is paramount in settings where annotation is especially expensive, like biomedical natural language processing. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bionlp-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,473
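The preceding record argues that a well-specified model can exceed inter-annotator agreement when annotators are noisy. The following self-contained toy simulation illustrates that spirit (it is not the authors' exact setup): two annotators independently flip a true label with some probability, a logistic regression is trained on one annotator's labels, and agreement is measured with Cohen's kappa.

```python
# Hedged toy simulation: a model's agreement with annotator 2 can exceed the
# agreement between annotators 1 and 2 when both are independently noisy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n, noise = 5000, 0.2

X = rng.normal(size=(n, 5))
y_true = (X @ np.array([1.5, -2.0, 0.7, 0.0, 0.3]) > 0).astype(int)

def annotate(y):
    # Each annotator flips the true label independently with probability `noise`.
    flips = rng.random(len(y)) < noise
    return np.where(flips, 1 - y, y)

ann1, ann2 = annotate(y_true), annotate(y_true)

model = LogisticRegression().fit(X[:4000], ann1[:4000])  # trained on noisy labels
pred = model.predict(X[4000:])

print("IAA   kappa(ann1, ann2):", cohen_kappa_score(ann1[4000:], ann2[4000:]))
print("Model kappa(pred, ann2):", cohen_kappa_score(pred, ann2[4000:]))
```

With enough data the model recovers the underlying classification function, so its agreement with annotator 2 is limited by one noise process rather than two, and the second kappa is typically the larger.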
inproceedings
das-etal-2022-conversational
Conversational Bots for Psychotherapy: A Study of Generative Transformer Models Using Domain-specific Dialogues
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.27/
Das, Avisha and Selek, Salih and Warner, Alia R. and Zuo, Xu and Hu, Yan and Kuttichi Keloth, Vipina and Li, Jianfu and Zheng, W. Jim and Xu, Hua
Proceedings of the 21st Workshop on Biomedical Language Processing
285--297
Conversational bots have become non-traditional methods for therapy among individuals suffering from psychological illnesses. Leveraging deep neural generative language models, we propose a deep trainable neural conversational model for therapy-oriented response generation. We apply transfer learning during training on therapy- and counseling-based data from Reddit and AlexanderStreet. This was done to adapt the existing generative models {--} GPT2 and DialoGPT {--} to the task of automated dialog generation. Through quantitative evaluation of linguistic quality, we observe that the dialog generation model DialoGPT (345M), with transfer learning on video data, attains scores similar to a human response baseline. However, human evaluation of the conversational bots' responses shows mostly generic advice or information sharing instead of therapeutic interaction.
null
null
10.18653/v1/2022.bionlp-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,474
inproceedings
wang-etal-2022-beeds
{BEEDS}: Large-Scale Biomedical Event Extraction using Distant Supervision and Question Answering
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.28/
Wang, Xing David and Leser, Ulf and Weber, Leon
Proceedings of the 21st Workshop on Biomedical Language Processing
298--309
Automatic extraction of event structures from text is a promising way to extract important facts from the ever-growing amount of biomedical literature. We propose BEEDS, a new approach to mining event structures from PubMed based on a question-answering paradigm. Using a three-step pipeline comprising a document retriever, a document reader, and an entity normalizer, BEEDS is able to fully automatically extract event triples involving a query protein or gene and to store this information directly in a knowledge base. BEEDS applies a transformer-based architecture for event extraction and uses distant supervision to augment the scarce training data in event mining. In a knowledge base population setting, it outperforms a strong baseline in finding post-translational modification events consisting of enzyme-substrate-site triples while achieving competitive results in extracting binary relations consisting of protein-protein and protein-site interactions.
null
null
10.18653/v1/2022.bionlp-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,475
inproceedings
kim-nakashole-2022-data
Data Augmentation for Rare Symptoms in Vaccine Side-Effect Detection
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.29/
Kim, Bosung and Nakashole, Ndapa
Proceedings of the 21st Workshop on Biomedical Language Processing
310--315
We study the problem of entity detection and normalization applied to patient self-reports of symptoms that arise as side-effects of vaccines. Our application domain presents unique challenges that render traditional classification methods ineffective: the number of entity types is large; and many symptoms are rare, resulting in a long-tail distribution of training examples per entity type. We tackle these challenges with an autoregressive model that generates standardized names of symptoms. We introduce a data augmentation technique to increase the number of training examples for rare symptoms. Experiments on real-life patient vaccine symptom self-reports show that our approach outperforms strong baselines, and that additional examples improve performance on the long-tail entities.
null
null
10.18653/v1/2022.bionlp-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,476
inproceedings
mitrofan-pais-2022-improving
Improving {R}omanian {B}io{NER} Using a Biologically Inspired System
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.30/
Mitrofan, Maria and Pais, Vasile
Proceedings of the 21st Workshop on Biomedical Language Processing
316--322
Recognition of named entities present in text is an important step towards information extraction and natural language understanding. This work presents a named entity recognition system for the Romanian biomedical domain. The system makes use of a new and extended version of SiMoNERo corpus, that is open sourced. Also, the best system is available for direct usage in the RELATE platform.
null
null
10.18653/v1/2022.bionlp-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,477
inproceedings
sazzed-2022-banglabiomed
{B}angla{B}io{M}ed: A Biomedical Named-Entity Annotated Corpus for {B}angla ({B}engali)
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.31/
Sazzed, Salim
Proceedings of the 21st Workshop on Biomedical Language Processing
323--329
Recognizing biomedical entities in the text has significance in biomedical and health science research, as it benefits myriad downstream tasks, including entity linking, relation extraction, or entity resolution. While English and a few other widely used languages enjoy ample resources for automatic biomedical entity recognition, it is not the case for Bangla, a low-resource language. On that account, in this paper, we introduce BanglaBioMed, a Bangla biomedical named entity (NE) annotated dataset in standard IOB format, the first of its kind, consisting of over 12000 tokens annotated with the biomedical entities. The corpus is created by collecting Bangla text from a list of health articles and then annotated with four distinct types of entities: Anatomy (AN), Chemical and Drugs (CD), Disease and Symptom (DS), and Medical Procedure (MP). We provide the details of the entire data collection and annotation procedure and illustrate various statistics of the created corpus. Our developed corpus is a much-needed addition to the Bangla NLP resource that will facilitate biomedical NLP research in Bangla.
null
null
10.18653/v1/2022.bionlp-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,478
inproceedings
michalopoulos-etal-2022-icdbigbird
{ICDB}ig{B}ird: A Contextual Embedding Model for {ICD} Code Classification
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.32/
Michalopoulos, George and Malyska, Michal and Sahar, Nicola and Wong, Alexander and Chen, Helen
Proceedings of the 21st Workshop on Biomedical Language Processing
330--336
The International Classification of Diseases (ICD) system is the international standard for classifying diseases and procedures during a healthcare encounter and is widely used for healthcare reporting and management purposes. Assigning correct codes for clinical procedures is important for clinical, operational and financial decision-making in healthcare. Contextual word embedding models have achieved state-of-the-art results in multiple NLP tasks. However, these models have yet to achieve state-of-the-art results in the ICD classification task, since one of their main disadvantages is that they can only process documents that contain a small number of tokens, which is rarely the case with real patient notes. In this paper, we introduce ICDBigBird, a BigBird-based model that combines a Graph Convolutional Network (GCN), which takes advantage of the relations between ICD codes to create {\textquoteleft}enriched' representations of their embeddings, with a BigBird contextual model that can process larger documents. Our experiments on a real-world clinical dataset demonstrate the effectiveness of our BigBird-based model on the ICD classification task, as it outperforms the previous state-of-the-art models.
null
null
10.18653/v1/2022.bionlp-1.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,479
inproceedings
ngai-rudzicz-2022-doctor
Doctor {XA}v{I}er: Explainable Diagnosis on Physician-Patient Dialogues and {XAI} Evaluation
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.33/
Ngai, Hillary and Rudzicz, Frank
Proceedings of the 21st Workshop on Biomedical Language Processing
337--344
We introduce Doctor XAvIer {---} a BERT-based diagnostic system that extracts relevant clinical data from transcribed patient-doctor dialogues and explains predictions using feature attribution methods. We present a novel performance plot and evaluation metric for feature attribution methods {---} the Feature Attribution Dropping (FAD) curve and its Normalized Area Under the Curve (N-AUC). FAD curve analysis shows that integrated gradients outperforms Shapley values in explaining diagnosis classification. Doctor XAvIer outperforms the baseline with 0.97 F1-score in named entity recognition and symptom pertinence classification and 0.91 F1-score in diagnosis classification. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bionlp-1.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,480
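The Doctor XAvIer record introduces the Feature Attribution Dropping (FAD) curve and its Normalized Area Under the Curve (N-AUC). Below is a hedged numpy sketch of the general idea: drop features in decreasing order of attribution, track accuracy after each drop, and aggregate the resulting curve. The toy model, the use of absolute weights as a stand-in attribution, and the normalization are assumptions, not the paper's definitions.

```python
# Hedged sketch of a Feature Attribution Dropping (FAD) curve: zero out features
# in decreasing order of attribution and record accuracy after each drop. A more
# faithful attribution should make accuracy fall faster.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
attribution = np.abs(clf.coef_[0])       # stand-in attribution: |weights|
order = np.argsort(-attribution)         # most important features first

accs = [clf.score(X, y)]
X_drop = X.copy()
for j in order:
    X_drop[:, j] = 0.0                   # "drop" a feature by zeroing it out
    accs.append(clf.score(X_drop, y))

fad = np.array(accs)
n_auc = fad.mean() / fad[0]              # assumed (Riemann-style) normalization
print("FAD accuracies:", np.round(fad, 3))
print("N-AUC:", round(float(n_auc), 3))
```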
inproceedings
dhrangadhariya-muller-2022-distant
{DISTANT}-{CTO}: A Zero Cost, Distantly Supervised Approach to Improve Low-Resource Entity Extraction Using Clinical Trials Literature
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.34/
Dhrangadhariya, Anjani and M{\"u}ller, Henning
Proceedings of the 21st Workshop on Biomedical Language Processing
345--358
PICO recognition is an information extraction task for identifying participant, intervention, comparator, and outcome information from clinical literature. Manually identifying PICO information is the most time-consuming step for conducting systematic reviews (SR), which is already labor-intensive. A lack of diversified and large, annotated corpora restricts innovation and adoption of automated PICO recognition systems. The largest-available PICO entity/span corpus is manually annotated, which is too expensive for a majority of the scientific community. To break through the bottleneck, we propose DISTANT-CTO, a novel distantly supervised PICO entity extraction approach using the clinical trials literature, to generate a massive weakly-labeled dataset with more than a million {\textquoteleft}Intervention' and {\textquoteleft}Comparator' entity annotations. We train distant NER (named-entity recognition) models using this weakly-labeled dataset and demonstrate that it outperforms even the sophisticated models trained on the manually annotated dataset, with a 2{\%} F1 improvement over the Intervention entity of the PICO benchmark and more than 5{\%} improvement when combined with the manually annotated dataset. We investigate the generalizability of our approach and gain an impressive F1 score on another domain-specific PICO benchmark. The approach is not only zero-cost but is also scalable for a constant stream of PICO entity annotations.
null
null
10.18653/v1/2022.bionlp-1.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,481
inproceedings
tang-etal-2022-echogen
{E}cho{G}en: Generating Conclusions from Echocardiogram Notes
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.35/
Tang, Liyan and Kooragayalu, Shravan and Wang, Yanshan and Ding, Ying and Durrett, Greg and Rousseau, Justin F. and Peng, Yifan
Proceedings of the 21st Workshop on Biomedical Language Processing
359--368
Generating a summary from findings has recently been explored (Zhang et al., 2018, 2020) for note types, such as radiology reports, that are typically short. In this work, we focus on echocardiogram notes, which are longer and more complex than previously studied note types. We formally define the task of echocardiography conclusion generation (EchoGen) as generating a conclusion given the findings section, with emphasis on key cardiac findings. To promote the development of EchoGen methods, we present a new benchmark, which consists of two datasets collected from two hospitals. We further compare both standard and state-of-the-art methods on this new benchmark, with an emphasis on factual consistency. To accomplish this, we develop a tool to automatically extract concept-attribute tuples from the text. We then propose an evaluation metric, FactComp, to compare concept-attribute tuples between the human reference and generated conclusions. Both automatic and human evaluations show that there is still a significant gap between human-written and machine-generated conclusions on echo reports in terms of factuality and overall quality.
null
null
10.18653/v1/2022.bionlp-1.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,482
inproceedings
xie-etal-2022-quantifying
Quantifying Clinical Outcome Measures in Patients with Epilepsy Using the Electronic Health Record
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.36/
Xie, Kevin and Litt, Brian and Roth, Dan and Ellis, Colin A.
Proceedings of the 21st Workshop on Biomedical Language Processing
369--375
A wealth of important clinical information lies untouched in the Electronic Health Record, often in the form of unstructured textual documents. For patients with Epilepsy, such information includes outcome measures like Seizure Frequency and Dates of Last Seizure, key parameters that guide all therapy for these patients. Transformer models have been able to extract such outcome measures from unstructured clinical note text as sentences with human-like accuracy; however, these sentences are not yet usable in a quantitative analysis for large-scale studies. In this study, we developed a pipeline to quantify these outcome measures. We used text summarization models to convert unstructured sentences into specific formats, and then employed rules-based quantifiers to calculate seizure frequencies and dates of last seizure. We demonstrated that our pipeline of models does not excessively propagate errors and we analyzed its mistakes. We anticipate that our methods can be generalized outside of epilepsy to other disorders to drive large-scale clinical research.
null
null
10.18653/v1/2022.bionlp-1.36
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,483
inproceedings
sarrouti-etal-2022-comparing
Comparing Encoder-Only and Encoder-Decoder Transformers for Relation Extraction from Biomedical Texts: An Empirical Study on Ten Benchmark Datasets
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.37/
Sarrouti, Mourad and Tao, Carson and Mamy Randriamihaja, Yoann
Proceedings of the 21st Workshop on Biomedical Language Processing
376--382
Biomedical relation extraction, aiming to automatically discover high-quality and semantic relations between the entities from free text, is becoming a vital step for automated knowledge discovery. Pretrained language models have achieved impressive performance on various natural language processing tasks, including relation extraction. In this paper, we perform extensive empirical comparisons of encoder-only transformers with the encoder-decoder transformer, specifically T5, on ten public biomedical relation extraction datasets. We study the relation extraction task from four major biomedical tasks, namely chemical-protein relation extraction, disease-protein relation extraction, drug-drug interaction, and protein-protein interaction. We also explore the use of multi-task fine-tuning to investigate the correlation among major biomedical relation extraction tasks. We report performance (micro F-score) using T5, BioBERT and PubMedBERT, demonstrating that T5 and multi-task learning can improve the performance of the biomedical relation extraction task.
null
null
10.18653/v1/2022.bionlp-1.37
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,484
inproceedings
vakili-dalianis-2022-utility
Utility Preservation of Clinical Text After De-Identification
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.38/
Vakili, Thomas and Dalianis, Hercules
Proceedings of the 21st Workshop on Biomedical Language Processing
383--388
Electronic health records contain valuable information about symptoms, diagnosis, treatment and outcomes of the treatments of individual patients. However, the records may also contain information that can reveal the identity of the patients. Removing these identifiers - the Protected Health Information (PHI) - can protect the identity of the patient. Automatic de-identification is a process which employs machine learning techniques to detect and remove PHI. However, automatic techniques are imperfect in their precision and introduce noise into the data. This study examines the impact of this noise on the utility of Swedish de-identified clinical data by using human evaluators and by training and testing BERT models. Our results indicate that de-identification does not harm the utility for clinical NLP and that human evaluators are less sensitive to noise from de-identification than expected.
null
null
10.18653/v1/2022.bionlp-1.38
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,485
inproceedings
falis-etal-2022-horses
Horses to Zebras: Ontology-Guided Data Augmentation and Synthesis for {ICD}-9 Coding
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.39/
Falis, Mat{\'u}{\v{s}} and Dong, Hang and Birch, Alexandra and Alex, Beatrice
Proceedings of the 21st Workshop on Biomedical Language Processing
389--401
Medical document coding is the process of assigning labels from a structured label space (ontology {--} e.g., ICD-9) to medical documents. This process is laborious, costly, and error-prone. In recent years, efforts have been made to automate this process with neural models. The label spaces are large (in the order of thousands of labels) and follow a big-head long-tail label distribution, giving rise to few-shot and zero-shot scenarios. Previous efforts tried to address these scenarios within the model, leading to improvements on rare labels, but worse results on frequent ones. We propose data augmentation and synthesis techniques in order to address these scenarios. We further introduce an analysis technique for this setting inspired by confusion matrices. This analysis technique points to the positive impact of data augmentation and synthesis, but also highlights more general issues of confusion within families of codes, and underprediction.
null
null
10.18653/v1/2022.bionlp-1.39
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,486
inproceedings
chandak-etal-2022-towards
Towards Automatic Curation of Antibiotic Resistance Genes via Statement Extraction from Scientific Papers: A Benchmark Dataset and Models
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.40/
Chandak, Sidhant and Zhang, Liqing and Brown, Connor and Huang, Lifu
Proceedings of the 21st Workshop on Biomedical Language Processing
402--411
Antibiotic resistance has become a growing worldwide concern as new resistance mechanisms emerge and spread globally, so detecting and collecting the cause {--} Antibiotic Resistance Genes (ARGs) {--} has become more critical than ever. In this work, we aim to automate the curation of ARGs by extracting ARG-related assertive statements from scientific papers. To support research in this direction, we build SciARG, a new benchmark dataset containing 2,000 manually annotated statements as the evaluation set and 12,516 silver-standard training statements that are automatically created from scientific papers by a set of rules. To establish baseline performance on SciARG, we exploit three state-of-the-art neural architectures based on pre-trained language models and prompt tuning, and further ensemble them to attain the highest F-score of 77.0{\%}. To the best of our knowledge, we are the first to leverage natural language processing techniques to curate all validated ARGs from scientific papers. Both the code and data are publicly available at \url{https://github.com/VT-NLP/SciARG}.
null
null
10.18653/v1/2022.bionlp-1.40
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,487
inproceedings
wood-doughty-etal-2022-model
Model Distillation for Faithful Explanations of Medical Code Predictions
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.41/
Wood-Doughty, Zach and Cachola, Isabel and Dredze, Mark
Proceedings of the 21st Workshop on Biomedical Language Processing
412--425
Machine learning models that offer excellent predictive performance often lack the interpretability necessary to support integrated human-machine decision-making. In clinical medicine and other high-risk settings, domain experts may be unwilling to trust model predictions without explanations. Work in explainable AI must balance competing objectives along two different axes: 1) models should ideally be both accurate and simple; 2) explanations must balance faithfulness to the model's decision-making with their plausibility to a domain expert. We propose to use knowledge distillation, i.e., training a student model that mimics the behavior of a trained teacher model, as a technique to generate faithful and plausible explanations. We evaluate our approach on the task of assigning ICD codes to clinical notes to demonstrate that the student model is faithful to the teacher model's behavior and produces quality natural language explanations. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bionlp-1.41
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,488
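The model distillation record above trains a student to mimic a trained teacher. The snippet below is a generic knowledge-distillation objective in PyTorch, offered as a hedged illustration of the training signal rather than the paper's exact method; the temperature and weighting values are conventional defaults, not values from the paper.

```python
# Hedged sketch: a standard knowledge-distillation loss blending a soft KL term
# against the teacher's temperature-scaled logits with the usual hard-label CE.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 rescales the soft term to match the hard-label gradient scale
    hard = F.cross_entropy(student_logits, labels)  # ordinary supervised term
    return alpha * soft + (1.0 - alpha) * hard

student = torch.randn(4, 50, requires_grad=True)  # e.g., logits over 50 ICD codes
teacher = torch.randn(4, 50)                      # frozen teacher logits
labels = torch.tensor([3, 7, 0, 42])
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```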
inproceedings
liang-etal-2022-towards
Towards Generalizable Methods for Automating Risk Score Calculation
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.42/
Liang, Jennifer J and Lehman, Eric and Iyengar, Ananya and Mahajan, Diwakar and Raghavan, Preethi and Chang, Cindy Y. and Szolovits, Peter
Proceedings of the 21st Workshop on Biomedical Language Processing
426--431
Clinical risk scores enable clinicians to tabulate a set of patient data into simple scores to stratify patients into risk categories. Although risk scores are widely used to inform decision-making at the point-of-care, collecting the information necessary to calculate such scores requires considerable time and effort. Previous studies have focused on specific risk scores and involved manual curation of relevant terms or codes and heuristics for each data element of a risk score. To support more generalizable methods for risk score calculation, we annotate 100 patients in MIMIC-III with elements of CHA2DS2-VASc and PERC scores, and explore using question answering (QA) and off-the-shelf tools. We show that QA models can achieve comparable or better performance for certain risk score elements as compared to heuristic-based methods, and demonstrate the potential for more scalable risk score automation without the need for expert-curated heuristics. Our annotated dataset will be released to the community to encourage efforts in generalizable methods for automating risk scores. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bionlp-1.42
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,489
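The preceding record automates the collection of CHA2DS2-VASc and PERC elements. Once those elements are extracted, the tabulation itself is mechanical; below is a plain-Python sketch of the final CHA2DS2-VASc scoring step using the standard published weights (the extraction/QA stage is out of scope here).

```python
# Hedged sketch: tabulating CHA2DS2-VASc from already-extracted patient elements.
# Standard weights: CHF 1; hypertension 1; age >= 75 -> 2 (65-74 -> 1);
# diabetes 1; prior stroke/TIA/thromboembolism 2; vascular disease 1; female sex 1.
def cha2ds2_vasc(age, female, chf, hypertension, diabetes, stroke_tia, vascular):
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular else 0
    score += 1 if female else 0
    return score

# Example: a 70-year-old woman with hypertension and diabetes scores 1+1+1+1 = 4.
print(cha2ds2_vasc(age=70, female=True, chf=False, hypertension=True,
                   diabetes=True, stroke_tia=False, vascular=False))
```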
inproceedings
kusa-etal-2022-dossier
{D}o{SSIER} at {M}ed{V}id{QA} 2022: Text-based Approaches to Medical Video Answer Localization Problem
Demner-Fushman, Dina and Cohen, Kevin Bretonnel and Ananiadou, Sophia and Tsujii, Junichi
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.bionlp-1.43/
Kusa, Wojciech and Peikos, Georgios and Espitia, {\'O}scar and Hanbury, Allan and Pasi, Gabriella
Proceedings of the 21st Workshop on Biomedical Language Processing
432--440
This paper describes our contribution to the Answer Localization track of the MedVidQA 2022 Shared Task. We propose two answer localization approaches that use only textual information extracted from the video. In particular, our approaches exploit the text extracted from the video's transcripts along with the text displayed in the video's frames to create a set of features. Having created a set of features that represents a video's textual information, we employ four different models to measure the similarity between a video segment and a corresponding question. Then, we employ two different methods to obtain the start and end times of the identified answer. One of them is based on a random forest regressor, whereas the other uses an unsupervised peak detection model to detect the answer's start time. Our findings suggest that, for this task, leveraging only text-related features (transmitted either verbally or visually) and using a small amount of training data lead to significant improvements over the benchmark Video Span Localization model that is based on deep neural networks. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bionlp-1.43
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,490
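The DoSSIER record above regresses answer start and end times from text-only features with a random forest. A hedged scikit-learn sketch of that regression step follows; the feature construction (toy similarity scores per video) is invented here purely for illustration and is not the team's feature set.

```python
# Hedged sketch: regressing answer [start, end] timestamps from per-video text
# features, in the spirit of the random-forest approach described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_videos, n_features = 200, 8

X = rng.random((n_videos, n_features))      # stand-ins for similarity features
starts = 100 * X[:, 0] + rng.normal(0, 5, n_videos)
ends = starts + 30 * X[:, 1] + rng.normal(0, 5, n_videos)
Y = np.column_stack([starts, ends])         # targets: [start_sec, end_sec]

reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X[:150], Y[:150])                   # multi-output regression is supported

pred = reg.predict(X[150:])
print("predicted (start, end) for first held-out video:", np.round(pred[0], 1))
```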
inproceedings
herve-etal-2022-using
Using {ASR}-Generated Text for Spoken Language Modeling
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.2/
Herv{\'e}, Nicolas and Pelloin, Valentin and Favre, Benoit and Dary, Franck and Laurent, Antoine and Meignier, Sylvain and Besacier, Laurent
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
17--25
This paper aims to improve spoken language modeling (LM) using a very large amount of automatically transcribed speech. We leverage the INA (French National Audiovisual Institute) collection and obtain 19GB of text after applying ASR to 350,000 hours of diverse TV shows. From this, spoken language models are trained either by fine-tuning an existing LM (FlauBERT) or by training a LM from scratch. The new models (FlauBERT-Oral) will be shared with the community and are evaluated not only in terms of word prediction accuracy but also on two downstream tasks: classification of TV shows and syntactic parsing of speech. Experimental results show that FlauBERT-Oral is better than its initial FlauBERT version, demonstrating that, despite its inherently noisy nature, ASR-generated text can be useful for improving spoken language modeling.
null
null
10.18653/v1/2022.bigscience-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,493
inproceedings
talat-etal-2022-reap
You reap what you sow: On the Challenges of Bias Evaluation Under Multilingual Settings
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.3/
Talat, Zeerak and N{\'e}v{\'e}ol, Aur{\'e}lie and Biderman, Stella and Clinciu, Miruna and Dey, Manan and Longpre, Shayne and Luccioni, Sasha and Masoud, Maraim and Mitchell, Margaret and Radev, Dragomir and Sharma, Shanya and Subramonian, Arjun and Tae, Jaesung and Tan, Samson and Tunuguntla, Deepak and Van Der Wal, Oskar
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
26--41
Evaluating bias, fairness, and social impact in monolingual language models is a difficult task. This challenge is further compounded when language modeling occurs in a multilingual context. Considering the implication of evaluation biases for large multilingual language models, we situate the discussion of bias evaluation within a wider context of social scientific research with computational work. We highlight three dimensions of developing multilingual bias evaluation frameworks: (1) increasing transparency through documentation, (2) expanding targets of bias beyond gender, and (3) addressing cultural differences that exist between languages. We further discuss the power dynamics and consequences of training large language models and recommend that researchers remain cognizant of the ramifications of developing such technologies.
null
null
10.18653/v1/2022.bigscience-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,494
inproceedings
kobayashi-etal-2022-diverse
Diverse Lottery Tickets Boost Ensemble from a Single Pretrained Model
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.4/
Kobayashi, Sosuke and Kiyono, Shun and Suzuki, Jun and Inui, Kentaro
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
42--50
Ensembling is a popular method used to improve performance as a last resort. However, ensembling multiple models finetuned from a single pretrained model has not been very effective; this could be due to the lack of diversity among ensemble members. This paper proposes Multi-Ticket Ensemble, which finetunes different subnetworks of a single pretrained model and ensembles them. We empirically demonstrate that winning-ticket subnetworks produce more diverse predictions than dense networks, and that their ensemble outperforms the standard ensemble on some tasks when accurate lottery tickets are found for those tasks. A brief illustrative code sketch follows this record.
null
null
10.18653/v1/2022.bigscience-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,495
inproceedings
chan-etal-2022-unirex
{UNIREX}: A Unified Learning Framework for Language Model Rationale Extraction
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.5/
Chan, Aaron and Sanjabi, Maziar and Mathias, Lambert and Tan, Liang and Nie, Shaoliang and Peng, Xiaochang and Ren, Xiang and Firooz, Hamed
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
51--67
An extractive rationale explains a language model`s (LM`s) prediction on a given task instance by highlighting the text inputs that most influenced the prediction. Ideally, rationale extraction should be faithful (reflective of LM`s actual behavior) and plausible (convincing to humans), without compromising the LM`s (i.e., task model`s) task performance. Although attribution algorithms and select-predict pipelines are commonly used in rationale extraction, they both rely on certain heuristics that hinder them from satisfying all three desiderata. In light of this, we propose UNIREX, a flexible learning framework which generalizes rationale extractor optimization as follows: (1) specify architecture for a learned rationale extractor; (2) select explainability objectives (i.e., faithfulness and plausibility criteria); and (3) jointly train the task model and rationale extractor on the task using selected objectives. UNIREX enables replacing prior works' heuristic design choices with a generic learned rationale extractor in (1) and optimizing it for all three desiderata in (2)-(3). To facilitate comparison between methods w.r.t. multiple desiderata, we introduce the Normalized Relative Gain (NRG) metric. Across five English text classification datasets, our best UNIREX configuration outperforms the strongest baselines by an average of 32.9{\%} NRG. Plus, we find that UNIREX-trained rationale extractors' faithfulness can even generalize to unseen datasets and tasks.
null
null
10.18653/v1/2022.bigscience-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,496
inproceedings
nozza-etal-2022-pipelines
Pipelines for Social Bias Testing of Large Language Models
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.6/
Nozza, Debora and Bianchi, Federico and Hovy, Dirk
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
68--74
The maturity level of language models is now at a stage in which many companies rely on them to solve various tasks. However, while research has shown how biased and harmful these models are, systematic ways of integrating social bias tests into development pipelines are still lacking. This short paper suggests how to use these verification techniques in development pipelines. We take inspiration from software testing and suggest addressing social bias evaluation as software testing. We hope to open a discussion on the best methodologies to handle social bias testing in language models.
null
null
10.18653/v1/2022.bigscience-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,497
inproceedings
de-toni-etal-2022-entities
Entities, Dates, and Languages: Zero-Shot on Historical Texts with T0
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.7/
De Toni, Francesco and Akiki, Christopher and De La Rosa, Javier and Fourrier, Cl{\'e}mentine and Manjavacas, Enrique and Schweter, Stefan and Van Strien, Daniel
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
75--83
In this work, we explore whether the recently demonstrated zero-shot abilities of the T0 model extend to Named Entity Recognition for out-of-distribution languages and time periods. Using a historical newspaper corpus in 3 languages as a test bed, we use prompts to extract possible named entities. Our results show that a naive approach for prompt-based zero-shot multilingual Named Entity Recognition is error-prone, but they highlight the potential of such an approach for historical languages lacking labeled datasets. Moreover, we also find that T0-like models can be probed to predict the publication date and language of a document, which could be very relevant for the study of historical texts.
null
null
10.18653/v1/2022.bigscience-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,498
inproceedings
lakim-etal-2022-holistic
A Holistic Assessment of the Carbon Footprint of Noor, a Very Large {A}rabic Language Model
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.8/
Lakim, Imad and Almazrouei, Ebtesam and Abualhaol, Ibrahim and Debbah, Merouane and Launay, Julien
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
84--94
As ever larger language models grow more ubiquitous, it is crucial to consider their environmental impact. Characterised by extreme size and resource use, recent generations of models have been criticised for their voracious appetite for compute, and thus significant carbon footprint. Although reporting of carbon impact has grown more common in machine learning papers, this reporting is usually limited to compute resources used strictly for training. In this work, we propose a holistic assessment of the footprint of an extreme-scale language model, Noor. Noor is an ongoing project aiming to develop the largest multi-task Arabic language models{--}with up to 13B parameters{--}leveraging zero-shot generalisation to enable a wide range of downstream tasks via natural language instructions. We assess the total carbon bill of the entire project: starting with data collection and storage costs, including research and development budgets, pretraining costs, future serving estimates, and other exogenous costs necessary for this international cooperation. Notably, we find that inference costs and exogenous factors can have a significant impact on total budget. Finally, we discuss pathways to reduce the carbon footprint of extreme-scale models.
null
null
10.18653/v1/2022.bigscience-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,499
inproceedings
black-etal-2022-gpt
{GPT}-{N}eo{X}-20{B}: An Open-Source Autoregressive Language Model
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.9/
Black, Sidney and Biderman, Stella and Hallahan, Eric and Anthony, Quentin and Gao, Leo and Golding, Laurence and He, Horace and Leahy, Connor and McDonell, Kyle and Phang, Jason and Pieler, Michael and Prashanth, Usvsn Sai and Purohit, Shivanshu and Reynolds, Laria and Tow, Jonathan and Wang, Ben and Weinbach, Samuel
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
95--136
We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. In this work, we describe GPT-NeoX-20B`s architecture and training, and evaluate its performance. We open-source the training and evaluation code, as well as the model weights, at \url{https://github.com/EleutherAI/gpt-neox}.
null
null
10.18653/v1/2022.bigscience-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,500
inproceedings
fries-etal-2022-dataset
Dataset Debt in Biomedical Language Modeling
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.10/
Fries, Jason and Seelam, Natasha and Altay, Gabriel and Weber, Leon and Kang, Myungsun and Datta, Debajyoti and Su, Ruisi and Garda, Samuele and Wang, Bo and Ott, Simon and Samwald, Matthias and Kusa, Wojciech
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
137--145
Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few and zero shot learning in NLP. However, translating these successes to specialized domains such as biomedicine remains challenging, due in part to biomedical NLP`s significant dataset debt {--} the technical costs associated with data that are not consistently documented or easily incorporated into popular machine learning frameworks at scale. To assess this debt, we crowdsourced curation of datasheets for 167 biomedical datasets. We find that only 13{\%} of datasets are available via programmatic access and 30{\%} lack any documentation on licensing and permitted reuse. Our dataset catalog is available at: \url{https://tinyurl.com/bigbio22}.
null
null
10.18653/v1/2022.bigscience-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,501
inproceedings
teehan-etal-2022-emergent
Emergent Structures and Training Dynamics in Large Language Models
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.11/
Teehan, Ryan and Clinciu, Miruna and Serikov, Oleg and Szczechla, Eliza and Seelam, Natasha and Mirkin, Shachar and Gokaslan, Aaron
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
146--159
Large language models have achieved success on a number of downstream tasks, particularly in few-shot and zero-shot settings. As a consequence, researchers have been investigating both the kind of information these networks learn and how such information can be encoded in the parameters of the model. We survey the literature on changes in the network during training, drawing from work outside of NLP when necessary, and on learned representations of linguistic features in large language models. We note in particular the lack of sufficient research on the emergence of functional units {--} subsections of the network where related functions are grouped or organised {--} within large language models, and motivate future work that grounds the study of language models in an analysis of their changing internal structure during training time.
null
null
10.18653/v1/2022.bigscience-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,502
inproceedings
horawalavithana-etal-2022-foundation
Foundation Models of Scientific Knowledge for Chemistry: Opportunities, Challenges and Lessons Learned
Fan, Angela and Ilic, Suzana and Wolf, Thomas and Gall{\'e}, Matthias
may
2022
virtual+Dublin
Association for Computational Linguistics
https://aclanthology.org/2022.bigscience-1.12/
Horawalavithana, Sameera and Ayton, Ellyn and Sharma, Shivam and Howland, Scott and Subramanian, Megha and Vasquez, Scott and Cosbey, Robin and Glenski, Maria and Volkova, Svitlana
Proceedings of BigScience Episode {\#}5 -- Workshop on Challenges {\&} Perspectives in Creating Large Language Models
160--172
Foundation models pre-trained on large corpora demonstrate significant gains across many natural language processing tasks and domains, e.g., law, healthcare, education, etc. However, only limited efforts have investigated the opportunities and limitations of applying these powerful models to science and security applications. In this work, we develop foundation models of scientific knowledge for chemistry to augment scientists with the advanced ability to perceive and reason at a scale previously unimagined. Specifically, we build large-scale (1.47B parameter) general-purpose models for chemistry that can be effectively used to perform a wide range of in-domain and out-of-domain tasks. Evaluating these models in a zero-shot setting, we analyze the effect of model and data scaling, knowledge depth, and temporality on model performance in the context of model training efficiency. Our novel findings demonstrate that (1) model size significantly contributes to the task performance when evaluated in a zero-shot setting; (2) data quality (aka diversity) affects model performance more than data quantity; (3) unlike previous work, the temporal order of the documents in the corpus boosts model performance only for specific tasks, e.g., SciQ; and (4) models pre-trained from scratch perform better on in-domain tasks than those tuned from general-purpose models like OpenAI`s GPT-2.
null
null
10.18653/v1/2022.bigscience-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,503
inproceedings
kwako-etal-2022-using
Using Item Response Theory to Measure Gender and Racial Bias of a {BERT}-based Automated {E}nglish Speech Assessment System
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.1/
Kwako, Alexander and Wan, Yixin and Zhao, Jieyu and Chang, Kai-Wei and Cai, Li and Hansen, Mark
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
1--7
Recent advances in natural language processing and transformer-based models have made it easier to implement accurate, automated English speech assessments. Yet, without careful examination, applications of these models may exacerbate social prejudices based on gender and race. This study addresses the need to examine potential biases of transformer-based models in the context of automated English speech assessment. For this purpose, we developed a BERT-based automated speech assessment system and investigated gender and racial bias of examinees' automated scores. Gender and racial bias was measured by examining differential item functioning (DIF) using an item response theory framework. Preliminary results, which focused on a single verbal-response item, showed no statistically significant DIF based on gender or race for automated scores.
null
null
10.18653/v1/2022.bea-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,505
inproceedings
takano-ichikawa-2022-automatic
Automatic scoring of short answers using justification cues estimated by {BERT}
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.2/
Takano, Shunya and Ichikawa, Osamu
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
8--13
Automated scoring technology for short-answer questions has been attracting attention as a way to improve the fairness of scoring and reduce the burden on the scorer. In general, a large amount of data is required to train an automated scoring model. The training data consists of the answer texts and the scoring data assigned to them. It may also include annotations indicating key word sequences. These data must be prepared manually, which is costly. Many previous studies have created models with large amounts of training data specific to each question. This paper aims to achieve equivalent performance with less training data by utilizing a BERT model that has been pre-trained on a large amount of general text data not necessarily related to short-answer questions. On the RIKEN dataset, the proposed method reduces the required training data from the 800 examples needed previously to about 400, while still achieving scoring accuracy comparable to that of humans.
null
null
10.18653/v1/2022.bea-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,506
inproceedings
jalota-etal-2022-mitigating
Mitigating Learnerese Effects for {CEFR} Classification
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.3/
Jalota, Rricha and Bourgonje, Peter and Van Sas, Jan and Huang, Huiyan
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
14--21
The role of an author`s L1 in SLA can be challenging for automated CEFR classification, in that texts from different L1 groups may be too heterogeneous to combine as training data. We experiment with recent debiasing approaches by attempting to strip textual representations of L1 features. This results in a more homogeneous group when aggregating CEFR-annotated texts from different L1 groups, leading to better classification performance. Using iterative null-space projection, we marginally improve classification performance for a linear classifier by 1 point. An MLP (i.e., non-linear) classifier remains unaffected by this procedure. We discuss possible directions of future work to attempt to increase this performance gain.
null
null
10.18653/v1/2022.bea-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,507
inproceedings
chen-etal-2022-automatically
Automatically Detecting Reduced-formed {E}nglish Pronunciations by Using Deep Learning
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.4/
Chen, Lei and Jiang, Chenglin and Gu, Yiwei and Liu, Yang and Yuan, Jiahong
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
22--26
Reduced form pronunciations are widely used by native English speakers, especially in casual conversations. Second language (L2) learners have difficulty processing reduced form pronunciations in listening comprehension and face challenges in production too. Meanwhile, training applications dedicated to reduced forms are still few. To address this issue, we report on our first effort of using deep learning to evaluate L2 learners' reduced form pronunciations. Compared with a baseline solution that uses an ASR to determine regular or reduced-form pronunciations, a classifier that learns representative features via a convolutional neural network (CNN) on low-level acoustic features yields higher detection performance. The F-1 metric increases from 0.690 to 0.757 on the reduction task. Furthermore, adding word entities to compute attention weights to better adjust the features learned by the CNN model further increases F-1 to 0.763.
null
null
10.18653/v1/2022.bea-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,508
inproceedings
imperial-etal-2022-baseline
A Baseline Readability Model for {C}ebuano
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.5/
Imperial, Joseph Marvin and Reyes, Lloyd Lois Antonie and Ibanez, Michael Antonio and Sapinit, Ranz and Hussien, Mohammed
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
27--32
In this study, we developed the first baseline readability model for the Cebuano language. Cebuano is the second most-used native language in the Philippines with about 27.5 million speakers. As the baseline, we extracted traditional or surface-based features, syllable patterns based on Cebuano`s documented orthography, and neural embeddings from the multilingual BERT model. Results show that using the first two sets of handcrafted linguistic features to train an optimized Random Forest model obtained the best performance, with approximately 87{\%} across all metrics. The feature sets and algorithm used are also similar to those in previous work on readability assessment for the Filipino language{---}showing the potential of crosslingual application. To encourage more work on readability assessment in Philippine languages such as Cebuano, we open-sourced both code and data.
null
null
10.18653/v1/2022.bea-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,509
inproceedings
casademont-moner-volodina-2022-generation
Generation of Synthetic Error Data of Verb Order Errors for {S}wedish
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.6/
Casademont Moner, Judit and Volodina, Elena
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
33--38
We report on our work-in-progress to generate a synthetic error dataset for Swedish by replicating errors observed in the authentic error annotated dataset. We analyze a small subset of authentic errors, capture regular patterns based on parts of speech, and design a set of rules to corrupt new data. We explore the approach and identify its capabilities, advantages and limitations as a way to enrich the existing collection of error-annotated data. This work focuses on word order errors, specifically those involving the placement of finite verbs in a sentence.
null
null
10.18653/v1/2022.bea-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,510
inproceedings
kyle-etal-2022-dependency
A Dependency Treebank of Spoken Second Language {E}nglish
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.7/
Kyle, Kristopher and Eguchi, Masaki and Miller, Aaron and Sither, Theodore
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
39--45
In this paper, we introduce a dependency treebank of spoken second language (L2) English that is annotated with part of speech (Penn POS) tags and syntactic dependencies (Universal Dependencies). We then evaluate the degree to which the use of this treebank as training data affects POS and UD annotation accuracy for L1 web texts, L2 written texts, and L2 spoken texts as compared to models trained on L1 texts only.
null
null
10.18653/v1/2022.bea-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,511
inproceedings
jia-etal-2022-starting
Starting from {\textquotedblleft}Zero{\textquotedblright}: An Incremental Zero-shot Learning Approach for Assessing Peer Feedback Comments
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.8/
Jia, Qinjin and Cao, Yupeng and Gehringer, Edward
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
46--50
Peer assessment is an effective and efficient pedagogical strategy for delivering feedback to learners. Asking students to provide quality feedback, which contains suggestions and mentions problems, can promote metacognition by reviewers and better assist reviewees in revising their work. Thus, various supervised machine learning algorithms have been proposed to detect quality feedback. However, all these powerful algorithms have the same Achilles' heel: the reliance on sufficient historical data. In other words, collecting adequate peer feedback for training a supervised algorithm can take several semesters before the model can be deployed to a new class. In this paper, we present a new paradigm, called incremental zero-shot learning (IZSL), to tackle the problem of lacking sufficient historical data. Our results show that the method can achieve acceptable {\textquotedblleft}cold-start{\textquotedblright} performance without needing any domain data, and it outperforms BERT when trained on the same data collected incrementally.
null
null
10.18653/v1/2022.bea-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,512
inproceedings
lu-etal-2022-assessing
On Assessing and Developing Spoken `Grammatical Error Correction' Systems
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.9/
Lu, Yiting and Bann{\`o}, Stefano and Gales, Mark
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
51--60
Spoken {\textquoteleft}grammatical error correction' (SGEC) is an important process to provide feedback for second language learning. Due to a lack of end-to-end training data, SGEC is often implemented as a cascaded, modular system, consisting of speech recognition, disfluency removal, and grammatical error correction (GEC). This cascaded structure enables efficient use of training data for each module. It is, however, difficult to compare and evaluate the performance of individual modules as preceding modules may introduce errors. For example, the GEC module input depends on the output of non-native speech recognition and disfluency detection, both challenging tasks for learner data. This paper focuses on the assessment and development of SGEC systems. We first discuss metrics for evaluating SGEC, both individual modules and the overall system. The system-level metrics enable tuning for optimal system performance. A known issue in cascaded systems is error propagation between modules. To mitigate this problem, semi-supervised approaches and self-distillation are investigated. Lastly, when an SGEC system is deployed, it is important to give accurate feedback to users. Thus, we apply filtering to remove edits with low confidence, aiming to improve overall feedback precision. The performance metrics are examined on a Linguaskill multi-level data set, which includes the original non-native speech, manual transcriptions and reference grammatical error corrections, to enable system analysis and development.
null
null
10.18653/v1/2022.bea-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,513
inproceedings
zou-etal-2022-automatic
Automatic True/False Question Generation for Educational Purpose
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.10/
Zou, Bowei and Li, Pengfei and Pan, Liangming and Aw, Ai Ti
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
61--70
In the field of teaching, true/false questioning is an important educational method for assessing students' general understanding of learning materials. Manually creating such questions requires extensive human effort and expert knowledge. Question Generation (QG) techniques offer the possibility to automatically generate a large number of questions. However, there is limited work on automatic true/false question generation due to the lack of training data and the difficulty of finding question-worthy content. In this paper, we propose an unsupervised True/False Question Generation approach (TF-QG) that automatically generates true/false questions from a given passage for reading comprehension tests. TF-QG consists of a template-based framework that aims to test the specific knowledge in the passage by leveraging various NLP techniques, and a generative framework to generate more flexible and complicated questions by using a novel masking-and-infilling strategy. Human evaluation shows that our approach can generate high-quality and valuable true/false questions. In addition, simulated testing on the generated questions challenges the state-of-the-art inference models from NLI, QA, and fact verification tasks.
null
null
10.18653/v1/2022.bea-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,514
inproceedings
suresh-etal-2022-fine
Fine-tuning Transformers with Additional Context to Classify Discursive Moves in Mathematics Classrooms
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.11/
Suresh, Abhijit and Jacobs, Jennifer and Perkoff, Margaret and Martin, James H. and Sumner, Tamara
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
71--81
{\textquotedblleft}Talk moves{\textquotedblright} are specific discursive strategies used by teachers and students to facilitate conversations in which students share their thinking, actively consider the ideas of others, and engage in rich discussions. Experts in instructional practices often rely on cues to identify and document these strategies, for example by annotating classroom transcripts. Prior efforts to develop automated systems to classify teacher talk moves using transformers achieved a performance of 76.32{\%} F1. In this paper, we investigate the feasibility of using enriched contextual cues to improve model performance. We applied state-of-the-art deep learning approaches for Natural Language Processing (NLP), including Robustly optimized bidirectional encoder representations from transformers (RoBERTa) with a special input representation that supports previous and subsequent utterances as context for talk move classification. We worked with the publicly available TalkMoves dataset, which contains utterances sourced from real-world classroom sessions (human-transcribed and annotated). Through a series of experiments, we found that a combination of previous and subsequent utterances improved the transformers' ability to differentiate talk moves (by 2.6{\%} F1). These results constitute a new state of the art over previously published results and provide actionable insights to those in the broader NLP community who are working to develop similar transformer-based classification models.
null
null
10.18653/v1/2022.bea-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,515
inproceedings
banno-matassoni-2022-cross
Cross-corpora experiments of automatic proficiency assessment and error detection for spoken {E}nglish
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.12/
Bann{\`o}, Stefano and Matassoni, Marco
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
82--91
The growing demand for learning English as a second language has led to an increasing interest in automatic approaches for assessing spoken language proficiency. One of the most significant challenges in this field is the lack of publicly available annotated spoken data. Another common issue is the lack of consistency and coherence in human assessment. To tackle both problems, in this paper we address the task of automatically predicting the scores of spoken test responses of English-as-a-second-language learners by training neural models on written data and using the presence of grammatical errors as a feature, as they can be considered consistent indicators of proficiency through their distribution and frequency. Specifically, we train a feature extractor on EFCAMDAT, a large written corpus containing error annotations and proficiency levels assigned by human experts, in order to extract information related to grammatical errors and, in turn, we use the resulting model for inference on the CLC-FCE corpus, on the ICNALE corpus, and on the spoken section of the TLT-school corpus, a collection of proficiency tests taken by Italian students. The work investigates the impact of the feature extractor on spoken proficiency assessment as well as the written-to-spoken approach. We find that our error-based approach can be beneficial for assessing spoken proficiency. The results obtained on the considered datasets are discussed and evaluated with appropriate metrics.
null
null
10.18653/v1/2022.bea-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,516
inproceedings
dutta-etal-2022-activity
Activity focused Speech Recognition of Preschool Children in Early Childhood Classrooms
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.13/
Dutta, Satwik and Irvin, Dwight and Buzhardt, Jay and Hansen, John H.L.
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
92--100
A supportive environment is vital for overall cognitive development in children. Challenges with direct observation and limited access to data-driven approaches often hinder teachers or practitioners in early childhood research from modifying or enhancing classroom structures. Deploying sensor-based tools in naturalistic preschool classrooms can thereby help teachers/practitioners make informed decisions and better support student learning needs. In this study, two elements of eco-behavioral assessment are fused together: conversational speech and real-time location. While various challenges remain in developing Automatic Speech Recognition systems for spontaneous preschool children`s speech, we develop a hybrid ASR engine reporting an effective Word Error Rate of 40{\%}. The ASR engine further supports recognition of spoken words, WH-words, and verbs in various activity learning zones in a naturalistic preschool classroom scenario. Activity areas represent various locations within the physical ecology of an early childhood setting, each of which is suited for knowledge and skill enhancement in young children. Capturing children`s communication engagement in such areas could help teachers/practitioners fine-tune their daily activities, without the need for direct observation. This investigation provides evidence of the use of speech technology in educational settings to better support such early childhood intervention.
null
null
10.18653/v1/2022.bea-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,517
inproceedings
loginova-benoit-2022-structural
Structural information in mathematical formulas for exercise difficulty prediction: a comparison of {NLP} representations
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.14/
Loginova, Ekaterina and Benoit, Dries
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
101--106
To tailor a learning system to the student`s level and needs, we must consider the characteristics of the learning content, such as its difficulty. While natural language processing allows us to represent text efficiently, the meaningful representation of mathematical formulas in an educational context is still understudied. This paper adopts structural embeddings as a possible way to bridge this gap. Our experiments validate the approach using publicly available datasets to show that incorporating syntactic information can improve performance in predicting the exercise difficulty.
null
null
10.18653/v1/2022.bea-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,518
inproceedings
rietsche-etal-2022-specificity
The Specificity and Helpfulness of Peer-to-Peer Feedback in Higher Education
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.15/
Rietsche, Roman and Caines, Andrew and Schramm, Cornelius and Pf{\"u}tze, Dominik and Buttery, Paula
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
107--117
With the growth of online learning through MOOCs and other educational applications, it has become increasingly difficult for course providers to offer personalized feedback to students. Therefore, asking students to provide feedback to each other has become one way to support learning. This peer-to-peer feedback has become increasingly important, whether in MOOCs to provide feedback to thousands of students or in large-scale classes at universities. One of the challenges when allowing peer-to-peer feedback is that the feedback should be perceived as helpful, and an important factor determining helpfulness is how specific the feedback is. However, in classes including thousands of students, instructors do not have the resources to check the specificity of every piece of feedback between students. Therefore, we present an automatic classification model to measure sentence specificity in written feedback. The model was trained and tested on student feedback texts written in German where sentences have been labelled as general or specific. We find that we can automatically classify the sentences with an accuracy of 76.7{\%} using a conventional feature-based approach, whereas transfer learning with BERT for German gives a classification accuracy of 81.1{\%}. However, the feature-based approach comes with lower computational costs and preserves human interpretability of the coefficients. In addition, we show that specificity of sentences in feedback texts has a weak positive correlation with perceptions of helpfulness. This indicates that specificity is one of the ingredients of good feedback, and invites further investigation.
null
null
10.18653/v1/2022.bea-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,519
inproceedings
bexte-etal-2022-similarity
Similarity-Based Content Scoring - How to Make {S}-{BERT} Keep Up With {BERT}
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.16/
Bexte, Marie and Horbach, Andrea and Zesch, Torsten
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
118--123
The dominating paradigm for content scoring is to learn an instance-based model, i.e. to use lexical features derived from the learner answers themselves. An alternative approach that receives much less attention, however, is to learn a similarity-based model. We introduce an architecture that efficiently learns a similarity model and find that results on the standard ASAP dataset are on par with a BERT-based classification approach.
null
null
10.18653/v1/2022.bea-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,520
inproceedings
ding-etal-2022-dont
Don`t Drop the Topic - The Role of the Prompt in Argument Identification in Student Writing
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.17/
Ding, Yuning and Bexte, Marie and Horbach, Andrea
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
124--133
In this paper, we explore the role of topic information in student essays from an argument mining perspective. We cluster a recently released corpus through topic modeling into prompts and train argument identification models on different data settings. Results show that, given the same amount of training data, prompt-specific training performs better than cross-prompt training. However, the advantage can be overcome by introducing large amounts of cross-prompt training data.
null
null
10.18653/v1/2022.bea-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,521
inproceedings
wambsganss-etal-2022-alen
{ALEN} App: Argumentative Writing Support To Foster {E}nglish Language Learning
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.18/
Wambsganss, Thiemo and Caines, Andrew and Buttery, Paula
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
134--140
This paper introduces a novel tool to support and engage English language learners with feedback on the quality of their argument structures. We present an approach which automatically detects claim-premise structures and provides visual feedback to the learner to prompt them to repair any broken argumentation structures. To investigate whether our persuasive feedback on language learners' essay writing tasks engages and supports them in learning English, we designed the ALEN app (Argumentation for Learning English). We leverage an argumentation mining model trained on texts written by students and embed it in a writing support tool which provides students with feedback in their essay writing process. We evaluated our tool in two field studies with a total of 28 students from a German high school to investigate the effects of adaptive argumentation feedback on their learning of English. The quantitative results suggest that using the ALEN app leads to high self-efficacy, ease of use, intention to use, and perceived usefulness for students in their English language learning process. Moreover, the qualitative answers indicate the potential benefits of combining grammar feedback with discourse-level argumentation mining.
null
null
10.18653/v1/2022.bea-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,522
inproceedings
weiss-meurers-2022-assessing
Assessing sentence readability for {G}erman language learners with broad linguistic modeling or readability formulas: When do linguistic insights make a difference?
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.19/
Weiss, Zarah and Meurers, Detmar
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
141--153
We present a new state-of-the-art sentence-wise readability assessment model for German L2 readers. We build a linguistically broadly informed machine learning model and compare its performance against four commonly used readability formulas. To understand when the linguistic insights used to inform our model make a difference for readability assessment and when simple readability formulas suffice, we compare their performance based on two common automatic readability assessment tasks: predictive regression and sentence pair ranking. We find that leveraging linguistic insights yields top performances across tasks, but that for the identification of simplified sentences, readability formulas {--} which are easier to compute and more accessible {--} can also be sufficiently precise. Linguistically informed modeling, however, is the only viable option for high-quality outcomes in fine-grained prediction tasks. We then explore the sentence-wise readability profile of leveled texts written for language learners at a beginning, intermediate, and advanced level of German to showcase the valuable insights that sentence-wise readability assessment can offer for the adaptation of learning materials, and to better understand how sentences' individual readability contributes to larger texts' overall readability.
null
null
10.18653/v1/2022.bea-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,523
inproceedings
heck-meurers-2022-parametrizable
Parametrizable exercise generation from authentic texts: Effectively targeting the language means on the curriculum
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.20/
Heck, Tanja and Meurers, Detmar
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
154--166
We present a parametrizable approach to exercise generation from authentic texts that addresses the need for digital materials designed to practice the language means on the curriculum in a real-life school setting. The tool builds on a language-aware search engine that helps identify attractive texts rich in the language means to be practiced. Making use of state-of-the-art NLP, the relevant learning targets are identified and transformed into exercise items embedded in the original context. The language-aware search engine ensures that these contexts match the learner`s interests based on the search term used, and the linguistic parametrization of the system then reranks the results to prioritize texts that richly represent the learning targets. For the exercise generation to proceed on this basis, an interactive configuration panel allows users to adjust exercise complexity through a range of parameters specifying properties both of the source sentences and of the exercises. An evaluation of exercises generated from web documents for a representative sample of language means selected from the English curriculum of 7th grade in German secondary school showed that the combination of language-aware search and exercise generation successfully facilitates the process of generating exercises from authentic texts that support practice of the pedagogical targets.
null
null
10.18653/v1/2022.bea-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,524
inproceedings
keim-littman-2022-selecting
Selecting Context Clozes for Lightweight Reading Compliance
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.21/
Keim, Greg and Littman, Michael
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
167--172
We explore a novel approach to reading compliance, leveraging large language models to select inline challenges that discourage skipping during reading. This lightweight {\textquoteleft}testing' is accomplished through automatically identified context clozes where the reader must supply a missing word that would be hard to guess if earlier material was skipped. Clozes are selected by scoring each word by the contrast between its likelihood with and without prior sentences as context, preferring to leave gaps where this contrast is high. We report results of an initial human-participant test that indicates this method can find clozes that have this property.
null
null
10.18653/v1/2022.bea-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,525
inproceedings
laarmann-quante-etal-2022-meet
{\textquoteleft}Meet me at the ribary' {--} Acceptability of spelling variants in free-text answers to listening comprehension prompts
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.22/
Laarmann-Quante, Ronja and Schwarz, Leska and Horbach, Andrea and Zesch, Torsten
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
173--182
When listening comprehension is tested as a free-text production task, a challenge for scoring the answers is the resulting wide range of spelling variants. When judging whether a variant is acceptable or not, human raters make a complex holistic decision. In this paper, we present a corpus study in which we analyze human acceptability decisions in a high-stakes test for German. We show that for human experts, spelling variants are harder to score consistently than other answer variants. Furthermore, we examine how the decision can be operationalized using features that could be applied by an automatic scoring system. We show that simple measures like edit distance and phonetic similarity between a given answer and the target answer can model the human acceptability decisions with the same inter-annotator agreement as humans, and discuss implications of the remaining inconsistencies.
null
null
10.18653/v1/2022.bea-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,526
inproceedings
ahumada-etal-2022-educational
Educational Tools for Mapuzugun
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.23/
Ahumada, Cristian and Gutierrez, Claudio and Anastasopoulos, Antonios
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
183--196
Mapuzugun is the language of the Mapuche people. Due to political and historical reasons, its number of speakers has decreased and the language has been excluded from the educational system in Chile and Argentina. For this reason, it is very important to support the revitalization of the Mapuzugun in all spaces and media of society. In this work we present a tool towards supporting educational activities of Mapuzugun, tailored to the characteristics of the language. The tool consists of three parts: design and development of an orthography detector and converter; a morphological analyzer; and an informal translator. We also present a case study with Mapuzugun students showing promising results. Short abstract in Mapuzugun: T{\"u}fachi k{\"u}zaw pegelfi ki{\~n}e zugun k{\"u}zawpey{\"u}m kelluaetew pu mapuzugun chillkatufe kimal kizu ta{\~n}i zugun.
null
null
10.18653/v1/2022.bea-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,527
inproceedings
north-etal-2022-evaluation
An Evaluation of Binary Comparative Lexical Complexity Models
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.24/
North, Kai and Zampieri, Marcos and Shardlow, Matthew
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
197--203
Identifying complex words in texts is an important first step in text simplification (TS) systems. In this paper, we investigate the performance of binary comparative Lexical Complexity Prediction (LCP) models applied to a popular benchmark dataset {---} the CompLex 2.0 dataset used in SemEval-2021 Task 1. With the data from CompLex 2.0, we create a new dataset containing 1,940 sentences, referred to as CompLex-BC. Using CompLex-BC, we train multiple models to differentiate which of two target words is more or less complex in the same sentence. A linear SVM model achieved the best performance in our experiments with an F1-score of 0.86.
null
null
10.18653/v1/2022.bea-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,528
inproceedings
fiacco-etal-2022-toward
Toward Automatic Discourse Parsing of Student Writing Motivated by Neural Interpretation
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.25/
Fiacco, James and Jiang, Shiyan and Adamson, David and Ros{\'e}, Carolyn
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
204--215
Providing effective automatic essay feedback is necessary for offering writing instruction at a massive scale. In particular, feedback for promoting coherent flow of ideas in essays is critical. In this paper we propose a state-of-the-art method for automated analysis of structure and flow of writing, referred to as Rhetorical Structure Theory (RST) parsing. In so doing, we lay a foundation for a generalizable approach to automated writing feedback related to structure and flow. We address challenges in automated rhetorical analysis when applied to student writing and evaluate our novel RST parser model on both a recent student writing dataset and a standard benchmark RST parsing dataset.
null
null
10.18653/v1/2022.bea-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,529
inproceedings
rathod-etal-2022-educational
Educational Multi-Question Generation for Reading Comprehension
Kochmar, Ekaterina and Burstein, Jill and Horbach, Andrea and Laarmann-Quante, Ronja and Madnani, Nitin and Tack, Ana{\"i}s and Yaneva, Victoria and Yuan, Zheng and Zesch, Torsten
jul
2022
Seattle, Washington
Association for Computational Linguistics
https://aclanthology.org/2022.bea-1.26/
Rathod, Manav and Tu, Tony and Stasaski, Katherine
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
216--223
Automated question generation has made great advances with the help of large NLP generation models. However, typically only one question is generated for each intended answer. We propose a new task, Multi-Question Generation, aimed at generating multiple semantically similar but lexically diverse questions assessing the same concept. We develop an evaluation framework based on desirable qualities of the resulting questions. Results comparing multiple question generation approaches in the two-question generation condition show a trade-off between question answerability and lexical diversity between the two questions. We also report preliminary results from sampling multiple questions from our model, to explore generating more than two questions. Our task can be used to further explore the educational impact of showing multiple distinct question wordings to students.
null
null
10.18653/v1/2022.bea-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
29,530