Dataset schema. Each field is listed with its inferred type and either its string-length range or its number of distinct values:

| Field | Type | Details |
|---|---|---|
| entry_type | stringclasses | 4 values |
| citation_key | stringlengths | 10–110 characters |
| title | stringlengths | 6–276 characters |
| editor | stringclasses | 723 values |
| month | stringclasses | 69 values |
| year | stringdate | 1963-01-01 to 2022-01-01 |
| address | stringclasses | 202 values |
| publisher | stringclasses | 41 values |
| url | stringlengths | 34–62 characters |
| author | stringlengths | 6–2.07k characters |
| booktitle | stringclasses | 861 values |
| pages | stringlengths | 1–12 characters |
| abstract | stringlengths | 302–2.4k characters |
| journal | stringclasses | 5 values |
| volume | stringclasses | 24 values |
| doi | stringlengths | 20–39 characters |
| n | stringclasses | 3 values |
| wer | stringclasses | 1 value |
| uas | null | — |
| language | stringclasses | 3 values |
| isbn | stringclasses | 34 values |
| recall | null | — |
| number | stringclasses | 8 values |
| a | null | — |
| b | null | — |
| c | null | — |
| k | null | — |
| f1 | stringclasses | 4 values |
| r | stringclasses | 2 values |
| mci | stringclasses | 1 value |
| p | stringclasses | 2 values |
| sd | stringclasses | 1 value |
| female | stringclasses | 0 values |
| m | stringclasses | 0 values |
| food | stringclasses | 1 value |
| f | stringclasses | 1 value |
| note | stringclasses | 20 values |
| __index_level_0__ | int64 | 22k–106k |
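
Read as a whole, the schema is a flattened bibliography: the dense BibTeX-style columns (entry_type through abstract, plus doi, journal, volume, isbn, number, note) carry the citations, while the metric-like columns (n, wer, uas, recall, f1, r, mci, p, sd, female, m, food, f) are null or near-empty for most rows. Below is a minimal sketch of loading and inspecting a dataset shaped like this with the Hugging Face `datasets` library; the repository path `org/acl-bib-dump` is a hypothetical placeholder, not the dataset's real name.

```python
# Minimal sketch, assuming the table above describes a Hugging Face dataset.
# "org/acl-bib-dump" is a hypothetical placeholder path.
from datasets import load_dataset

ds = load_dataset("org/acl-bib-dump", split="train")

print(ds.features)  # field names and types, as in the schema table above

# Most metric-like columns (uas, recall, f1, ...) are null for any given row;
# keep only the populated bibliographic fields.
row = ds[0]
populated = {k: v for k, v in row.items() if v is not None}
print(populated["citation_key"], populated["title"])
```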
@inproceedings{laskar-etal-2022-improving,
    title = "Improving Named Entity Recognition in Telephone Conversations via Effective Active Learning with Human in the Loop",
    author = "Laskar, Md Tahmid Rahman and Chen, Cheng and Fu, Xue-yong and Bhushan Tn, Shashi",
    editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank",
    booktitle = "Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dash-1.12/",
    pages = "88--93",
    abstract = "Telephone transcription data can be very noisy due to speech recognition errors, disfluencies, etc. Not only is annotating such data very challenging for annotators, but the data may also contain many annotation errors even after the annotation job is completed, resulting in very poor model performance. In this paper, we present an active learning framework that leverages human-in-the-loop learning to identify the data samples from the annotated dataset that are more likely to contain annotation errors, so that they can be re-annotated. In this way, we largely reduce the need to re-annotate the whole dataset. We conduct extensive experiments with our proposed approach for Named Entity Recognition and observe that by re-annotating only about 6{\%} of the training instances out of the whole dataset, the F1 score for a certain entity type can be significantly improved, by about 25{\%}.",
}
% __index_level_0__ = 28,323
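
Each row of the dataset maps one-to-one onto a BibTeX entry like the one above, with the null columns simply marking fields that record does not use. A hypothetical helper (not part of the dataset) makes the mapping explicit:

```python
# Hypothetical helper: render one flattened row as a BibTeX entry in the
# style shown in this section. Field names follow the schema table; null
# fields and the pandas index column are skipped. For simplicity every field
# is quoted, including `month`, which ACL Anthology .bib files normally leave
# as a bare three-letter macro (dec, jul, ...).
def row_to_bibtex(row: dict) -> str:
    meta = {"entry_type", "citation_key", "__index_level_0__"}
    body = ",\n".join(
        f'    {name} = "{value}"'
        for name, value in row.items()
        if name not in meta and value is not None
    )
    return f'@{row["entry_type"]}{{{row["citation_key"]},\n{body}\n}}'
```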
@inproceedings{pacheco-etal-2022-interactively,
    title = "Interactively Uncovering Latent Arguments in Social Media Platforms: A Case Study on the Covid-19 Vaccine Debate",
    author = "Pacheco, Maria Leonor and Islam, Tunazzina and Ungar, Lyle and Yin, Ming and Goldwasser, Dan",
    editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank",
    booktitle = "Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dash-1.13/",
    pages = "94--111",
    abstract = "Automated methods for analyzing public opinion have grown in popularity with the proliferation of social media. While supervised methods can be very good at classifying text, the dynamic nature of social media discourse results in a moving target for supervised learning. Meanwhile, traditional unsupervised techniques for extracting themes from textual repositories, such as topic models, can result in incorrect outputs that are unusable to domain experts. For this reason, a non-trivial amount of research on social media discourse still relies on manual coding techniques. In this paper, we present an interactive, humans-in-the-loop framework that strikes a balance between unsupervised techniques and manual coding for extracting latent arguments from social media discussions. We use the COVID-19 vaccination debate as a case study, and show that our methodology can be used to obtain a more accurate, interpretable set of arguments when compared to traditional topic models. We do this at a relatively low manual cost, as 3 experts take approximately 2 hours to code close to 100k tweets.",
}
% __index_level_0__ = 28,324
@inproceedings{wan-etal-2022-user,
    title = "User or Labor: An Interaction Framework for Human-Machine Relationships in {NLP}",
    author = "Wan, Ruyuan and Etori, Naome and Badillo-urquiola, Karla and Kang, Dongyeop",
    editor = "Dragut, Eduard and Li, Yunyao and Popa, Lucian and Vucetic, Slobodan and Srivastava, Shashank",
    booktitle = "Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dash-1.14/",
    pages = "112--121",
    abstract = "Bridging research between Human-Computer Interaction and Natural Language Processing has been developing quickly in recent years. However, there is still a lack of formative guidelines for understanding human-machine interaction in the NLP loop. When researchers crossing the two fields talk about humans, they may mean either a user or a laborer. Regarding a human as a user, the human is in control, and the machine is used as a tool to achieve the human's goals. Regarding a human as a laborer, the machine is in control, and the human is used as a resource to achieve the machine's goals. Through a systematic literature review and thematic analysis, we present an interaction framework for understanding human-machine relationships in NLP. In the framework, we propose four types of human-machine interactions: Human-Teacher and Machine-Learner, Machine-Leading, Human-Leading, and Human-Machine Collaborators. Our analysis shows that the type of interaction is not fixed but can change across tasks as the relationship between the human and the machine develops. We also discuss the implications of this framework for the future of NLP and human-machine relationships.",
}
% __index_level_0__ = 28,325
@inproceedings{das-paik-2022-resilience,
    title = "Resilience of Named Entity Recognition Models under Adversarial Attack",
    author = "Das, Sudeshna and Paik, Jiaul",
    editor = "Bartolo, Max and Kirk, Hannah and Rodriguez, Pedro and Margatina, Katerina and Thrush, Tristan and Jia, Robin and Stenetorp, Pontus and Williams, Adina and Kiela, Douwe",
    booktitle = "Proceedings of the First Workshop on Dynamic Adversarial Data Collection",
    month = jul,
    year = "2022",
    address = "Seattle, WA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dadc-1.1/",
    doi = "10.18653/v1/2022.dadc-1.1",
    pages = "1--6",
    abstract = "Named entity recognition (NER) is a popular language processing task with wide applications. Progress in NER has been noteworthy, as evidenced by the F1 scores obtained on standard datasets. In practice, however, the end-user uses an NER model on their dataset out-of-the-box, on text that may not be pristine. In this paper we present four model-agnostic adversarial attacks to gauge the resilience of NER models in such scenarios. Our experiments on four state-of-the-art NER methods with five English datasets suggest that the NER models are over-reliant on case information and do not utilise contextual information well. As such, they are highly susceptible to adversarial attacks based on these features.",
}
% __index_level_0__ = 28,327
@inproceedings{ding-etal-2022-posthoc,
    title = "Posthoc Verification and the Fallibility of the Ground Truth",
    author = "Ding, Yifan and Botzer, Nicholas and Weninger, Tim",
    editor = "Bartolo, Max and Kirk, Hannah and Rodriguez, Pedro and Margatina, Katerina and Thrush, Tristan and Jia, Robin and Stenetorp, Pontus and Williams, Adina and Kiela, Douwe",
    booktitle = "Proceedings of the First Workshop on Dynamic Adversarial Data Collection",
    month = jul,
    year = "2022",
    address = "Seattle, WA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dadc-1.3/",
    doi = "10.18653/v1/2022.dadc-1.3",
    pages = "23--29",
    abstract = "Classifiers commonly make use of pre-annotated datasets, wherein a model is evaluated by pre-defined metrics on a held-out test set typically made of human-annotated labels. Metrics used in these evaluations are tied to the availability of well-defined ground truth labels, and these metrics typically do not allow for inexact matches. These noisy ground truth labels and strict evaluation metrics may compromise the validity and realism of evaluation results. In the present work, we conduct a systematic label verification experiment on the entity linking (EL) task. Specifically, we ask annotators to verify the correctness of annotations after the fact (i.e., posthoc). Compared to pre-annotation evaluation, state-of-the-art EL models performed extremely well according to the posthoc evaluation methodology. Surprisingly, we find predictions from EL models had a similar or higher verification rate than the ground truth. We conclude with a discussion on these findings and recommendations for future evaluations. The source code, raw results, and evaluation scripts are publicly available via the MIT license at \url{https://github.com/yifding/e2e_EL_evaluate}",
}
% __index_level_0__ = 28,329
@inproceedings{li-michael-2022-overconfidence,
    title = "Overconfidence in the Face of Ambiguity with Adversarial Data",
    author = "Li, Margaret and Michael, Julian",
    editor = "Bartolo, Max and Kirk, Hannah and Rodriguez, Pedro and Margatina, Katerina and Thrush, Tristan and Jia, Robin and Stenetorp, Pontus and Williams, Adina and Kiela, Douwe",
    booktitle = "Proceedings of the First Workshop on Dynamic Adversarial Data Collection",
    month = jul,
    year = "2022",
    address = "Seattle, WA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dadc-1.4/",
    doi = "10.18653/v1/2022.dadc-1.4",
    pages = "30--40",
    abstract = "Adversarial data collection has shown promise as a method for building models which are more robust to the spurious correlations that generally appear in naturalistic data. However, adversarially-collected data may itself be subject to biases, particularly with regard to ambiguous or arguable labeling judgments. Searching for examples where an annotator disagrees with a model might over-sample ambiguous inputs, and filtering the results for high inter-annotator agreement may under-sample them. In either case, training a model on such data may produce predictable and unwanted biases. In this work, we investigate whether models trained on adversarially-collected data are miscalibrated with respect to the ambiguity of their inputs. Using Natural Language Inference models as a testbed, we find no clear difference in accuracy between naturalistically and adversarially trained models, but our model trained only on adversarially-sourced data is considerably more overconfident of its predictions and demonstrates worse calibration, especially on ambiguous inputs. This effect is mitigated, however, when naturalistic and adversarial training data are combined.",
}
% __index_level_0__ = 28,330
@inproceedings{kovatchev-etal-2022-longhorns,
    title = "longhorns at {DADC} 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks.",
    author = "Kovatchev, Venelin and Chatterjee, Trina and Govindarajan, Venkata S and Chen, Jifan and Choi, Eunsol and Chronis, Gabriella and Das, Anubrata and Erk, Katrin and Lease, Matthew and Li, Junyi Jessy and Wu, Yating and Mahowald, Kyle",
    editor = "Bartolo, Max and Kirk, Hannah and Rodriguez, Pedro and Margatina, Katerina and Thrush, Tristan and Jia, Robin and Stenetorp, Pontus and Williams, Adina and Kiela, Douwe",
    booktitle = "Proceedings of the First Workshop on Dynamic Adversarial Data Collection",
    month = jul,
    year = "2022",
    address = "Seattle, WA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dadc-1.5/",
    doi = "10.18653/v1/2022.dadc-1.5",
    pages = "41--52",
    abstract = "Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team {\textquotedblleft}longhorns{\textquotedblright} on Task 1 of the The First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an Extractive Question Answering task. Our team finished first (pending validation), with a model error rate of 62{\%}. We advocate for a systematic, linguistically informed approach to formulating adversarial questions, and we describe the results of our pilot experiments, as well as our official submission.",
}
% __index_level_0__ = 28,331
@inproceedings{romero-diaz-etal-2022-collecting,
    title = "Collecting high-quality adversarial data for machine reading comprehension tasks with humans and models in the loop",
    author = "Romero Diaz, Damian Y. and Anio{\l}, Magdalena and Culnan, John",
    editor = "Bartolo, Max and Kirk, Hannah and Rodriguez, Pedro and Margatina, Katerina and Thrush, Tristan and Jia, Robin and Stenetorp, Pontus and Williams, Adina and Kiela, Douwe",
    booktitle = "Proceedings of the First Workshop on Dynamic Adversarial Data Collection",
    month = jul,
    year = "2022",
    address = "Seattle, WA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dadc-1.6/",
    doi = "10.18653/v1/2022.dadc-1.6",
    pages = "53--60",
    abstract = "We present our experience as annotators in the creation of high-quality, adversarial machine-reading-comprehension data for extractive QA for Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC). DADC is an emergent data collection paradigm with both models and humans in the loop. We set up a quasi-experimental annotation design and perform quantitative analyses across groups with different numbers of annotators focusing on successful adversarial attacks, cost analysis, and annotator confidence correlation. We further perform a qualitative analysis of our perceived difficulty of the task given the different topics of the passages in our dataset and conclude with recommendations and suggestions that might be of value to people working on future DADC tasks and related annotation interfaces.",
}
% __index_level_0__ = 28,332
@inproceedings{phang-etal-2022-adversarially,
    title = "Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair",
    author = "Phang, Jason and Chen, Angelica and Huang, William and Bowman, Samuel R.",
    editor = "Bartolo, Max and Kirk, Hannah and Rodriguez, Pedro and Margatina, Katerina and Thrush, Tristan and Jia, Robin and Stenetorp, Pontus and Williams, Adina and Kiela, Douwe",
    booktitle = "Proceedings of the First Workshop on Dynamic Adversarial Data Collection",
    month = jul,
    year = "2022",
    address = "Seattle, WA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.dadc-1.8/",
    doi = "10.18653/v1/2022.dadc-1.8",
    pages = "62--62",
    abstract = "Large language models increasingly saturate existing task benchmarks, in some cases outperforming humans, leaving little headroom with which to measure further progress. Adversarial dataset creation, which builds datasets using examples that a target system outputs incorrect predictions for, has been proposed as a strategy to construct more challenging datasets, avoiding the more serious challenge of building more precise benchmarks by conventional means. In this work, we study the impact of applying three common approaches for adversarial dataset creation: (1) filtering out easy examples (AFLite), (2) perturbing examples (TextFooler), and (3) model-in-the-loop data collection (ANLI and AdversarialQA), across 18 different adversary models. We find that all three methods can produce more challenging datasets, with stronger adversary models lowering the performance of evaluated models more. However, the resulting ranking of the evaluated models can also be unstable and highly sensitive to the choice of adversary model. Moreover, we find that AFLite oversamples examples with low annotator agreement, meaning that model comparisons hinge on the examples that are most contentious for humans. We recommend that researchers tread carefully when using adversarial methods for building evaluation datasets.",
}
% __index_level_0__ = 28,334
@inproceedings{aglionby-teufel-2022-identifying,
    title = "Identifying relevant common sense information in knowledge graphs",
    author = "Aglionby, Guy and Teufel, Simone",
    editor = "Bosselut, Antoine and Li, Xiang and Lin, Bill Yuchen and Shwartz, Vered and Majumder, Bodhisattwa Prasad and Lal, Yash Kumar and Rudinger, Rachel and Ren, Xiang and Tandon, Niket and Zouhar, Vil{\'e}m",
    booktitle = "Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.csrr-1.1/",
    doi = "10.18653/v1/2022.csrr-1.1",
    pages = "1--7",
    abstract = "Knowledge graphs are often used to store common sense information that is useful for various tasks. However, the extraction of contextually-relevant knowledge is an unsolved problem, and current approaches are relatively simple. Here we introduce a triple selection method based on a ranking model and find that it improves question answering accuracy over existing methods. We additionally investigate methods to ensure that extracted triples form a connected graph. Graph connectivity is important for model interpretability, as paths are frequently used as explanations for the reasoning that connects question and answer.",
}
% __index_level_0__ = 28,336
@inproceedings{koto-etal-2022-cloze,
    title = "Cloze Evaluation for Deeper Understanding of Commonsense Stories in {I}ndonesian",
    author = "Koto, Fajri and Baldwin, Timothy and Lau, Jey Han",
    editor = "Bosselut, Antoine and Li, Xiang and Lin, Bill Yuchen and Shwartz, Vered and Majumder, Bodhisattwa Prasad and Lal, Yash Kumar and Rudinger, Rachel and Ren, Xiang and Tandon, Niket and Zouhar, Vil{\'e}m",
    booktitle = "Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.csrr-1.2/",
    doi = "10.18653/v1/2022.csrr-1.2",
    pages = "8--16",
    abstract = "Story comprehension that involves complex causal and temporal relations is a critical task in NLP, but previous studies have focused predominantly on English, leaving open the question of how the findings generalize to other languages, such as Indonesian. In this paper, we follow the Story Cloze Test framework of Mostafazadeh et al. (2016) in evaluating story understanding in Indonesian, by constructing a four-sentence story with one correct ending and one incorrect ending. To investigate commonsense knowledge acquisition in language models, we experimented with: (1) a classification task to predict the correct ending; and (2) a generation task to complete the story with a single sentence. We investigate these tasks in two settings: (i) monolingual training and (ii) zero-shot cross-lingual transfer between Indonesian and English.",
}
% __index_level_0__ = 28,337
@inproceedings{cong-2022-psycholinguistic,
    title = "Psycholinguistic Diagnosis of Language Models' Commonsense Reasoning",
    author = "Cong, Yan",
    editor = "Bosselut, Antoine and Li, Xiang and Lin, Bill Yuchen and Shwartz, Vered and Majumder, Bodhisattwa Prasad and Lal, Yash Kumar and Rudinger, Rachel and Ren, Xiang and Tandon, Niket and Zouhar, Vil{\'e}m",
    booktitle = "Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.csrr-1.3/",
    doi = "10.18653/v1/2022.csrr-1.3",
    pages = "17--22",
    abstract = "Neural language models have attracted a lot of attention in the past few years. More and more researchers are getting intrigued by how language models encode commonsense, specifically what kind of commonsense they understand, and why they do. This paper analyzed neural language models' understanding of commonsense pragmatics (i.e., implied meanings) through human behavioral and neurophysiological data. These psycholinguistic tests are designed to draw conclusions based on predictive responses in context, making them very well suited to test word-prediction models such as BERT in natural settings. They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses. This paper adopted psycholinguistic datasets to probe language models' commonsense reasoning. Findings suggest that GPT-3's performance was mostly at chance in the psycholinguistic tasks. We also showed that DistillBERT had some understanding of the (implied) intent that's shared among most people. Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions. Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning.",
}
% __index_level_0__ = 28,338
@inproceedings{wan-etal-2022-bridging,
    title = "Bridging the Gap between Recognition-level Pre-training and Commonsensical Vision-language Tasks",
    author = "Wan, Yue and Ma, Yueen and You, Haoxuan and Wang, Zhecan and Chang, Shih-Fu",
    editor = "Bosselut, Antoine and Li, Xiang and Lin, Bill Yuchen and Shwartz, Vered and Majumder, Bodhisattwa Prasad and Lal, Yash Kumar and Rudinger, Rachel and Ren, Xiang and Tandon, Niket and Zouhar, Vil{\'e}m",
    booktitle = "Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.csrr-1.4/",
    doi = "10.18653/v1/2022.csrr-1.4",
    pages = "23--35",
    abstract = "Large-scale visual-linguistic pre-training aims to capture the generic representations from multimodal features, which are essential for downstream vision-language tasks. Existing methods mostly focus on learning the semantic connections between visual objects and linguistic content, which tend to be recognition-level information and may not be sufficient for commonsensical reasoning tasks like VCR. In this paper, we propose a novel commonsensical vision-language pre-training framework to bridge the gap. We first augment the conventional image-caption pre-training datasets with commonsense inferences from a visual-linguistic GPT-2. To pre-train models on image, caption and commonsense inferences together, we propose two new tasks: masked commonsense modeling (MCM) and commonsense type prediction (CTP). To reduce the shortcut effect between captions and commonsense inferences, we further introduce the domain-wise adaptive masking that dynamically adjusts the masking ratio. Experimental results on downstream tasks, VCR and VQA, show the improvement of our pre-training strategy over previous methods. Human evaluation also validates the relevance, informativeness, and diversity of the generated commonsense inferences. Overall, we demonstrate the potential of incorporating commonsense knowledge into the conventional recognition-level visual-linguistic pre-training.",
}
% __index_level_0__ = 28,339
@inproceedings{nguyen-razniewski-2022-materialized,
    title = "Materialized Knowledge Bases from Commonsense Transformers",
    author = "Nguyen, Tuan-Phong and Razniewski, Simon",
    editor = "Bosselut, Antoine and Li, Xiang and Lin, Bill Yuchen and Shwartz, Vered and Majumder, Bodhisattwa Prasad and Lal, Yash Kumar and Rudinger, Rachel and Ren, Xiang and Tandon, Niket and Zouhar, Vil{\'e}m",
    booktitle = "Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.csrr-1.5/",
    doi = "10.18653/v1/2022.csrr-1.5",
    pages = "36--42",
    abstract = "Starting from the COMET methodology by Bosselut et al. (2019), generating commonsense knowledge directly from pre-trained language models has recently received significant attention. Surprisingly, up to now no materialized resource of commonsense knowledge generated this way is publicly available. This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall. Furthermore, we identify common problem cases, and outline use cases enabled by materialized resources. We posit that the availability of these resources is important for the advancement of the field, as it enables an off-the-shelf-use of the resulting knowledge, as well as further analyses on its strengths and weaknesses.",
}
% __index_level_0__ = 28,340
@inproceedings{hosseini-etal-2022-knowledge,
    title = "Knowledge-Augmented Language Models for Cause-Effect Relation Classification",
    author = "Hosseini, Pedram and Broniatowski, David A. and Diab, Mona",
    editor = "Bosselut, Antoine and Li, Xiang and Lin, Bill Yuchen and Shwartz, Vered and Majumder, Bodhisattwa Prasad and Lal, Yash Kumar and Rudinger, Rachel and Ren, Xiang and Tandon, Niket and Zouhar, Vil{\'e}m",
    booktitle = "Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.csrr-1.6/",
    doi = "10.18653/v1/2022.csrr-1.6",
    pages = "43--48",
    abstract = "Previous studies have shown the efficacy of knowledge augmentation methods in pretrained language models. However, these methods behave differently across domains and downstream tasks. In this work, we investigate the augmentation of pretrained language models with knowledge graph data in the cause-effect relation classification and commonsense causal reasoning tasks. After automatically verbalizing triples in ATOMIC2020, a wide coverage commonsense reasoning knowledge graph, we continually pretrain BERT and evaluate the resulting model on cause-effect pair classification and answering commonsense causal reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, and a Temporal and Causal Reasoning (TCR) dataset, without additional improvement in model architecture or using quality-enhanced data for fine-tuning.",
}
% __index_level_0__ = 28,341
@inproceedings{rajagopal-etal-2022-curie,
    title = "{CURIE}: An Iterative Querying Approach for Reasoning About Situations",
    author = "Rajagopal, Dheeraj and Madaan, Aman and Tandon, Niket and Yang, Yiming and Prabhumoye, Shrimai and Ravichander, Abhilasha and Clark, Peter and Hovy, Eduard H",
    editor = "Bosselut, Antoine and Li, Xiang and Lin, Bill Yuchen and Shwartz, Vered and Majumder, Bodhisattwa Prasad and Lal, Yash Kumar and Rudinger, Rachel and Ren, Xiang and Tandon, Niket and Zouhar, Vil{\'e}m",
    booktitle = "Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.csrr-1.7/",
    doi = "10.18653/v1/2022.csrr-1.7",
    pages = "49--63",
    abstract = "Predicting the effects of unexpected situations is an important reasoning task, e.g., would cloudy skies help or hinder plant growth? Given a context, the goal of such situational reasoning is to elicit the consequences of a new situation (st) that arises in that context. We propose CURIE, a method to iteratively build a graph of relevant consequences explicitly in a structured situational graph (st graph) using natural language queries over a finetuned language model. Across multiple domains, CURIE generates st graphs that humans find relevant and meaningful in eliciting the consequences of a new situation (75{\%} of the graphs were judged correct by humans). We present a case study of a situation reasoning end task (WIQA-QA), where simply augmenting their input with st graphs improves accuracy by 3 points. We show that these improvements mainly come from a hard subset of the data, that requires background knowledge and multi-hop reasoning.",
}
% __index_level_0__ = 28,342
@inproceedings{polignano-etal-2022-nlp,
    title = "An {NLP} Approach for the Analysis of Global Reporting Initiative Indexes from Corporate Sustainability Reports",
    author = "Polignano, Marco and Bellantuono, Nicola and Lagrasta, Francesco Paolo and Caputo, Sergio and Pontrandolfo, Pierpaolo and Semeraro, Giovanni",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.1/",
    pages = "1--8",
    abstract = "Sustainability reporting has become an annual requirement in many countries and for certain types of companies. Sustainability reports inform stakeholders about companies' commitment to sustainable development and their economic, social, and environmental sustainability practices. However, the fact that norms and standards allow a certain discretion to be adopted by drafting organizations makes such reports hardly comparable in terms of layout, disclosures, key performance indicators (KPIs), and so on. In this work, we present a system based on natural language processing and information extraction techniques to retrieve relevant information from sustainability reports, compliant with the Global Reporting Initiative Standards, written in Italian and English. Specifically, the system is able to identify references to the various sustainability topics discussed by the reports: on which page of the document those references have been found, the context of each reference, and whether it is mentioned positively or negatively. The output of the system was then evaluated against a ground truth obtained through a manual annotation process on 134 reports. Experimental outcomes highlight the affordability of the approach for improving sustainability disclosures, accessibility, and transparency, thus empowering stakeholders to conduct further analysis and considerations.",
}
% __index_level_0__ = 28,344
@inproceedings{purver-etal-2022-tracking,
    title = "Tracking Changes in {ESG} Representation: Initial Investigations in {UK} Annual Reports",
    author = "Purver, Matthew and Martinc, Matej and Ichev, Riste and Lon{\v{c}}arski, Igor and Sitar {\v{S}}u{\v{s}}tar, Katarina and Valentin{\v{c}}i{\v{c}}, Aljo{\v{s}}a and Pollak, Senja",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.2/",
    pages = "9--14",
    abstract = "We describe initial work into analysing the language used around environmental, social and governance (ESG) issues in UK company annual reports. We collect a dataset of annual reports from UK FTSE350 companies over the years 2012-2019; separately, we define a categorized list of core ESG terms (single words and multi-word expressions) by combining existing lists with manual annotation. We then show that this list can be used to analyse the changes in ESG language in the dataset over time, via a combination of language modelling and distributional modelling via contextual word embeddings. Initial findings show that while ESG discussion in annual reports is becoming significantly more likely over time, the increase varies with category and with individual terms, and that some terms show noticeable changes in usage.",
}
% __index_level_0__ = 28,345
@inproceedings{chen-xu-2022-corpus,
    title = "A Corpus-based Study of Corporate Image Represented in Corporate Social Responsibility Report: A Case Study of {C}hina Mobile and Vodafone",
    author = "Chen, Xing and Xu, Liang",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.3/",
    pages = "15--23",
    abstract = "By examination of the high-frequency nouns, verbs, and keywords, the present study probes into the similarities and differences of corporate images represented in Corporate Social Responsibility (CSR) reports of China Mobile and Vodafone. The results suggest that: 1) both China Mobile and Vodafone prefer using some positive words, like improve, support and service to shape a positive, approachable and easy-going corporate image, and an image of prioritizing the environmental sustainability and the well-being of people; 2) CSR reports of China Mobile contain the keywords poverty and alleviation, which means China Mobile is pragmatic, collaborative and active to assume the responsibility for social events; 3) CSR reports of Vodafone contain keywords like privacy, women and global as well as some other countries, which shows Vodafone is enterprising, globalized and attentive to the development of women; 4) these differences might be related to the ideology and social culture of Chinese and British companies. This study may contribute to understanding the function of CSR report and offer helpful implications for broadening the research of corporate image.",
}
% __index_level_0__ = 28,346
@inproceedings{chen-etal-2022-framing,
    title = "Framing Legitimacy in {CSR}: A Corpus of {C}hinese and {A}merican Petroleum Company {CSR} Reports and Preliminary Analysis",
    author = "Chen, Jieyu and Ahrens, Kathleen and Huang, Chu-Ren",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.4/",
    pages = "24--34",
    abstract = "We examine how Chinese and American oil companies use the gain- and loss-framed BUILDING source domain to legitimize their business in Corporate Social Responsibility (CSR) reports. Gain and loss frames can create legitimacy because they can ethically position an issue. We will focus on oil companies in China and the U.S. because different socio-cultural contexts in these two countries can potentially lead to different legitimation strategies in CSR reports, which can shed light on differences in Chinese and American CSR. All of the oil companies in our data are on the Fortune 500 list (2020). The results showed that Chinese oil companies used BUILDING metaphors more frequently than American oil companies. The most frequent keyword in Chinese CSRs {\textquotedblleft}build{\textquotedblright} highlights environmental achievements in compliance with governments' policies. American CSRs often used the metaphorical verb {\textquotedblleft}support{\textquotedblright} to show their alignment with environmental policies and the interests of different stakeholders. The BUILDING source domain was used more often as gain frames in both Chinese and American CSR reports to show how oil companies create benefits for different stakeholders.",
}
% __index_level_0__ = 28,347
@inproceedings{gabryszak-thomas-2022-mobasa,
    title = "{M}ob{ASA}: Corpus for Aspect-based Sentiment Analysis and Social Inclusion in the Mobility Domain",
    author = "Gabryszak, Aleksandra and Thomas, Philippe",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.5/",
    pages = "35--39",
    abstract = "In this paper we show how aspect-based sentiment analysis might help public transport companies to improve their social responsibility for accessible travel. We present MobASA: a novel German-language corpus of tweets annotated with their relevance for public transportation, and with sentiment towards aspects related to barrier-free travel. We identified and labeled topics important for passengers limited in their mobility due to disability, age, or when travelling with young children. The data can be used to identify hurdles and improve travel planning for vulnerable passengers, as well as to monitor a perception of transportation businesses regarding the social inclusion of all passengers. The data is publicly available under: \url{https://github.com/DFKI-NLP/sim3s-corpus}",
}
% __index_level_0__ = 28,348
@inproceedings{pilankar-etal-2022-detecting,
    title = "Detecting Violation of Human Rights via Social Media",
    author = "Pilankar, Yash and Haque, Rejwanul and Hasanuzzaman, Mohammed and Stynes, Paul and Pathak, Pramod",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.6/",
    pages = "40--45",
    abstract = "Social media is not just meant for entertainment; it provides platforms for sharing information, news, facts and events. In the digital age, activists and numerous users are seen to be vocal regarding human rights and their violations in social media. However, their voices do not often reach the targeted audience and the concerned human rights organizations. In this work, we aimed at detecting factual posts in social media about violation of human rights in any part of the world. The end product of this research can be seen as a useful asset for different peacekeeping organizations who could exploit it to monitor real-time circumstances about any incident in relation to violation of human rights. We chose one of the popular micro-blogging websites, Twitter, for our investigation. We used supervised learning algorithms in order to build human rights violation identification (HRVI) models which are able to identify Tweets in relation to incidents of human rights violation. For this, we had to manually create a dataset, which is one of the contributions of this research. We found that our classification models that were trained on this gold-standard dataset performed excellently in classifying factual Tweets about human rights violation, achieving an accuracy of up to 93{\%} on the hold-out test set.",
}
% __index_level_0__ = 28,349
@inproceedings{lu-etal-2022-inclusion,
    title = "Inclusion in {CSR} Reports: The Lens from a Data-Driven Machine Learning Model",
    author = "Lu, Lu and Gu, Jinghang and Huang, Chu-Ren",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.7/",
    pages = "46--51",
    abstract = "Inclusion, as one of the foundations in the diversity, equity, and inclusion initiative, concerns the degree of being treated as an ingroup member in a workplace. Despite its importance in a corporation's ecosystem, inclusion strategies and their performance are not adequately addressed in corporate social responsibility (CSR) and CSR reporting. This study proposes a machine learning and big data-based model to examine inclusion through the use of stereotype content in actual language use. The distribution of the stereotype content in general corpora of a given society is utilized as a baseline, with which corporate texts are compared. This study not only proposes a model to identify and classify inclusion in language use, but also provides insights to measure and track progress by including inclusion in CSR reports as a strategy to build an inclusive corporate team.",
}
% __index_level_0__ = 28,350
@inproceedings{auti-etal-2022-towards,
    title = "Towards Classification of Legal Pharmaceutical Text using {GAN}-{BERT}",
    author = "Auti, Tapan and Sarkar, Rajdeep and Stearns, Bernardo and Ojha, Atul Kr. and Paul, Arindam and Comerford, Michaela and Megaro, Jay and Mariano, John and Herard, Vall and McCrae, John P.",
    editor = "Wan, Mingyu and Huang, Chu-Ren",
    booktitle = "Proceedings of the First Computing Social Responsibility Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.csrnlp-1.8/",
    pages = "52--57",
    abstract = "Pharmaceutical text classification is an important area of research for commercial and research institutions working in the pharmaceutical domain. Addressing this task is challenging due to the need of expert verified labelled data which can be expensive and time consuming to obtain. Towards this end, we leverage predictive coding methods for the task as they have been shown to generalise well for sentence classification. Specifically, we utilise GAN-BERT architecture to classify pharmaceutical texts. To capture the domain specificity, we propose to utilise the BioBERT model as our BERT model in the GAN-BERT framework. We conduct extensive evaluation to show the efficacy of our approach over baselines on multiple metrics.",
}
% __index_level_0__ = 28,351
@inproceedings{revi-etal-2022-idn,
    title = "{IDN}-Sum: A New Dataset for Interactive Digital Narrative Extractive Text Summarisation",
    author = "Revi, Ashwathy T. and Middleton, Stuart E. and Millard, David E.",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.1/",
    pages = "1--12",
    abstract = "Summarizing Interactive Digital Narratives (IDN) presents some unique challenges to existing text summarization models especially around capturing interactive elements in addition to important plot points. In this paper, we describe the first IDN dataset (IDN-Sum) designed specifically for training and testing IDN text summarization algorithms. Our dataset is generated using random playthroughs of 8 IDN episodes, taken from 2 different IDN games, and consists of 10,000 documents. Playthrough documents are annotated through automatic alignment with fan-sourced summaries using a commonly used alignment algorithm. We also report and discuss results from experiments applying common baseline extractive text summarization algorithms to this dataset. Qualitative analysis of the results reveals shortcomings in common annotation approaches and evaluation methods when applied to narrative and interactive narrative datasets. The dataset is released as open source for future researchers to train and test their own approaches for IDN text.",
}
% __index_level_0__ = 28,353
@inproceedings{chatterjee-etal-2022-summarization,
    title = "Summarization of Long Input Texts Using Multi-Layer Neural Network",
    author = "Chatterjee, Niladri and Khatri, Aadyant and Agarwal, Raksha",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.2/",
    pages = "13--18",
    abstract = "This paper describes the architecture of a novel Multi-Layer Long Text Summarizer (MLLTS) system proposed for the task of creative writing summarization. Typically, such writings are very long, often spanning over 100 pages. Summarizers available online are either not equipped enough to handle long texts, or even if they are able to generate the summary, the quality is poor. The proposed MLLTS system handles the difficulty by splitting the text into several parts. Each part is then subjected to different existing summarizers. A multilayer network is constructed by establishing linkages between the different parts. During training phases, several hyperparameters are fine-tuned. The system achieved very good ROUGE scores on the test data supplied for the contest.",
}
% __index_level_0__ = 28,354
@inproceedings{kashyap-2022-coling,
    title = "{COLING} 2022 Shared Task: {LED} Finetuning and Recursive Summary Generation for Automatic Summarization of Chapters from Novels",
    author = "Kashyap, Prerna",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.3/",
    pages = "19--23",
    abstract = "We present the results of the Workshop on Automatic Summarization for Creative Writing 2022 Shared Task on summarization of chapters from novels. In this task, we finetune a pretrained transformer model for long documents called LongformerEncoderDecoder which supports seq2seq tasks for long inputs which can be up to 16k tokens in length. We use the Booksum dataset for longform narrative summarization for training and validation, which maps chapters from novels, plays and stories to highly abstractive human written summaries. We use a summary of summaries approach to generate the final summaries for the blind test set, in which we recursively divide the text into paragraphs, summarize them, concatenate all resultant summaries and repeat this process until either a specified summary length is reached or there is no significant change in summary length in consecutive iterations. Our best model achieves a ROUGE-1 F-1 score of 29.75, a ROUGE-2 F-1 score of 7.89 and a BERT F-1 score of 54.10 on the shared task blind test dataset.",
}
% __index_level_0__ = 28,355
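
The recursive "summary of summaries" procedure described in this abstract can be sketched in a few lines. The sketch below is illustrative only: `summarize` stands in for any seq2seq summarizer (e.g., a fine-tuned LED), and the paragraph splitting, target length, and shrinkage tolerance are assumed values, not the author's exact settings.

```python
# Illustrative sketch of a recursive "summary of summaries" loop, assuming
# `summarize` is any text-to-text summarizer. Thresholds are placeholders.
def recursive_summary(text: str, summarize, target_chars: int = 2000,
                      min_shrink: float = 0.95, max_rounds: int = 10) -> str:
    for _ in range(max_rounds):
        paragraphs = [p for p in text.split("\n\n") if p.strip()]
        condensed = "\n\n".join(summarize(p) for p in paragraphs)
        # Stop once the summary is short enough, or when an iteration no
        # longer shrinks the text significantly.
        if len(condensed) <= target_chars or len(condensed) > min_shrink * len(text):
            return condensed
        text = condensed
    return text
```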
@inproceedings{kumar-rosa-2022-team,
    title = "{TEAM} {UFAL} @ {C}reative{S}umm 2022: {BART} and {S}am{S}um based few-shot approach for creative Summarization",
    author = "Kumar, Rishu and Rosa, Rudolf",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.4/",
    pages = "24--28",
    abstract = "This system description paper details TEAM UFAL's approach for the SummScreen, TVMegasite subtask of the CreativeSumm shared task. The subtask deals with creating summaries for dialogues from TV soap operas. We utilized a BART-based pre-trained model fine-tuned on the SamSum dialogue summarization dataset. A few examples from the AutoMin dataset and the dataset provided by the organizers were also inserted into the data as a few-shot learning objective. The additional data was manually broken into chunks based on different boundaries in the summary and the dialogue file. For inference we chose a similar strategy to the top-performing team at AutoMin 2021, where the data is split into chunks, either on [SCENE{\_}CHANGE] or on exceeding a pre-defined token length, to accommodate the maximum tokens possible in the pre-trained model for one example. The final training strategy was chosen based on how natural the responses looked rather than on how well the model performed on an automated evaluation metric such as ROUGE.",
}
% __index_level_0__ = 28,356
@inproceedings{kees-etal-2022-long,
    title = "Long Input Dialogue Summarization with Sketch Supervision for Summarization of Primetime Television Transcripts",
    author = "Kees, Nataliia and Nguyen, Thien and Eder, Tobias and Groh, Georg",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.5/",
    pages = "29--35",
    abstract = "This paper presents our entry to the CreativeSumm 2022 shared task. Specifically tackling the problem of prime-time television screenplay summarization based on the SummScreen Forever Dreaming dataset. Our approach utilizes extended Longformers combined with sketch supervision including categories specifically for scene descriptions. Our system was able to produce the shortest summaries out of all submissions. While some problems with factual consistency still remain, the system was scoring highest among competitors in the ROUGE and BERTScore evaluation categories.",
}
% __index_level_0__ = 28,357
@inproceedings{hua-etal-2022-amrtvsumm,
    title = "{AMRTVS}umm: {AMR}-augmented Hierarchical Network for {TV} Transcript Summarization",
    author = "Hua, Yilun and Deng, Zhaoyuan and Xu, Zhijie",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.6/",
    pages = "36--43",
    abstract = "This paper describes our AMRTVSumm system for the SummScreen datasets in the Automatic Summarization for Creative Writing shared task (Creative-Summ 2022). In order to capture the complicated entity interactions and dialogue structures in transcripts of TV series, we introduce a new Abstract Meaning Representation (AMR) (Banarescu et al., 2013), particularly designed to represent individual scenes in an episode. We also propose a new cross-level cross-attention mechanism to incorporate these scene AMRs into a hierarchical encoder-decoder baseline. On both the ForeverDreaming and TVMegaSite datasets of SummScreen, our system consistently outperforms the hierarchical transformer baseline. Compared with the state-of-the-art DialogLM (Zhong et al., 2021), our system still has a lower performance primarily because it is pretrained only on out-of-domain news data, unlike DialogLM, which uses extensive in-domain pretraining on dialogue and TV show data. Overall, our work suggests a promising direction to capture complicated long dialogue structures through graph representations and the need to combine graph representations with powerful pretrained language models.",
}
% __index_level_0__ = 28,358
@inproceedings{upadhyay-etal-2022-automatic,
    title = "Automatic Summarization for Creative Writing: {BART} based Pipeline Method for Generating Summary of Movie Scripts",
    author = "Upadhyay, Aditya and Bhavsar, Nidhir and Bhatnagar, Aakash and Singh, Muskaan and Motlicek, Petr",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.7/",
    pages = "44--50",
    abstract = "This paper documents our approach for the Creative-Summ 2022 shared task for Automatic Summarization of Creative Writing. For this purpose, we develop an automatic summarization pipeline where we leverage a denoising autoencoder for pretraining sequence-to-sequence models and fine-tune it on a large-scale abstractive screenplay summarization dataset to summarize TV transcripts from primetime shows. Our pipeline divides the input transcript into smaller conversational blocks, removes redundant text, summarises the conversational blocks, obtains the block-wise summaries, cleans, structures, and then integrates the summaries to create the meeting minutes. Our proposed system achieves some of the best scores across multiple metrics (lexical, semantic) in the Creative-Summ shared task.",
}
% __index_level_0__ = 28,359
@inproceedings{kim-etal-2022-creativesumm,
    title = "The {C}reative{S}umm 2022 Shared Task: A Two-Stage Summarization Model using Scene Attributes",
    author = "Kim, Eunchong and Yoo, Taewoo and Cho, Gunhee and Bae, Suyoung and Cheong, Yun-Gyung",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.8/",
    pages = "51--56",
    abstract = "In this paper, we describe our work for the CreativeSumm 2022 Shared Task, Automatic Summarization for Creative Writing. The task is to summarize movie scripts, which is challenging due to their long length and complex format. To tackle this problem, we present a two-stage summarization approach using both abstractive and extractive summarization methods. In addition, we preprocess the script to enhance summarization performance. The results of our experiment demonstrate that the presented approach outperforms baseline models in terms of standard summarization evaluation metrics.",
}
% __index_level_0__ = 28,360
@inproceedings{pu-etal-2022-two,
    title = "Two-Stage Movie Script Summarization: An Efficient Method For Low-Resource Long Document Summarization",
    author = "Liu, Dongqi and Hong, Xudong and Lin, Pin-Jie and Chang, Ernie and Demberg, Vera",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.9/",
    pages = "57--66",
    abstract = "The Creative Summarization Shared Task at COLING 2022 aspires to generate summaries given long-form texts from creative writing. This paper presents the system architecture and the results of our participation in the Scriptbase track that focuses on generating movie plots given movie scripts. The core innovation in our model employs a two-stage hierarchical architecture for movie script summarization. In the first stage, a heuristic extraction method is applied to extract actions and essential dialogues, which reduces the average length of input movie scripts by 66{\%} from about 24K to 8K tokens. In the second stage, a state-of-the-art encoder-decoder model, Longformer-Encoder-Decoder (LED), is trained with effective fine-tuning methods, BitFit and NoisyTune. Evaluations on the unseen test set indicate that our system outperforms both zero-shot LED baselines as well as other participants on various automatic metrics and ranks 1st in the Scriptbase track.",
}
% __index_level_0__ = 28,361
@inproceedings{agarwal-etal-2022-creativesumm,
    title = "{CREATIVESUMM}: Shared Task on Automatic Summarization for Creative Writing",
    author = "Agarwal, Divyansh and Fabbri, Alexander R. and Han, Simeng and Kryscinski, Wojciech and Ladhak, Faisal and Li, Bryan and McKeown, Kathleen and Radev, Dragomir and Zhang, Tianyi and Wiseman, Sam",
    editor = "Mckeown, Kathleen",
    booktitle = "Proceedings of The Workshop on Automatic Summarization for Creative Writing",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.creativesumm-1.10/",
    pages = "67--73",
    abstract = "This paper introduces the shared task of summarizing documents in several creative domains, namely literary texts, movie scripts, and television scripts. Summarizing these creative documents requires making complex literary interpretations, as well as understanding non-trivial temporal dependencies in texts containing varied styles of plot development and narrative structure. This poses unique challenges and is yet underexplored for text summarization systems. In this shared task, we introduce four sub-tasks and their corresponding datasets, focusing on summarizing books, movie scripts, primetime television scripts, and daytime soap opera scripts. We detail the process of curating these datasets for the task, as well as the metrics used for the evaluation of the submissions. As part of the CREATIVESUMM workshop at COLING 2022, the shared task attracted 18 submissions in total. We discuss the submissions and the baselines for each sub-task in this paper, along with directions for facilitating future work.",
}
% __index_level_0__ = 28,362
@inproceedings{zhang-etal-2022-quantifying,
    title = "Quantifying Discourse Support for Omitted Pronouns",
    author = "Zhang, Shulin and Li, Jixing and Hale, John",
    editor = "Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo",
    booktitle = "Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.crac-1.1/",
    pages = "1--12",
    abstract = "Pro-drop is commonly seen in many languages, but its discourse motivations have not been well characterized. Inspired by the topic chain theory in Chinese, this study shows how character-verb usage continuity distinguishes dropped pronouns from overt references to story characters. We model the choice to drop vs. not drop as a function of character-verb continuity. The results show that omitted subjects have higher character history-current verb continuity salience than non-omitted subjects. This is consistent with the idea that discourse coherence with a particular topic, such as a story character, indeed facilitates the omission of pronouns in languages and contexts where they are optional.",
}
% __index_level_0__ = 28,364
inproceedings
xia-van-durme-2022-online
Online Neural Coreference Resolution with Rollback
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.2/
Xia, Patrick and Van Durme, Benjamin
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
13--21
Humans process natural language online, whether reading a document or participating in multiparty dialogue. Recent advances in neural coreference resolution have focused on offline approaches that assume the full communication history as input. This is neither realistic nor sufficient if we wish to support dialogue understanding in real-time. We benchmark two existing offline models and highlight their shortcomings in the online setting. We then modify these models to perform online inference and introduce rollback: a short-term mechanism to correct mistakes. We demonstrate across five English datasets the effectiveness of this approach against an offline and a naive online model in terms of latency, final document-level coreference F1, and average running F1. (A toy sketch of the rollback idea follows this record.)
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,365
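The rollback mechanism described in the record above revises recent linking decisions as new mentions arrive. The toy sketch below is our simplification, not the authors' implementation: it re-decides the last few cluster assignments at every step using an arbitrary pairwise scoring function supplied by the caller.

    from typing import Callable, List

    def online_cluster(mentions: List[str],
                       score: Callable[[str, str], float],
                       threshold: float = 0.5,
                       rollback: int = 2) -> List[int]:
        """Assign cluster ids online; re-decide the trailing `rollback` window."""
        assign: List[int] = []
        next_id = 0
        for i in range(len(mentions)):
            assign.append(-1)
            for j in range(max(0, i - rollback), i + 1):
                # best-scoring earlier mention acts as candidate antecedent
                cands = [(score(mentions[k], mentions[j]), assign[k])
                         for k in range(j)]
                best = max(cands, default=(float("-inf"), -1))
                if best[0] >= threshold:
                    assign[j] = best[1]      # (re)link, possibly rolling back
                elif assign[j] == -1:
                    assign[j] = next_id      # open a new cluster
                    next_id += 1
        return assign

    print(online_cluster(["Alice", "she", "Bob", "she"],
                         lambda a, b: 0.9 if b == "she" and a != "she" else 0.0))

In a real system the score function would come from a neural mention-pair model and the rollback window would trade latency against accuracy, as the paper's latency/F1 evaluation suggests.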
inproceedings
kobayashi-malon-2022-analyzing
Analyzing Coreference and Bridging in Product Reviews
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.3/
Kobayashi, Hideo and Malon, Christopher
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
22--30
Product reviews may have complex discourse including coreference and bridging relations to a main product, competing products, and interacting products. Current approaches to aspect-based sentiment analysis (ABSA) and opinion summarization largely ignore this complexity. On the other hand, existing systems for coreference and bridging were trained in a different domain. We collect mention type annotations relevant to coreference and bridging for 498 product reviews. Using these annotations, we show that a state-of-the-art factuality score fails to catch coreference errors in product reviews, and that a state-of-the-art coreference system trained on OntoNotes does not perform nearly as well on product mentions. As our dataset grows, we expect it to help ABSA and opinion summarization systems to avoid entity reference errors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,366
inproceedings
loaiciga-etal-2022-anaphoric
Anaphoric Phenomena in Situated dialog: A First Round of Annotations
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.4/
Lo{\'a}iciga, Sharid and Dobnik, Simon and Schlangen, David
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
31--37
We present a first release of 500 documents from the multimodal corpus Tell-me-more (Ilinykh et al., 2019) annotated with coreference information according to the ARRAU guidelines (Poesio et al., 2021). The corpus consists of images and short texts of five sentences. We describe the annotation process and present the adaptations to the original guidelines in order to account for the challenges of grounding the annotations to the image. 50 documents from the 500 available are annotated by two people and used to estimate inter-annotator agreement (IAA) relying on Krippendorff's alpha.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,367
inproceedings
vadasz-2022-building
Building a Manually Annotated {H}ungarian Coreference Corpus: Workflow and Tools
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.5/
Vad{\'a}sz, No{\'e}mi
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
38--47
This paper presents the complete workflow of building a manually annotated Hungarian corpus, KorKor, with particular reference to anaphora and coreference annotation. All linguistic annotation layers were corrected manually. The corpus is freely available in two formats. The paper gives insight into the process of setting up the workflow and the challenges that have arisen.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,368
inproceedings
maehlum-etal-2022-narc
{NARC} {--} {N}orwegian Anaphora Resolution Corpus
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.6/
M{\ae}hlum, Petter and Haug, Dag and J{\o}rgensen, Tollef and K{\r{a}}sen, Andre and N{\o}klestad, Anders and R{\o}nningstad, Egil and Solberg, Per Erik and Velldal, Erik and {\O}vrelid, Lilja
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
48--60
We present the Norwegian Anaphora Resolution Corpus (NARC), the first publicly available corpus annotated with anaphoric relations between noun phrases for Norwegian. The paper describes the annotated data for 326 documents in Norwegian Bokm{\r{a}}l, together with inter-annotator agreement and discussions of relevant statistics. We also present preliminary modelling results, which are comparable to those reported for existing corpora in other languages, and discuss relevant problems in relation to both the modelling and the annotations themselves.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,369
inproceedings
chai-etal-2022-evaluating
Evaluating Coreference Resolvers on Community-based Question Answering: From Rule-based to State of the Art
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.7/
Chai, Haixia and Moosavi, Nafise Sadat and Gurevych, Iryna and Strube, Michael
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
61--73
Coreference resolution is a key step in natural language understanding. Developments in coreference resolution are mainly focused on improving the performance on standard datasets annotated for coreference resolution. However, coreference resolution is an intermediate step for text understanding and it is not clear how these improvements translate into downstream task performance. In this paper, we perform a thorough investigation into the impact of coreference resolvers in multiple settings of the community-based question answering task, i.e., answer selection with long answers. Our settings cover multiple text domains and encompass several answer selection methods. We first inspect extrinsic evaluation of coreference resolvers on answer selection by using coreference relations to decontextualize individual sentences of candidate answers, and then annotate a subset of answers with coreference information for intrinsic evaluation. The results of our extrinsic evaluation show that while there is a significant difference between the performance of the rule-based system and the state-of-the-art neural model on coreference resolution datasets, we do not observe a considerable difference in their impact on downstream models. Our intrinsic evaluation shows that (i) resolving coreference relations on less-formal text genres is more difficult even for trained annotators, and (ii) the values of linguistically agnostic coreference evaluation metrics do not correlate with the impact on downstream data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,370
inproceedings
ueda-kurohashi-2022-improving
Improving Bridging Reference Resolution using Continuous Essentiality from Crowdsourcing
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.8/
Ueda, Nobuhiro and Kurohashi, Sadao
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
74--87
Bridging reference resolution is the task of finding nouns that complement essential information of another noun. The essentiality varies depending on noun combination and context and has a continuous distribution. Despite the continuous nature of essentiality, existing datasets of bridging reference have only a few coarse labels to represent the essentiality. In this work, we propose a crowdsourcing-based annotation method that considers continuous essentiality. In the crowdsourcing task, we asked workers to select both all nouns with a bridging reference relation and a noun with the highest essentiality among them. Combining these annotations, we can obtain continuous essentiality. Experimental results demonstrated that the constructed dataset improves bridging reference resolution performance. The code is available at \url{https://github.com/nobu-g/bridging-resolution}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,371
inproceedings
de-langhe-etal-2022-investigating
Investigating Cross-Document Event Coreference for {D}utch
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.9/
De Langhe, Loic and De Clercq, Orphee and Hoste, Veronique
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
88--98
In this paper we present baseline results for Event Coreference Resolution (ECR) in Dutch using gold-standard (i.e. non-predicted) event mentions. A newly developed benchmark dataset allows us to properly investigate the possibility of creating ECR systems for both within- and cross-document coreference. We give an overview of the state of the art for ECR in other languages, as well as a detailed overview of existing ECR resources. Afterwards, we provide a comparative report on our own dataset. We apply a significant number of approaches that have been shown to attain good results for English ECR, including feature-based models, monolingual transformer language models and multilingual language models. The best results were obtained using the monolingual BERTje model. Finally, results for all models are thoroughly analysed and visualised, so as to provide insight into the inner workings of ECR and long-distance semantic NLP tasks in general.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,372
inproceedings
kruijt-vossen-2022-role
The Role of Common Ground for Referential Expressions in Social Dialogues
Ogrodniczuk, Maciej and Pradhan, Sameer and Nedoluzhko, Anna and Ng, Vincent and Poesio, Massimo
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-1.10/
Kruijt, Jaap and Vossen, Piek
Proceedings of the Fifth Workshop on Computational Models of Reference, Anaphora and Coreference
99--110
In this paper, we frame the problem of coreference resolution in dialogue as a dynamic social process in which mentions of people previously known and newly introduced are mixed when people know each other well. We restructured an existing dataset for the Friends sitcom as a coreference task that evolves over time, where close friends make reference to other people who are either part of their common ground (inner circle) or not (outer circle). We expect that awareness of common ground is key in social dialogue in order to resolve references to the inner social circle, whereas local contextual information plays a more important role for outer circle mentions. Our analysis of these references confirms that there are differences in naming and introducing these people. We also experimented with the SpanBERT coreference system, with and without fine-tuning, to measure whether preceding discourse contexts matter for resolving inner and outer circle mentions. Our results show that more inner circle mentions lead to a decrease in model performance, and that fine-tuning on preceding contexts reduces false negatives for both inner and outer circle mentions but increases the false positives as well, showing that the models overfit on these contexts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,373
inproceedings
zabokrtsky-etal-2022-findings
Findings of the Shared Task on Multilingual Coreference Resolution
{\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Ogrodniczuk, Maciej
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-mcr.1/
{\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Konop{\'i}k, Miloslav and Nedoluzhko, Anna and Nov{\'a}k, Michal and Ogrodniczuk, Maciej and Popel, Martin and Pra{\v{z}}{\'a}k, Ond{\v{r}}ej and Sido, Jakub and Zeman, Daniel and Zhu, Yilun
Proceedings of the CRAC 2022 Shared Task on Multilingual Coreference Resolution
1--17
This paper presents an overview of the shared task on multilingual coreference resolution associated with the CRAC 2022 workshop. Shared task participants were asked to develop trainable systems capable of identifying mentions and clustering them according to identity coreference. The public edition of CorefUD 1.0, which contains 13 datasets for 10 languages, was used as the source of training and evaluation data. The CoNLL score used in previous coreference-oriented shared tasks was used as the main evaluation metric. There were 8 coreference prediction systems submitted by 5 participating teams; in addition, there was a competitive Transformer-based baseline system provided by the organizers at the beginning of the shared task. The winning system outperformed the baseline by 12 percentage points (in terms of the CoNLL scores averaged across all datasets for individual languages).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,375
inproceedings
saputa-2022-coreference
Coreference Resolution for {P}olish: Improvements within the {CRAC} 2022 Shared Task
{\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Ogrodniczuk, Maciej
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-mcr.2/
Saputa, Karol
Proceedings of the CRAC 2022 Shared Task on Multilingual Coreference Resolution
18--22
The paper presents our system for coreference resolution in Polish. We compare the system with previous work for the Polish language as well as with the multilingual approach in the CRAC 2022 Shared Task on Multilingual Coreference Resolution, thanks to a universal, multilingual data format and evaluation tool. We discuss the accuracy, computational performance, and evaluation approach of the new system, which is a faster, end-to-end solution.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,376
inproceedings
prazak-konopik-2022-end
End-to-end Multilingual Coreference Resolution with Mention Head Prediction
{\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Ogrodniczuk, Maciej
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-mcr.3/
Pra{\v{z}}{\'a}k, Ond{\v{r}}ej and Konopik, Miloslav
Proceedings of the CRAC 2022 Shared Task on Multilingual Coreference Resolution
23--27
This paper describes our approach to the CRAC 2022 Shared Task on Multilingual Coreference Resolution. Our model is based on a state-of-the-art end-to-end coreference resolution system. Apart from joint multilingual training, we improved our results with mention head prediction. We also tried to integrate dependency information into our model. Our system ended up in third place. Moreover, we reached the best performance on two of the 13 datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,377
inproceedings
straka-strakova-2022-ufal
{{\'U}FAL} {C}or{P}ipe at {CRAC} 2022: Effectivity of Multilingual Models for Coreference Resolution
{\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Ogrodniczuk, Maciej
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.crac-mcr.4/
Straka, Milan and Strakov{\'a}, Jana
Proceedings of the CRAC 2022 Shared Task on Multilingual Coreference Resolution
28--37
We describe the winning submission to the CRAC 2022 Shared Task on Multilingual Coreference Resolution. Our system first solves mention detection and then coreference linking on the retrieved spans with an antecedent-maximization approach, and both tasks are fine-tuned jointly with shared Transformer weights. We report results of fine-tuning a wide range of pretrained models. The center of this contribution is the fine-tuned multilingual models: we found that one large multilingual model with a sufficiently large encoder increases performance on all datasets across the board, with the benefit not limited to underrepresented languages or groups of typologically related languages. The source code is available at \url{https://github.com/ufal/crac2022-corpipe}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,378
inproceedings
sharma-etal-2022-findings
Findings of the {CONSTRAINT} 2022 Shared Task on Detecting the Hero, the Villain, and the Victim in Memes
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.1/
Sharma, Shivam and Suresh, Tharun and Kulkarni, Atharva and Mathur, Himanshi and Nakov, Preslav and Akhtar, Md. Shad and Chakraborty, Tanmoy
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
1--11
We present the findings of the shared task at the CONSTRAINT 2022 Workshop: Hero, Villain, and Victim: Dissecting harmful memes for Semantic role labeling of entities. The task aims to delve deeper into the domain of meme comprehension by deciphering the connotations behind the entities present in a meme. In more nuanced terms, the shared task focuses on determining the victimizing, glorifying, and vilifying intentions embedded in meme entities to explicate their connotations. To this end, we curate HVVMemes, a novel meme dataset of about 7000 memes spanning the domains of COVID-19 and US Politics, each containing entities and their associated roles: hero, villain, victim, or none. The shared task attracted 105 participants, but eventually only 6 submissions were made. Most of the successful submissions relied on fine-tuning pre-trained language and multimodal models along with ensembles. The best submission achieved an F1-score of 58.67.
null
null
10.18653/v1/2022.constraint-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,380
inproceedings
zhou-etal-2022-dd
{DD}-{TIG} at Constraint@{ACL}2022: Multimodal Understanding and Reasoning for Role Labeling of Entities in Hateful Memes
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.2/
Zhou, Ziming and Zhao, Han and Dong, Jingjing and Gao, Jun and Liu, Xiaolong
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
12--18
Memes serve as an important tool in online communication, but some hateful memes endanger cyberspace by attacking certain people or subjects. Recent studies address hateful meme detection, while further understanding of the relationships between entities in memes remains unexplored. This paper presents our work at the Constraint@ACL2022 Shared Task: Hero, Villain and Victim: Dissecting harmful memes for semantic role labelling of entities. In particular, we propose an approach utilizing transformer-based multimodal models through a VCR method with data augmentation, continual pretraining, loss re-weighting, and ensemble learning. We describe the models used, the preprocessing steps, and the implementation of the experiments. As a result, our best model achieves a Macro F1-score of 54.707 on the test set of this shared task.
null
null
10.18653/v1/2022.constraint-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,381
inproceedings
fharook-etal-2022-hero
Are you a hero or a villain? A semantic role labelling approach for detecting harmful memes.
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.3/
Fharook, Shaik and Sufyan Ahmed, Syed and Rithika, Gurram and Budde, Sumith Sai and Saumya, Sunil and Biradar, Shankar
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
19--23
Identifying good and evil through representations of victimhood, heroism, and villainy (i.e., role labeling of entities) has recently caught the research community's interest. Because of the growing popularity of memes, the amount of offensive information published on the internet is expanding at an alarming rate. This has generated a greater need to address the issue and to analyze memes for content moderation. Framing is used to show the entities engaged as heroes, villains, victims, or others, so that readers may better anticipate and understand their attitudes and behaviors as characters. Positive phrases are used to characterize heroes, whereas negative terms depict victims and villains, and terms that tend to be neutral are mapped to others. In this paper, we propose two approaches to label the roles of the entities in a meme as hero, villain, victim, or other through Named-Entity Recognition (NER), Sentiment Analysis, etc. With an F1-score of 23.855, our team secured eighth position in the Shared Task @ Constraint 2022.
null
null
10.18653/v1/2022.constraint-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,382
inproceedings
kun-etal-2022-logically
Logically at the Constraint 2022: Multimodal role labelling
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.4/
Kun, Ludovic and Bankoti, Jayesh and Kiskovski, David
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
24--34
This paper describes our system for the Constraint 2022 challenge at ACL 2022, whose goal is to detect which entities are glorified, vilified or victimised within a meme. The task should be done considering the perspective of the meme's author. In our work, the challenge is treated as a multi-class classification task: for a given pair of a meme and an entity, we need to classify whether the entity is being referenced as a Hero, a Villain, a Victim or Other. Our solution ensembles different models based on a unimodal (text-only) model and a multimodal (text + image) model. We conduct several experiments and benchmark different competitive pre-trained transformer and vision models in this work. Our solution, based on an ensembling method, is ranked first on the leaderboard and obtains a macro F1-score of 0.58 on the test set. The code for the experiments and results is available at \url{https://bitbucket.org/logicallydevs/constraint_2022/src/master/}
null
null
10.18653/v1/2022.constraint-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,383
inproceedings
singh-etal-2022-combining
Combining Language Models and Linguistic Information to Label Entities in Memes
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.5/
Singh, Pranaydeep and Maladry, Aaron and Lefever, Els
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
35--42
This paper describes the system we developed for the shared task {\textquoteleft}Hero, Villain and Victim: Dissecting harmful memes for Semantic role labelling of entities{\textquoteright} organised in the framework of the Second Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations (Constraint 2022). We present an ensemble approach combining transformer-based models and linguistic information, such as the presence of irony and implicit sentiment associated with the target named entities. The ensemble system obtains promising classification scores, resulting in a third-place finish in the competition.
null
null
10.18653/v1/2022.constraint-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,384
inproceedings
nandi-etal-2022-detecting
Detecting the Role of an Entity in Harmful Memes: Techniques and their Limitations
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.6/
Nandi, Rabindra Nath and Alam, Firoj and Nakov, Preslav
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
43--54
Harmful or abusive online content has been increasing over time, raising concerns among social media platforms, government agencies, and policymakers. Such harmful or abusive content has a significant negative impact on society: for example, cyberbullying has led to suicides, and COVID-19-related rumors have led to hundreds of deaths. The content that is posted and shared online can be textual, visual, a combination of both, or a meme. In this paper, we provide our study on detecting the roles of entities in harmful memes, which is part of the CONSTRAINT-2022 shared task. We report results for the system we submitted. We further provide a comparative analysis of different experimental settings (i.e., unimodal, multimodal, attention, and augmentation).
null
null
10.18653/v1/2022.constraint-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,385
inproceedings
montariol-etal-2022-fine
Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.7/
Montariol, Syrielle and Simon, {\'E}tienne and Riabi, Arij and Seddah, Djam{\'e}
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
55--65
We propose our solution to the multimodal semantic role labeling task from the CONSTRAINT'22 workshop. The task aims at classifying entities in memes into classes such as {\textquotedblleft}hero{\textquotedblright} and {\textquotedblleft}villain{\textquotedblright}. We use several pre-trained multi-modal models to jointly encode the text and image of the memes, and implement three systems to classify the role of the entities. We propose dynamic sampling strategies to tackle the issue of class imbalance. Finally, we perform a qualitative analysis of the representations of the entities. (A minimal sketch of class-weighted sampling follows this record.)
null
null
10.18653/v1/2022.constraint-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,386
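One simple instance of the sampling strategies mentioned in the record above is class-frequency-weighted oversampling. The sketch below is an assumed setup using PyTorch's WeightedRandomSampler, not the authors' exact code; the label list is a toy example.

    from collections import Counter
    import torch
    from torch.utils.data import WeightedRandomSampler

    labels = ["other", "other", "other", "villain", "hero", "victim"]
    freq = Counter(labels)
    # each example is weighted by the inverse frequency of its class
    weights = torch.tensor([1.0 / freq[y] for y in labels])
    sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                    replacement=True)
    print(list(sampler))  # indices to feed a DataLoader, imbalance-corrected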
inproceedings
sundriyal-etal-2022-document
Document Retrieval and Claim Verification to Mitigate {COVID}-19 Misinformation
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.8/
Sundriyal, Megha and Malhotra, Ganeshan and Akhtar, Md Shad and Sengupta, Shubhashis and Fano, Andrew and Chakraborty, Tanmoy
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
66--74
During the COVID-19 pandemic, the spread of misinformation on online social media has grown exponentially. Unverified bogus claims on these platforms regularly mislead people, leading them to believe in half-baked truths. The current vogue is to employ manual fact-checkers to verify claims to combat this avalanche of misinformation. However, establishing such claims' veracity is becoming increasingly challenging, partly due to the plethora of information available, which is difficult to process manually. Thus, it becomes imperative to verify claims automatically, without human intervention. To cope with this issue, we propose an automated claim verification solution encompassing two steps {--} document retrieval and veracity prediction. For the retrieval module, we employ a hybrid search-based system with BM25 as a base retriever and experiment with recent state-of-the-art transformer-based models for re-ranking. Furthermore, we use a BART-based textual entailment architecture to authenticate the retrieved documents in the later step. We report experimental findings demonstrating that our retrieval module outperforms the best baseline system by 10.32 NDCG@100 points. We present a demonstration to assess the efficacy and impact of our suggested solution. As a byproduct of this study, we present an open-source, easily deployable, and user-friendly Python API that the community can adopt. (A minimal retrieve-then-rerank sketch follows this record.)
null
null
10.18653/v1/2022.constraint-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,387
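The retrieve-then-rerank pipeline described in the record above can be sketched in a few lines; the libraries and model name below (rank_bm25, sentence-transformers, ms-marco-MiniLM-L-6-v2) are our illustrative choices, not necessarily the authors'.

    from rank_bm25 import BM25Okapi
    from sentence_transformers import CrossEncoder

    docs = ["masks reduce viral transmission",
            "no evidence links vaccines to infertility",
            "vitamin c does not cure covid"]
    bm25 = BM25Okapi([d.split() for d in docs])          # base retriever
    claim = "do vaccines affect fertility"
    scores = bm25.get_scores(claim.split())
    top = sorted(range(len(docs)), key=lambda i: -scores[i])[:2]

    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    rerank = reranker.predict([(claim, docs[i]) for i in top])
    best = top[max(range(len(top)), key=lambda j: rerank[j])]
    print(best, docs[best])  # evidence passed on to veracity prediction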
inproceedings
sharif-etal-2022-bad
{M}-{BAD}: A Multilabel Dataset for Detecting Aggressive Texts and Their Targets
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.9/
Sharif, Omar and Hossain, Eftekhar and Hoque, Mohammed Moshiul
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
75--85
Recently, the detection and categorization of undesired (e.g., aggressive, abusive, offensive, hate) content from online platforms has grabbed the attention of researchers because of its detrimental impact on society. Several attempts have been made to mitigate the usage and propagation of such content. However, most past studies were conducted primarily for English, while low-resource languages like Bengali remained out of focus. Therefore, to facilitate research in this arena, this paper introduces a novel multilabel Bengali dataset (named M-BAD) containing 15650 texts for detecting aggressive texts and their targets. Each text of M-BAD went through rigorous two-level annotation. At the primary level, each text is labelled as either aggressive or non-aggressive. At the secondary level, the aggressive texts have been further annotated into five fine-grained target classes: religion, politics, verbal, gender and race. Baseline experiments are carried out with different machine learning (ML), deep learning (DL) and transformer models, where Bangla-BERT acquired the highest weighted $f_1$-score in both the detection (0.92) and target identification (0.83) tasks. Error analysis of the models reveals the difficulty of identifying context-dependent aggression, and this work argues that further research is required to address these issues.
null
null
10.18653/v1/2022.constraint-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,388
inproceedings
choi-etal-2022-fake
How does fake news use a thumbnail? {CLIP}-based Multimodal Detection on the Unrepresentative News Image
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.10/
Choi, Hyewon and Yoon, Yejun and Yoon, Seunghyun and Park, Kunwoo
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
86--94
This study investigates how fake news uses the thumbnail image of a news article. We aim to capture the degree of semantic incongruity between news text and image by using the pretrained CLIP representation. Motivated by the stylistic distinctiveness of fake news text, we examine whether fake news tends to use an image irrelevant to the news content. Results show that fake news tends to have a higher degree of semantic incongruity than general news. We further attempt to detect such image-text incongruity by training classification models on a newly generated dataset. A manual evaluation suggests our method can find news articles whose thumbnail image is semantically irrelevant to the news text with an accuracy of 0.8. We also release a new dataset of image and news text pairs with incongruity labels, facilitating future studies in this direction. (A minimal CLIP-based sketch follows this record.)
null
null
10.18653/v1/2022.constraint-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,389
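A minimal version of the text-thumbnail incongruity measurement described in the record above can be written with an off-the-shelf CLIP model; the checkpoint, the file path and the decision threshold below are illustrative assumptions, not values from the paper.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    name = "openai/clip-vit-base-patch32"
    model = CLIPModel.from_pretrained(name)
    proc = CLIPProcessor.from_pretrained(name)

    image = Image.open("thumbnail.jpg")                  # placeholder path
    headline = "Government announces new vaccination schedule"
    inputs = proc(text=[headline], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sim = (img * txt).sum().item()                       # cosine similarity
    print("incongruent" if sim < 0.2 else "congruent", sim)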
inproceedings
lucas-etal-2022-detecting
Detecting False Claims in Low-Resource Regions: A Case Study of {C}aribbean Islands
Chakraborty, Tanmoy and Akhtar, Md. Shad and Shu, Kai and Bernard, H. Russell and Liakata, Maria and Nakov, Preslav and Srivastava, Aseem
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.constraint-1.11/
Lucas, Jason and Cui, Limeng and Le, Thai and Lee, Dongwon
Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations
95--102
The COVID-19 pandemic has created threats to global health control. Misinformation circulated on social media and news outlets has undermined public trust in governments and health agencies. This problem is further exacerbated in developing countries or low-resource regions, where the news is not accompanied by abundant English fact-checking information. In this paper, we make the first attempt to detect COVID-19 misinformation (in English, Spanish, and Haitian French) circulating in the Caribbean regions, using the fact-checked claims in the US (in English). We started by collecting a dataset of Caribbean real {\&} fake claims. Then we trained several classification and language models on COVID-19 data from the high-resource language regions and transferred the knowledge to the Caribbean claim dataset. The experimental results of this paper reveal the limitations of current fake claim detection in low-resource regions and encourage further research on multilingual detection.
null
null
10.18653/v1/2022.constraint-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,390
inproceedings
nishikawa-etal-2022-multilingual
A Multilingual Bag-of-Entities Model for Zero-Shot Cross-Lingual Text Classification
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.1/
Nishikawa, Sosuke and Yamada, Ikuya and Tsuruoka, Yoshimasa and Echizen, Isao
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
1--12
We present a multilingual bag-of-entities model that effectively boosts the performance of zero-shot cross-lingual text classification by extending a multilingual pre-trained language model (e.g., M-BERT). It leverages the multilingual nature of Wikidata: entities in multiple languages representing the same concept are defined with a unique identifier. This enables entities described in multiple languages to be represented using shared embeddings. A model trained on entity features in a resource-rich language can thus be directly applied to other languages. Our experimental results on cross-lingual topic classification (using the MLDoc and TED-CLDC datasets) and entity typing (using the SHINRA2020-ML dataset) show that the proposed model consistently outperforms state-of-the-art models. (A toy sketch of the bag-of-entities idea follows this record.)
null
null
10.18653/v1/2022.conll-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,392
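The bag-of-entities idea in the record above rests on a single embedding table keyed by language-independent Wikidata identifiers. The toy classifier below (the dimensions, QID vocabulary and classifier head are illustrative assumptions) shows how documents in any language can share those embeddings.

    import torch
    import torch.nn as nn

    qid_vocab = {"Q30": 0, "Q183": 1, "Q7188": 2}  # illustrative Wikidata QIDs

    class BagOfEntitiesClassifier(nn.Module):
        def __init__(self, n_entities: int, dim: int, n_classes: int):
            super().__init__()
            # one shared embedding per QID, mean-pooled per document
            self.entity_emb = nn.EmbeddingBag(n_entities, dim, mode="mean")
            self.head = nn.Linear(dim, n_classes)

        def forward(self, qids: torch.Tensor, offsets: torch.Tensor):
            return self.head(self.entity_emb(qids, offsets))

    model = BagOfEntitiesClassifier(len(qid_vocab), dim=16, n_classes=4)
    # one document whose entity linker found Q30 and Q7188, in any language
    ids = torch.tensor([qid_vocab["Q30"], qid_vocab["Q7188"]])
    print(model(ids, torch.tensor([0])).shape)  # torch.Size([1, 4])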
inproceedings
michaelov-bergen-2022-collateral
Collateral facilitation in humans and language models
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.2/
Michaelov, James and Bergen, Benjamin
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
13--26
Are the predictions of humans and language models affected by similar things? Research suggests that while comprehending language, humans make predictions about upcoming words, with more predictable words being processed more easily. However, evidence also shows that humans display a similar processing advantage for highly anomalous words when these words are semantically related to the preceding context or to the most probable continuation. Using stimuli from 3 psycholinguistic experiments, we find that this is almost always also the case for 8 contemporary transformer language models (BERT, ALBERT, RoBERTa, XLM-R, GPT-2, GPT-Neo, GPT-J, and XGLM). We then discuss the implications of this phenomenon for our understanding of both human language comprehension and the predictions made by language models. (A minimal sketch of the underlying surprisal measurement follows this record.)
null
null
10.18653/v1/2022.conll-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,393
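Comparisons like the one in the record above typically rest on surprisal, the negative log-probability a model assigns to a target word in context. Below is a minimal sketch with GPT-2; the model choice and the boundary-tokenization assumption are ours.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def surprisal(context: str, target: str) -> float:
        # assumes the target (with leading space) tokenizes at the boundary
        ids = tok(context + target, return_tensors="pt").input_ids
        n_ctx = tok(context, return_tensors="pt").input_ids.shape[1]
        with torch.no_grad():
            logp = model(ids).logits.log_softmax(-1)
        # sum -log p over the target's tokens, each predicted from its prefix
        return -sum(logp[0, i - 1, ids[0, i]].item()
                    for i in range(n_ctx, ids.shape[1]))

    print(surprisal("The squirrel stored some nuts in the", " tree"))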
inproceedings
yoder-etal-2022-hate
How Hate Speech Varies by Target Identity: A Computational Analysis
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.3/
Yoder, Michael and Ng, Lynnette and Brown, David West and Carley, Kathleen
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
27--39
This paper investigates how hate speech varies in systematic ways according to the identities it targets. Across multiple hate speech datasets annotated for targeted identities, we find that classifiers trained on hate speech targeting specific identity groups struggle to generalize to other targeted identities. This provides empirical evidence for differences in hate speech by target identity; we then investigate which patterns structure this variation. We find that the targeted demographic category (e.g. gender/sexuality or race/ethnicity) appears to have a greater effect on the language of hate speech than does the relative social power of the targeted identity group. We also find that words associated with hate speech targeting specific identities often relate to stereotypes, histories of oppression, current social movements, and other social contexts specific to identities. These experiments suggest the importance of considering targeted identity, as well as the social contexts associated with these identities, in automated hate speech classification.
null
null
10.18653/v1/2022.conll-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,394
inproceedings
yang-etal-2022-continual
Continual Learning for Natural Language Generations with Transformer Calibration
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.4/
Yang, Peng and Li, Dingcheng and Li, Ping
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
40--49
Conventional natural language processing (NLP) generation models are trained offline with a given dataset for a particular task, a setting referred to as isolated learning. Research on sequence-to-sequence language generation instead aims to build continual learning models that constantly learn from sequentially encountered tasks. However, continual learning often suffers from catastrophic forgetting, a persistent challenge for lifelong learning. In this paper, we present a novel NLP transformer model that attempts to mitigate catastrophic forgetting in online continual learning from a new perspective, i.e., attention calibration. We model the attention in the transformer as a calibrated unit in a general formulation, where the attention calibration can help balance the stability and plasticity of continual learning algorithms by influencing both their forward inference path and backward optimization path. Our empirical experiments on paraphrase generation and dialog response generation demonstrate that this work outperforms state-of-the-art models by a considerable margin and effectively mitigates forgetting.
null
null
10.18653/v1/2022.conll-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,395
inproceedings
yu-halevy-2022-thats
That's so cute!: The {CARE} Dataset for Affective Response Detection
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.5/
Yu, Jane and Halevy, Alon
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
50--69
Social media plays an increasing role in our communication with friends and family, and in our consumption of entertainment and information. Hence, to design effective ranking functions for posts on social media, it would be useful to predict the affective responses of a post (e.g., whether it is likely to elicit feelings of entertainment, inspiration, or anger). Similar to work on emotion detection (which focuses on the affect of the publisher of the post), the traditional approach to recognizing affective response would involve an expensive investment in human annotation of training data. We create and publicly release CARE DB, a dataset of 230k social media post annotations according to seven affective responses using the Common Affective Response Expression (CARE) method. The CARE method is a means of leveraging the signal that is present in comments that are posted in response to a post, providing high-precision evidence about the affective response to the post without human annotation. Unlike human annotation, the annotation process we describe here can be iterated upon to expand the coverage of the method, particularly for new affective responses. We present experiments that demonstrate that the CARE annotations compare favorably with crowdsourced annotations. Finally, we use CARE DB to train competitive BERT-based models for predicting affective response as well as emotion detection, demonstrating the utility of the dataset for related tasks.
null
null
10.18653/v1/2022.conll-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,396
inproceedings
wang-etal-2022-fine
A Fine-grained Interpretability Evaluation Benchmark for Neural {NLP}
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.6/
Wang, Lijie and Shen, Yaozong and Peng, Shuyuan and Zhang, Shuai and Xiao, Xinyan and Liu, Hao and Tang, Hongxuan and Chen, Ying and Wu, Hua and Wang, Haifeng
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
70--84
While there is increasing concern about the interpretability of neural models, the evaluation of interpretability remains an open problem, due to the lack of proper evaluation datasets and metrics. In this paper, we present a novel benchmark to evaluate the interpretability of both neural models and saliency methods. This benchmark covers three representative NLP tasks: sentiment analysis, textual similarity and reading comprehension, each provided with both English and Chinese annotated data. In order to precisely evaluate interpretability, we provide token-level rationales that are carefully annotated to be sufficient, compact and comprehensive. We also design a new metric, namely the consistency between the rationales before and after perturbations, to uniformly evaluate interpretability across different types of tasks. Based on this benchmark, we conduct experiments on three typical models with three saliency methods, and unveil their strengths and weaknesses in terms of interpretability. We will release this benchmark (\url{https://www.luge.ai/#/luge/task/taskDetail?taskId=15}) and hope it can facilitate research on building trustworthy systems. (A toy sketch of the rationale-consistency idea follows this record.)
null
null
10.18653/v1/2022.conll-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,397
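The consistency metric described in the record above compares rationales before and after a perturbation. The sketch below is a toy version of that idea (the paper's exact formulation may differ), using Jaccard overlap of the top-k salient tokens.

    def topk_tokens(saliency: dict, k: int) -> set:
        return set(sorted(saliency, key=saliency.get, reverse=True)[:k])

    def consistency(before: dict, after: dict, k: int = 3) -> float:
        a, b = topk_tokens(before, k), topk_tokens(after, k)
        return len(a & b) / len(a | b)  # 1.0 = identical rationales

    before = {"the": 0.05, "movie": 0.9, "was": 0.1, "great": 0.8, "plot": 0.6}
    after = {"the": 0.10, "movie": 0.7, "was": 0.2, "great": 0.9, "plot": 0.3}
    print(consistency(before, after))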
inproceedings
hopkins-2022-towards
Towards More Natural Artificial Languages
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.7/
Hopkins, Mark
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
85--94
A number of papers have recently argued in favor of using artificially generated languages to investigate the inductive biases of linguistic models, or to develop models for low-resource languages with underrepresented typologies. But the promise of artificial languages comes with a caveat: if these artificial languages are not sufficiently reflective of natural language, then using them as a proxy may lead to inaccurate conclusions. In this paper, we take a step towards increasing the realism of artificial languages by introducing a variant of indexed grammars that draws its weights from hierarchical Pitman-Yor processes. We show that this framework generates languages that emulate the statistics of natural language corpora better than the current approach of directly formulating weighted context-free grammars.
null
null
10.18653/v1/2022.conll-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,398
inproceedings
mueller-etal-2022-causal
Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.8/
Mueller, Aaron and Xia, Yu and Linzen, Tal
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
95--109
Structural probing work has found evidence for latent syntactic information in pre-trained language models. However, much of this analysis has focused on monolingual models, and analyses of multilingual models have employed correlational methods that are confounded by the choice of probing tasks. In this study, we causally probe multilingual language models (XGLM and multilingual BERT) as well as monolingual BERT-based models across various languages; we do this by performing counterfactual perturbations on neuron activations and observing the effect on models' subject-verb agreement probabilities. We observe where in the model and to what extent syntactic agreement is encoded in each language. We find significant neuron overlap across languages in autoregressive multilingual language models, but not masked language models. We also find two distinct layer-wise effect patterns and two distinct sets of neurons used for syntactic agreement, depending on whether the subject and verb are separated by other tokens. Finally, we find that behavioral analyses of language models are likely underestimating how sensitive masked language models are to syntactic information. (A toy sketch of a neuron-level intervention follows this record.)
null
null
10.18653/v1/2022.conll-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,399
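A counterfactual perturbation of the kind used in the record above can be implemented with a forward hook that zeroes one hidden unit; the layer index, neuron index and example sentence below are arbitrary illustrations, not values from the paper.

    import torch
    from transformers import BertForMaskedLM, BertTokenizerFast

    name = "bert-base-multilingual-cased"
    tok = BertTokenizerFast.from_pretrained(name)
    model = BertForMaskedLM.from_pretrained(name).eval()

    def agreement_prob(sent: str, verb: str) -> float:
        enc = tok(sent, return_tensors="pt")
        pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()
        with torch.no_grad():
            probs = model(**enc).logits[0, pos].softmax(-1)
        return probs[tok.convert_tokens_to_ids(verb)].item()

    sent = f"The keys to the cabinet {tok.mask_token} on the table."
    base = agreement_prob(sent, "are")

    def ablate(module, inputs, output):
        output[..., 42] = 0.0        # zero a single hidden unit
        return output

    handle = model.bert.encoder.layer[6].output.register_forward_hook(ablate)
    print(base, agreement_prob(sent, "are"))  # probability shift = causal effect
    handle.remove()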
inproceedings
bafna-etal-2022-combining
Combining Noisy Semantic Signals with Orthographic Cues: Cognate Induction for the {I}ndic Dialect Continuum
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.9/
Bafna, Niyati and van Genabith, Josef and Espa{\~n}a-Bonet, Cristina and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
110--131
We present a novel method for unsupervised cognate/borrowing identification from monolingual corpora designed for low and extremely low resource scenarios, based on combining noisy semantic signals from joint bilingual spaces with orthographic cues modelling sound change. We apply our method to the North Indian dialect continuum, containing several dozen dialects and languages spoken by more than 100 million people. Many of these languages are zero-resource and therefore natural language processing for them is non-existent. We first collect monolingual data for 26 Indic languages, 16 of which were previously zero-resource, and perform exploratory character, lexical and subword cross-lingual alignment experiments for the first time at this scale on this dialect continuum. We create bilingual evaluation lexicons against Hindi for 20 of the languages. We then apply our cognate identification method on the data, and show that our method outperforms both traditional orthography baselines as well as EM-style learnt edit distance matrices. To the best of our knowledge, this is the first work to combine traditional orthographic cues with noisy bilingual embeddings to tackle unsupervised cognate detection in a (truly) low-resource setup, showing that even noisy bilingual embeddings can act as good guides for this task. We release our multilingual dialect corpus, called HinDialect, as well as our scripts for evaluation data collection and cognate induction. (A minimal sketch of the signal combination follows this record.)
null
null
10.18653/v1/2022.conll-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,400
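One simple way to combine the two signal types described in the record above is a weighted mixture of orthographic similarity and embedding cosine; the mixing weight and the toy word pair and embeddings below are assumptions, not the paper's learned model.

    import difflib

    def orth_sim(a: str, b: str) -> float:
        # normalized character overlap as a cheap sound-change-tolerant cue
        return difflib.SequenceMatcher(None, a, b).ratio()

    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        return dot / ((sum(x * x for x in u) ** 0.5) *
                      (sum(y * y for y in v) ** 0.5))

    def cognate_score(a, b, emb_a, emb_b, alpha=0.5):
        return alpha * orth_sim(a, b) + (1 - alpha) * cos(emb_a, emb_b)

    # Hindi "paanii" vs. a dialectal "paani" ('water'), with toy embeddings
    print(cognate_score("paanii", "paani", [0.9, 0.1], [0.8, 0.2]))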
inproceedings
sahoo-etal-2022-detecting
Detecting Unintended Social Bias in Toxic Language Datasets
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.10/
Sahoo, Nihar and Gupta, Himanshu and Bhattacharyya, Pushpak
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
132--143
With the rise of online hate speech, automatic detection of Hate Speech, Offensive texts as a natural language processing task is getting popular. However, very little research has been done to detect unintended social bias from these toxic language datasets. This paper introduces a new dataset ToxicBias curated from the existing dataset of Kaggle competition named {\textquotedblleft}Jigsaw Unintended Bias in Toxicity Classification{\textquotedblright}. We aim to detect social biases, their categories, and targeted groups. The dataset contains instances annotated for five different bias categories, viz., gender, race/ethnicity, religion, political, and LGBTQ. We train transformer-based models using our curated datasets and report baseline performance for bias identification, target generation, and bias implications. Model biases and their mitigation are also discussed in detail. Our study motivates a systematic extraction of social bias data from toxic language datasets.
null
null
10.18653/v1/2022.conll-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,401
inproceedings
davis-2022-incremental
Incremental Processing of Principle {B}: Mismatches Between Neural Models and Humans
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.11/
Davis, Forrest
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
144--156
Despite neural language models qualitatively capturing many human linguistic behaviors, recent work has demonstrated that they underestimate the true processing costs of ungrammatical structures. We extend these more fine-grained comparisons between humans and models by investigating the interaction between Principle B and coreference processing. While humans use Principle B to block certain structural positions from affecting their incremental processing, we find that GPT-based language models are influenced by ungrammatical positions. We conclude by relating the mismatch between neural models and humans to properties of training data and suggest that certain aspects of human processing behavior do not directly follow from linguistic data.
null
null
10.18653/v1/2022.conll-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,402
inproceedings
indurkhya-2022-parsing
Parsing as Deduction Revisited: Using an Automatic Theorem Prover to Solve an {SMT} Model of a Minimalist Parser
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.12/
Indurkhya, Sagar
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
157--175
We introduce a constraint-based parser for Minimalist Grammars (MG), implemented as a working computer program, that falls within the long established {\textquotedblleft}Parsing as Deduction{\textquotedblright} framework. The parser takes as input an MG lexicon and a (partially specified) pairing of sound with meaning - i.e. a word sequence paired with a semantic representation - and, using an axiomatized logic, declaratively deduces syntactic derivations (i.e. parse trees) that comport with the specified interface conditions. The parser is built on the first axiomatization of MGs to use Satisfiability Modulo Theories (SMT), encoding in a constraint-based way the principles of minimalist syntax. The parser operates via a novel solution method: it assembles an SMT model of an MG derivation, translates the inputs into SMT formulae that constrain the model, and then solves the model using the Z3 SMT-solver, a high-performance automatic theorem prover; as the SMT-model has finite size (being bounded by the inputs), it is decidable and thus solvable in finite time. The output derivation is then recovered from the model solution. To demonstrate this, we run the parser on several representative inputs and examine how the output derivations differ when the inputs are partially vs. fully specified. We conclude by discussing the parser's extensibility and how a linguist can use it to automatically identify: (i) dependencies between input interface conditions and principles of syntax, and (ii) contradictions or redundancies between the model axioms encoding principles of syntax. (A toy Z3 sketch follows this record.)
null
null
10.18653/v1/2022.conll-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,403
inproceedings
merrill-etal-2022-entailment
Entailment Semantics Can Be Extracted from an Ideal Language Model
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.13/
Merrill, William and Warstadt, Alex and Linzen, Tal
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
176--193
Language models are often trained on text alone, without additional grounding. There is debate as to how much of natural language semantics can be inferred from such a procedure. We prove that entailment judgments between sentences can be extracted from an ideal language model that has perfectly learned its target distribution, assuming the training sentences are generated by Gricean agents, i.e., agents who follow fundamental principles of communication from the linguistic theory of pragmatics. We also show entailment judgments can be decoded from the predictions of a language model trained on such Gricean data. Our results reveal a pathway for understanding the semantic information encoded in unlabeled linguistic data and a potential framework for extracting semantics from language models.
null
null
10.18653/v1/2022.conll-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,404
inproceedings
patel-etal-2022-neurons
On Neurons Invariant to Sentence Structural Changes in Neural Machine Translation
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.14/
Patel, Gal and Choshen, Leshem and Abend, Omri
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
194--212
We present a methodology that explores how sentence structure is reflected in neural representations of machine translation systems. We demonstrate our model-agnostic approach with the Transformer English-German translation model. We analyze neuron-level correlation of activations between paraphrases while discussing the methodology challenges and the need for confound analysis to isolate the effects of shallow cues. We find that similarity between activation patterns can be mostly accounted for by similarity in word choice and sentence length. Following that, we manipulate neuron activations to control the syntactic form of the output. We show this intervention to be somewhat successful, indicating that deep models capture sentence-structure distinctions, despite finding no such indication at the neuron level. To conduct our experiments, we develop a semi-automatic method to generate meaning-preserving minimal pair paraphrases (active-passive voice and adverbial clause-noun phrase) and compile a corpus of such pairs.
null
null
10.18653/v1/2022.conll-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,405
inproceedings
maes-etal-2022-shared
Shared knowledge in natural conversations: can entropy metrics shed light on information transfers?
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.15/
Ma{\"es, Eliot and Blache, Philippe and Becerra, Leonor
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
213--227
The mechanisms underlying human communication have been under investigation for decades, but the answer to how understanding between locutors emerges remains incomplete. Interaction theories suggest the development of a structural alignment between the speakers, allowing for the construction of a shared knowledge base (common ground). In this paper, we propose to apply metrics derived from information theory to quantify the amount of information exchanged between participants and the dynamics of information exchange, providing an objective way to measure the instantiation of common ground. We focus on a corpus of free conversations augmented with prosodic segmentation and an expert annotation of thematic episodes. We show that during free conversations, the amount of information remains globally constant at the scale of the conversation, but varies depending on the thematic structuring, underlining the role of the speaker introducing the theme. We propose an original methodology applied to uncontrolled material.
null
null
10.18653/v1/2022.conll-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,406
inproceedings
sanchez-bayona-agerri-2022-leveraging
Leveraging a New {S}panish Corpus for Multilingual and Cross-lingual Metaphor Detection
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.16/
Sanchez-Bayona, Elisa and Agerri, Rodrigo
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
228--240
The lack of wide-coverage datasets annotated with everyday metaphorical expressions for languages other than English is striking. This means that most research on supervised metaphor detection has been published only for that language. In order to address this issue, this work presents the first corpus annotated with naturally occurring metaphors in Spanish large enough to develop systems to perform metaphor detection. The presented dataset, CoMeta, includes texts from various domains, namely, news, political discourse, Wikipedia and reviews. In order to label CoMeta, we apply the MIPVU method, the guidelines most commonly used to systematically annotate metaphor on real data. We use our newly created dataset to provide competitive baselines by fine-tuning several multilingual and monolingual state-of-the-art large language models. Furthermore, by leveraging the existing VUAM English data in addition to CoMeta, we present, to the best of our knowledge, the first cross-lingual experiments on supervised metaphor detection. Finally, we perform a detailed error analysis that explores the seemingly high transfer of everyday metaphor across these two languages and datasets.
null
null
10.18653/v1/2022.conll-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,407
inproceedings
chamovitz-abend-2022-cognitive
Cognitive Simplification Operations Improve Text Simplification
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.17/
Chamovitz, Eytan and Abend, Omri
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
241--265
Text Simplification (TS) is the task of converting a text into a form that is easier to read while maintaining the meaning of the original text. A sub-task of TS is Cognitive Simplification (CS), converting text to a form that is readily understood by people with cognitive disabilities without rendering it childish or simplistic. This sub-task has yet to be explored with neural methods in NLP, and resources for it are scarcely available. In this paper, we present a method for incorporating knowledge from the cognitive accessibility domain into a TS model, by introducing an inductive bias regarding what simplification operations to use. We show that by adding this inductive bias to a TS-trained model, it is able to adapt better to CS without ever seeing CS data, and outperform a baseline model on a traditional TS benchmark. In addition, we provide a novel test dataset for CS, and analyze the differences between CS corpora and existing TS corpora, in terms of how simplification operations are applied.
null
null
10.18653/v1/2022.conll-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,408
inproceedings
samardzic-etal-2022-language
On Language Spaces, Scales and Cross-Lingual Transfer of {UD} Parsers
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.18/
Samard{\v{z}}i{\'c}, Tanja and Gutierrez-Vasques, Ximena and van der Goot, Rob and M{\"u}ller-Eberstein, Max and Pelloni, Olga and Plank, Barbara
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
266--281
Cross-lingual transfer of parsing models has been shown to work well for several closely-related languages, but predicting the success in other cases remains hard. Our study is a comprehensive analysis of the impact of linguistic distance on the transfer of UD parsers. As an alternative to syntactic typological distances extracted from URIEL, we propose three text-based feature spaces and show that they can be more precise predictors, especially on a more local scale, when only shorter distances are taken into account. Our analyses also reveal that the good coverage in typological databases is not among the factors that explain good transfer.
null
null
10.18653/v1/2022.conll-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,409
inproceedings
abdelsalam-etal-2022-visual
Visual Semantic Parsing: From Images to {A}bstract {M}eaning {R}epresentation
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.19/
Abdelsalam, Mohamed Ashraf and Shi, Zhan and Fancellu, Federico and Basioti, Kalliopi and Bhatt, Dhaivat and Pavlovic, Vladimir and Fazly, Afsaneh
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
282--300
The success of scene graphs for visual scene understanding has brought attention to the benefits of abstracting a visual input (e.g., image) into a structured representation, where entities (people and objects) are nodes connected by edges specifying their relations. Building these representations, however, requires expensive manual annotation in the form of images paired with their scene graphs or frames. These formalisms remain limited in the nature of entities and relations they can capture. In this paper, we propose to leverage a widely-used meaning representation in the field of natural language processing, the Abstract Meaning Representation (AMR), to address these shortcomings. Compared to scene graphs, which largely emphasize spatial relationships, our visual AMR graphs are more linguistically informed, with a focus on higher-level semantic concepts extrapolated from visual input. Moreover, they allow us to generate meta-AMR graphs to unify information contained in multiple image descriptions under one representation. Through extensive experimentation and analysis, we demonstrate that we can re-purpose an existing text-to-AMR parser to parse images into AMRs. Our findings point to important future research directions for improved scene understanding.
null
null
10.18653/v1/2022.conll-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,410
inproceedings
arehalli-etal-2022-syntactic
Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.20/
Arehalli, Suhas and Dillon, Brian and Linzen, Tal
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
301--313
Humans exhibit garden path effects: When reading sentences that are temporarily structurally ambiguous, they slow down when the structure is disambiguated in favor of the less preferred alternative. Surprisal theory (Hale, 2001; Levy, 2008), a prominent explanation of this finding, proposes that these slowdowns are due to the unpredictability of each of the words that occur in these sentences. Challenging this hypothesis, van Schijndel and Linzen (2021) find that estimates of the cost of word predictability derived from language models severely underestimate the magnitude of human garden path effects. In this work, we consider whether this underestimation is due to the fact that humans weight syntactic factors in their predictions more highly than language models do. We propose a method for estimating syntactic predictability from a language model, allowing us to weigh the cost of lexical and syntactic predictability independently. We find that treating syntactic predictability independently from lexical predictability indeed results in larger estimates of garden path effects. At the same time, even when syntactic predictability is independently weighted, surprisal still greatly underestimates the magnitude of human garden path effects. Our results support the hypothesis that predictability is not the only factor responsible for the processing cost associated with garden path sentences.
null
null
10.18653/v1/2022.conll-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,411
inproceedings
xu-etal-2022-openstance
{O}pen{S}tance: Real-world Zero-shot Stance Detection
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.21/
Xu, Hanzi and Vucetic, Slobodan and Yin, Wenpeng
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
314--324
Prior studies of zero-shot stance detection identify the attitude of texts towards unseen topics occurring in the same document corpus. Such a task formulation has three limitations: (i) single domain/dataset: a system is optimized on a particular dataset from a single domain, so the resulting system cannot work well on other datasets; (ii) the model is evaluated on only a limited number of unseen topics; (iii) it is assumed that some of the topics have rich annotations, which might be impossible in real-world applications. These drawbacks lead to an impractical stance detection system that fails to generalize to open domains and open-form topics. This work defines OpenStance: open-domain zero-shot stance detection, aiming to handle stance detection in an open world with neither domain constraints nor topic-specific annotations. The key challenge of OpenStance lies in open-domain generalization: learning a system with fully unspecific supervision but capable of generalizing to any dataset. To solve OpenStance, we propose to combine indirect supervision, from textual entailment datasets, and weak supervision, from data generated automatically by pre-trained Language Models. Our single system, without any topic-specific supervision, outperforms the supervised method on three popular datasets. To our knowledge, this is the first work that studies stance detection under the open-domain zero-shot setting. All data and code will be publicly released.
null
null
10.18653/v1/2022.conll-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,412
inproceedings
ceron-etal-2022-optimizing
Optimizing text representations to capture (dis)similarity between political parties
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.22/
Ceron, Tanise and Blokker, Nico and Pad{\'o}, Sebastian
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
325--338
Even though fine-tuned neural language models have been pivotal in enabling {\textquotedblleft}deep{\textquotedblright} automatic text analysis, optimizing text representations for specific applications remains a crucial bottleneck. In this study, we look at this problem in the context of a task from computational social science, namely modeling pairwise similarities between political parties. Our research question is what level of structural information is necessary to create robust text representations, contrasting a strongly informed approach (which uses both claim span and claim category annotations) with approaches that forgo one or both types of annotation with document structure-based heuristics. Evaluating our models on the manifestos of German parties for the 2021 federal election, we find that heuristics that maximize within-party over between-party similarity, along with a normalization step, lead to reliable party similarity prediction, without the need for manual annotation.
null
null
10.18653/v1/2022.conll-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,413
inproceedings
patil-lago-2022-computational
Computational cognitive modeling of predictive sentence processing in a second language
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.23/
Patil, Umesh and Lago, Sol
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
339--349
We propose an ACT-R cue-based retrieval model of the real-time gender predictions displayed by second language (L2) learners. The model extends a previous model of native (L1) speakers according to two central accounts in L2 sentence processing: (i) the Interference Hypothesis, which proposes that retrieval interference is higher in L2 than L1 speakers; (ii) the Lexical Bottleneck Hypothesis, which proposes that problems with gender agreement are due to weak gender representations. We tested the predictions of these accounts using data from two visual world experiments, which found that the gender predictions elicited by German possessive pronouns were delayed and smaller in size in L2 than L1 speakers. The experiments also found a {\textquotedblleft}match effect{\textquotedblright}, such that when the antecedent and possessee of the pronoun had the same gender, predictions were earlier than when the two genders differed. This match effect was smaller in L2 than L1 speakers. The model implementing the Lexical Bottleneck Hypothesis captured the effects of smaller predictions, smaller match effect and delayed predictions in one of the two conditions. By contrast, the model implementing the Interference Hypothesis captured the smaller prediction effect but it showed an earlier prediction effect and an increased match effect in L2 than L1 speakers. These results provide evidence for the Lexical Bottleneck Hypothesis, and they demonstrate a method for extending computational models of L1 to L2 processing.
null
null
10.18653/v1/2022.conll-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,414
inproceedings
nagumothu-etal-2022-pie
{PIE}-{QG}: Paraphrased Information Extraction for Unsupervised Question Generation from Small Corpora
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.24/
Nagumothu, Dinesh and Ofoghi, Bahadorreza and Huang, Guangyan and Eklund, Peter
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
350--359
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of {\ensuremath{<}}subject, predicate, object{\ensuremath{>}} are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
null
null
10.18653/v1/2022.conll-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,415
inproceedings
davis-etal-2022-probing
Probing for targeted syntactic knowledge through grammatical error detection
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.25/
Davis, Christopher and Bryant, Christopher and Caines, Andrew and Rei, Marek and Buttery, Paula
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
360--373
Targeted studies testing knowledge of subject-verb agreement (SVA) indicate that pre-trained language models encode syntactic information. We assert that if models robustly encode subject-verb agreement, they should be able to identify when agreement is correct and when it is incorrect. To that end, we propose grammatical error detection as a diagnostic probe to evaluate token-level contextual representations for their knowledge of SVA. We evaluate contextual representations at each layer from five pre-trained English language models: BERT, XLNet, GPT-2, RoBERTa and ELECTRA. We leverage public annotated training data from both English second language learners and Wikipedia edits, and report results on manually crafted stimuli for subject-verb agreement. We find that masked language models linearly encode information relevant to the detection of SVA errors, while the autoregressive models perform on par with our baseline. However, we also observe a divergence in performance when probes are trained on different training sets, and when they are evaluated on different syntactic constructions, suggesting the information pertaining to SVA error detection is not robustly encoded.
null
null
10.18653/v1/2022.conll-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,416
inproceedings
ocampo-diaz-ouyang-2022-alignment
An Alignment-based Approach to Text Segmentation Similarity Scoring
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.26/
Ocampo Diaz, Gerardo and Ouyang, Jessica
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
374--383
Text segmentation is a natural language processing task with popular applications, such as topic segmentation, element discourse extraction, and sentence tokenization. Much work has been done to develop accurate segmentation similarity metrics, but even the most advanced metrics used today, B and WindowDiff, exhibit incorrect behavior due to their evaluation of boundaries in isolation. In this paper, we present a new segment-alignment based approach to segmentation similarity scoring and a new similarity metric A. We show that A does not exhibit the erratic behavior of B and WindowDiff, quantify the likelihood of B and WindowDiff misbehaving through simulation, and discuss the versatility of alignment-based approaches for segmentation similarity scoring. We make our implementation of A publicly available and encourage the community to explore more sophisticated approaches to text segmentation similarity scoring.
null
null
10.18653/v1/2022.conll-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,417
inproceedings
choshen-abend-2022-enhancing
Enhancing the Transformer Decoder with Transition-based Syntax
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.27/
Choshen, Leshem and Abend, Omri
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
384--404
Notwithstanding recent advances, syntactic generalization remains a challenge for text decoders. While some studies showed gains from incorporating source-side symbolic syntactic and semantic structure into text generation Transformers, very little work addressed the decoding of such structure. We propose a general approach for tree decoding using a transition-based approach. Examining the challenging test case of incorporating Universal Dependencies syntax into machine translation, we present substantial improvements on test sets that focus on syntactic generalization, while presenting improved or comparable performance on standard MT benchmarks. Further qualitative analysis addresses cases where syntactic generalization in the vanilla Transformer decoder is inadequate and demonstrates the advantages afforded by integrating syntactic information.
null
null
10.18653/v1/2022.conll-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,418
inproceedings
armeni-etal-2022-characterizing
Characterizing Verbatim Short-Term Memory in Neural Language Models
Fokkens, Antske and Srikumar, Vivek
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.conll-1.28/
Armeni, Kristijan and Honey, Christopher and Linzen, Tal
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
405--424
When a language model is trained to predict natural language sequences, its prediction at each moment depends on a representation of prior context. What kind of information about the prior context can language models retrieve? We tested whether language models could retrieve the exact words that occurred previously in a text. In our paradigm, language models (transformers and an LSTM) processed English text in which a list of nouns occurred twice. We operationalized retrieval as the reduction in surprisal from the first to the second list. We found that the transformers retrieved both the identity and ordering of nouns from the first list. Further, the transformers' retrieval was markedly enhanced when they were trained on a larger corpus and with greater model depth. Lastly, their ability to index prior tokens was dependent on learned attention patterns. In contrast, the LSTM exhibited less precise retrieval, which was limited to list-initial tokens and to short intervening texts. The LSTM's retrieval was not sensitive to the order of nouns and it improved when the list was semantically coherent. We conclude that transformers implemented something akin to a working memory system that could flexibly retrieve individual token representations across arbitrary delays; conversely, the LSTM maintained a coarser and more rapidly-decaying semantic gist of prior tokens, weighted toward the earliest items.
null
null
10.18653/v1/2022.conll-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,419
inproceedings
ubaleht-raudalainen-2022-development
Development of the {S}iberian {I}ngrian {F}innish Speech Corpus
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.1/
Ubaleht, Ivan and Raudalainen, Taisto-Kalevi
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
1--4
In this paper we present a speech corpus for the Siberian Ingrian Finnish language. The speech corpus includes audio data, annotations, software tools for data processing, two databases, and a web application. We have published part of the audio data and annotations. A software tool for parsing annotation files and populating a relational database has been developed and published under a free license, and a web application has been developed and made available. At this moment, about 300 words and 200 phrases can be displayed using this web application.
null
null
10.18653/v1/2022.computel-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,421
inproceedings
dyer-2022-new
New syntactic insights for automated {W}olof {U}niversal {D}ependency parsing
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.2/
Dyer, Bill
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
5--12
A focus on language-specific properties, with insights from formal minimalist syntax, can improve universal dependency (UD) parsing. Such improvements are especially important for low-resource African languages, like Wolof, which have fewer UD treebanks, smaller amounts of annotation, and fewer contributing annotators. For two different UD parser pipelines, one parser model was trained on the original Wolof treebank, and one was trained on an edited treebank. For each parser pipeline, accuracy was higher for the edited treebank than for the original, for both dependency relations and dependency labels. Accuracy for universal dependency relations improved by as much as 2.90{\%}, while accuracy for universal dependency labels increased by as much as 3.38{\%}. An annotation scheme that better fits a language's distinct syntax results in better parsing accuracy.
null
null
10.18653/v1/2022.computel-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,422
inproceedings
siminyu-etal-2022-corpus
Corpus Development of Kiswahili Speech Recognition Test and Evaluation sets, Preemptively Mitigating Demographic Bias Through Collaboration with Linguists
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.3/
Siminyu, Kathleen and Mohamed Amran, Kibibi and Ndegwa Karatu, Abdulrahman and Resani, Mnata and Makobo Junior, Mwimbi and Ryakitimbo, Rebecca and Mwasaru, Britone
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
13--19
Language technologies, particularly speech technologies, are becoming more pervasive for access to digital platforms and resources. This brings to the forefront concerns of their inclusivity, first in terms of language diversity. Additionally, research shows speech recognition to be more accurate for men than for women and more accurate for individuals younger than 30 years of age than those older. In the Global South, where languages are low resource, these same issues should be taken into consideration in data collection efforts so as not to replicate these mistakes. It is also important to note that in varying contexts within the Global South, this work presents additional nuance and potential for bias based on accents, related dialects and variants of a language. This paper documents i) the design and execution of a Linguists Engagement for purposes of building an inclusive Kiswahili Speech Recognition dataset, representative of the diversity among speakers of the language, ii) the unexpected yet key learnings in terms of socio-linguistics, which demonstrate the importance of multi-disciplinarity in teams developing datasets and NLP technologies, and iii) the creation of a test dataset intended to be used for evaluating the performance of Speech Recognition models on demographic groups that are likely to be underrepresented.
null
null
10.18653/v1/2022.computel-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,423
inproceedings
zariquiey-etal-2022-cld2
{CLD}{\texttwosuperior} Language Documentation Meets Natural Language Processing for Revitalising Endangered Languages
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.4/
Zariquiey, Roberto and Oncevay, Arturo and Vera, Javier
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
20--30
Language revitalisation should not be understood as a direct outcome of language documentation, which is mainly focused on the creation of language repositories. Natural language processing (NLP) offers the potential to complement and exploit these repositories through the development of language technologies that may contribute to improving the vitality status of endangered languages. In this paper, we discuss the current state of the interaction between language documentation and computational linguistics, present a diagnosis of how the outputs of recent documentation projects for endangered languages are underutilised by the NLP community, and discuss how the situation could change from both the documentary linguistics and NLP perspectives. All this is introduced as a bridging paradigm dubbed Computational Language Documentation and Development (CLD{\texttwosuperior}). CLD{\texttwosuperior} calls for (1) the inclusion of NLP-friendly annotated data as a deliverable of future language documentation projects; and (2) the exploitation of language documentation databases by the NLP community to promote the computerization of endangered languages, as one way to contribute to their revitalization.
null
null
10.18653/v1/2022.computel-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,424
inproceedings
samir-silfverberg-2022-one
One Wug, Two Wug+s Transformer Inflection Models Hallucinate Affixes
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.5/
Samir, Farhan and Silfverberg, Miikka
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
31--40
Data augmentation strategies are increasingly important in NLP pipelines for low-resourced and endangered languages, and in neural morphological inflection, augmentation by so called data hallucination is a popular technique. This paper presents a detailed analysis of inflection models trained with and without data hallucination for the low-resourced Canadian Indigenous language Gitksan. Our analysis reveals evidence for a concatenative inductive bias in augmented models{---}in contrast to models trained without hallucination, they strongly prefer affixing inflection patterns over suppletive ones. We find that preference for affixation in general improves inflection performance in {\textquotedblleft}wug test{\textquotedblright} like settings, where the model is asked to inflect lexemes missing from the training set. However, data hallucination dramatically reduces prediction accuracy for reduplicative forms due to a misanalysis of reduplication as affixation. While the overall impact of data hallucination for unseen lexemes remains positive, our findings call for greater qualitative analysis and more varied evaluation conditions in testing automatic inflection systems. Our results indicate that further innovations in data augmentation for computational morphology are desirable.
null
null
10.18653/v1/2022.computel-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,425
inproceedings
san-etal-2022-automated
Automated speech tools for helping communities process restricted-access corpora for language revival efforts
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.6/
San, Nay and Bartelds, Martijn and Ogunremi, Tolulope and Mount, Alison and Thompson, Ruben and Higgins, Michael and Barker, Roy and Simpson, Jane and Jurafsky, Dan
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
41--51
Many archival recordings of speech from endangered languages remain unannotated and inaccessible to community members and language learning programs. One bottleneck is the time-intensive nature of annotation. An even narrower bottleneck occurs for recordings with access constraints, such as language that must be vetted or filtered by authorised community members before annotation can begin. We propose a privacy-preserving workflow to widen both bottlenecks for recordings where speech in the endangered language is intermixed with a more widely-used language such as English for meta-linguistic commentary and questions (e.g., What is the word for {\textquoteleft}tree'?). We integrate voice activity detection (VAD), spoken language identification (SLI), and automatic speech recognition (ASR) to transcribe the metalinguistic content, which an authorised person can quickly scan to triage recordings that can be annotated by people with lower levels of access. We report work-in-progress on processing 136 hours of archival audio containing a mix of English and Muruwari. Our collaborative work with the Muruwari custodian of the archival materials shows that this workflow reduces metalanguage transcription time by 20{\%}, even given only minimal amounts of annotated training data: 10 utterances per language for SLI, and for ASR at most 39 minutes, and possibly as little as 39 seconds.
null
null
10.18653/v1/2022.computel-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,426
inproceedings
pine-etal-2022-gi22pi
{G}$_i$2{P}$_i$ Rule-based, index-preserving grapheme-to-phoneme transformations
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.7/
Pine, Aidan and William Littell, Patrick and Joanis, Eric and Huggins-Daines, David and Cox, Christopher and Davis, Fineen and Antonio Santos, Eddie and Srikanth, Shankhalika and Torkornoo, Delasie and Yu, Sabrina
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
52--60
This paper describes the motivation and implementation details for a rule-based, index-preserving grapheme-to-phoneme engine {\textquoteleft}G$_i$2P$_i$' implemented in pure Python and released under the open source MIT license. The engine and interface have been designed to prioritize the developer experience of potential contributors without requiring a high level of programming knowledge. {\textquoteleft}G$_i$2P$_i$' already provides mappings for 30 (mostly Indigenous) languages, and the package is accompanied by a web-based interactive development environment, a RESTful API, and extensive documentation to encourage the addition of more mappings in the future. We also present three downstream applications of {\textquoteleft}G$_i$2P$_i$' and show results of a preliminary evaluation.
null
null
10.18653/v1/2022.computel-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,427
inproceedings
zhang-etal-2022-shallow
Shallow Parsing for {N}epal {B}hasa Complement Clauses
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.8/
Zhang, Borui and Kazemzadeh, Abe and Reese, Brian
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
61--67
Accelerating the process of data collection, annotation, and analysis is an urgent need for linguistic fieldwork and documentation of endangered languages (Bird, 2009). Our experiments describe how we maximize the quality of the Nepal Bhasa syntactic complement structure chunking model. Native speaker language consultants were trained to annotate a minimally selected raw data set (Su{\'a}rez et al., 2019). The embedded clauses, matrix verbs, and embedded verbs are annotated. We apply both statistical training algorithms and transfer learning in our training, including Naive Bayes, MaxEnt, and fine-tuning the pre-trained mBERT model (Devlin et al., 2018). We show that with limited annotated data, the model is already sufficient for the task. The modeling resources we used are largely available for many other endangered languages. The practice is easy to duplicate for training a shallow parser for other endangered languages in general.
null
null
10.18653/v1/2022.computel-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,428
inproceedings
bedi-etal-2022-using
Using {LARA} to create image-based and phonetically annotated multimodal texts for endangered languages
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.9/
B{\'e}di, Branislav and Beedar, Hakeem and Chiera, Belinda and Ivanova, Nedelina and Maizonniaux, Christ{\`e}le and N{\'i} Chiar{\'a}in, Neasa and Rayner, Manny and Sloan, John and Zuckermann, Ghil{'}ad
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
68--77
We describe recent extensions to the open source Learning And Reading Assistant (LARA) supporting image-based and phonetically annotated texts. We motivate the utility of these extensions both in general and specifically in relation to endangered and archaic languages, and illustrate with examples from the revived Australian language Barngarla, Icelandic Sign Language, Irish Gaelic, Old Norse manuscripts and Egyptian hieroglyphics.
null
null
10.18653/v1/2022.computel-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,429
inproceedings
stefanovitch-2022-recovering
Recovering Text from Endangered Languages Corrupted {PDF} documents
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.10/
Stefanovitch, Nicolas
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
78--82
In this paper we present an approach to efficiently recover texts from corrupted documents of endangered languages. Textual resources for such languages are scarce, and sometimes the few available resources are corrupted PDF documents. Endangered languages are not supported by standard tools and present the additional difficulty of not possessing any corpus over which to train language models to assist with the recovery. The approach presented is able to fully recover born-digital PDF documents with minimal effort, thereby helping the preservation effort of endangered languages by extending the range of documents usable for corpus building.
null
null
10.18653/v1/2022.computel-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,430
inproceedings
bettinson-bird-2022-learning
Learning Through Transcription
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.11/
Bettinson, Mat and Bird, Steven
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
83--92
Transcribing speech for primarily oral, local languages is often a joint effort involving speakers and outsiders. It is commonly motivated by externally-defined scientific goals, alongside local motivations such as language acquisition and access to heritage materials. We explore the task of {\textquoteleft}learning through transcription' by designing a system for collaborative speech annotation. We have developed a prototype to support local and remote learner-speaker interactions in remote Aboriginal communities in northern Australia. We show that situated systems design for inclusive non-expert practice is a promising new direction for working with speakers of local languages.
null
null
10.18653/v1/2022.computel-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,431
inproceedings
finn-etal-2022-developing
Developing a Part-Of-Speech tagger for te reo {M}{\={a}}ori
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.12/
Finn, Aoife and Jones, Peter-Lucas and Mahelona, Keoni and Duncan, Suzanne and Leoni, Gianna
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
93--98
This paper discusses the development of a Part-of-Speech tagger for te reo M{\={a}}ori, which is the Indigenous language of Aotearoa, also known as New Zealand, see Morrison. Henceforth, Part-of-Speech will be referred to as POS throughout this paper and te reo M{\={a}}ori will be referred to as M{\={a}}ori, while Universal Dependencies will be referred to as UD. Prior to the development of this tagger, there was no POS tagger for M{\={a}}ori from Aotearoa. POS taggers tag words according to their syntactic or grammatical category. However, many traditional syntactic categories, and by consequence POS labels, do not {\textquotedblleft}work for{\textquotedblright} M{\={a}}ori. By this we mean that, for some of the traditional categories, the definition of, or guidelines for, an existing category is not suitable for M{\={a}}ori; they do not have an existing category for certain word classes of M{\={a}}ori; and they do not reflect a M{\={a}}ori worldview of the M{\={a}}ori language. We wanted a tagset that is usable with industry-wide tools, but we also needed a tagset that would meet the needs of M{\={a}}ori. Therefore, we based our tagset and guidelines on the UD tagset and tagging conventions; however, the categorization of words has been significantly altered to be appropriate for M{\={a}}ori. This is because, at the time of development of our POS tagger, the UD conventions had still not been used to tag a Polynesian language such as M{\={a}}ori, nor did they provide any guidelines about how to tag them. To that end, we worked with highly-proficient, specially-selected M{\={a}}ori speakers and linguists who are specialists in M{\={a}}ori. This has ensured that our POS labels and guideline conventions faithfully reflect a M{\={a}}ori speaker's conceptualization of their language.
null
null
10.18653/v1/2022.computel-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,432
inproceedings
cadotte-etal-2022-challenges
Challenges and Perspectives for Innu-Aimun within Indigenous Language Technologies
Moeller, Sarah and Anastasopoulos, Antonios and Arppe, Antti and Chaudhary, Aditi and Harrigan, Atticus and Holden, Josh and Lachler, Jordan and Palmer, Alexis and Rijhwani, Shruti and Schwartz, Lane
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.computel-1.13/
Cadotte, Antoine and Le Ngoc, Tan and Boivin, Mathieu and Sadat, Fatiha
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
99--108
Innu-Aimun is an Algonquian language spoken in Eastern Canada. It is the language of the Innu, an Indigenous people that now lives for the most part in a dozen communities across Quebec and Labrador. Although it is alive, Innu-Aimun sees important preservation and revitalization challenges and issues. The state of its technology is still nascent, with very few existing applications. This paper proposes a first survey of the available linguistic resources and existing technology for Innu-Aimun. Considering the existing linguistic and textual resources, we argue that developing language technology is feasible and propose first steps towards NLP applications like machine translation. The goal of developing such technologies is first and foremost to help efforts in improving language transmission and cultural safety and preservation for Innu-Aimun speakers, as those are considered urgent and vital issues. Finally, we discuss the importance of close collaboration and consultation with the Innu community in order to ensure that language technologies are developed respectfully and in accordance with that goal.
null
null
10.18653/v1/2022.computel-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,433