entry_type: stringclasses (4 values)
citation_key: stringlengths (10–110)
title: stringlengths (6–276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 00:00:00 – 2022-01-01 00:00:00)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34–62)
author: stringlengths (6–2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1–12)
abstract: stringlengths (302–2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20–39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k–106k)
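To make the column schema above easier to work with, here is a minimal sketch of how such a dataset could be loaded and filtered with the Hugging Face `datasets` library. The repository path is hypothetical, and the assumption that `year` is stored as a plain string may not match the actual dataset; adjust both to the real source.

```python
# Minimal sketch (assumptions flagged): load a BibTeX-derived dataset with the
# columns listed above and keep only ACL 2022 Findings papers that have an abstract.
from datasets import load_dataset

# "username/acl-anthology-bibtex" is a hypothetical repository path.
ds = load_dataset("username/acl-anthology-bibtex", split="train")

findings_2022 = ds.filter(
    lambda row: row["entry_type"] == "inproceedings"
    and str(row["year"]).startswith("2022")
    and row["booktitle"] == "Findings of the Association for Computational Linguistics: ACL 2022"
    and row["abstract"] is not None
)

# Print a few citation keys and titles to sanity-check the filter.
for row in findings_2022.select(range(3)):
    print(row["citation_key"], "|", row["title"])
```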
inproceedings
kodali-etal-2022-symcom
{S}y{MC}o{M} - Syntactic Measure of Code Mixing A Study Of {E}nglish-{H}indi Code-Mixing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.40/
Kodali, Prashant and Goel, Anmol and Choudhury, Monojit and Shrivastava, Manish and Kumaraguru, Ponnurangam
Findings of the Association for Computational Linguistics: ACL 2022
472--480
Code mixing is the linguistic phenomenon where bilingual speakers tend to switch between two or more languages in conversations. Recent work on code-mixing in computational settings has leveraged social media code-mixed texts to train NLP models. For capturing the variety of code mixing within and across corpora, Language ID (LID) tag-based measures (CMI) have been proposed. The syntactic variety/patterns of code-mixing and their relationship to computational models' performance remain under-explored. In this work, we investigate a collection of English(en)-Hindi(hi) code-mixed datasets from a syntactic lens to propose $SyMCoM$, an indicator of syntactic variety in code-mixed text, with intuitive theoretical bounds. We train a SoTA en-hi PoS tagger, with an accuracy of 93.4{\%}, to reliably compute PoS tags on a corpus, and demonstrate the utility of $SyMCoM$ by applying it to various syntactic categories across a collection of datasets, and compare datasets using the measure.
null
null
10.18653/v1/2022.findings-acl.40
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,965
inproceedings
nakamura-etal-2022-hybridialogue
{H}ybri{D}ialogue: An Information-Seeking Dialogue Dataset Grounded on Tabular and Textual Data
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.41/
Nakamura, Kai and Levy, Sharon and Tuan, Yi-Lin and Chen, Wenhu and Wang, William Yang
Findings of the Association for Computational Linguistics: ACL 2022
481--492
A pressing challenge in current dialogue systems is to successfully converse with users on topics with information distributed across different modalities. Previous work in multiturn dialogue systems has primarily focused on either text or table information. In more realistic scenarios, having a joint understanding of both is critical as knowledge is typically distributed over both unstructured and structured forms. We present a new dialogue dataset, HybriDialogue, which consists of crowdsourced natural conversations grounded on both Wikipedia text and tables. The conversations are created through the decomposition of complex multihop questions into simple, realistic multiturn dialogue interactions. We propose retrieval, system state tracking, and dialogue response generation tasks for our dataset and conduct baseline experiments for each. Our results show that there is still ample opportunity for improvement, demonstrating the importance of building stronger dialogue systems that can reason over the complex setting of information-seeking dialogue grounded on tables and text.
null
null
10.18653/v1/2022.findings-acl.41
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,966
inproceedings
bahrainian-etal-2022-newts
{NEWTS}: A Corpus for News Topic-Focused Summarization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.42/
Bahrainian, Seyed Ali and Feucht, Sheridan and Eickhoff, Carsten
Findings of the Association for Computational Linguistics: ACL 2022
493--503
Text summarization models are approaching human levels of fidelity. Existing benchmarking corpora provide concordant pairs of full and abridged versions of Web, news or professional content. To date, all summarization datasets operate under a one-size-fits-all paradigm that may not reflect the full range of organic summarization needs. Several recently proposed models (e.g., plug and play language models) have the capacity to condition the generated summaries on a desired range of themes. These capacities remain largely unused and unevaluated as there is no dedicated dataset that would support the task of topic-focused summarization. This paper introduces the first topical summarization corpus NEWTS, based on the well-known CNN/Dailymail dataset, and annotated via online crowd-sourcing. Each source article is paired with two reference summaries, each focusing on a different theme of the source document. We evaluate a representative range of existing techniques and analyze the effectiveness of different prompting methods.
null
null
10.18653/v1/2022.findings-acl.42
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,967
inproceedings
alkiek-etal-2022-classification
Classification without (Proper) Representation: Political Heterogeneity in Social Media and Its Implications for Classification and Behavioral Analysis
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.43/
Alkiek, Kenan and Zhang, Bohan and Jurgens, David
Findings of the Association for Computational Linguistics: ACL 2022
504--522
Reddit is home to a broad spectrum of political activity, and users signal their political affiliations in multiple ways{---}from self-declarations to community participation. Frequently, computational studies have treated political users as a single bloc, both in developing models to infer political leaning and in studying political behavior. Here, we test this assumption of political users and show that commonly-used political-inference models do not generalize, indicating heterogeneous types of political users. The models remain imprecise at best for most users, regardless of which sources of data or methods are used. Across a 14-year longitudinal analysis, we demonstrate that the choice in definition of a political user has significant implications for behavioral analysis. Controlling for multiple factors, political users are more toxic on the platform and inter-party interactions are even more toxic{---}but not all political users behave this way. Last, we identify a subset of political users who repeatedly flip affiliations, showing that these users are the most controversial of all, acting as provocateurs by more frequently bringing up politics, and are more likely to be banned, suspended, or deleted.
null
null
10.18653/v1/2022.findings-acl.43
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,968
inproceedings
lignos-etal-2022-toward
Toward More Meaningful Resources for Lower-resourced Languages
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.44/
Lignos, Constantine and Holley, Nolan and Palen-Michel, Chester and S{\"a}lev{\"a}, Jonne
Findings of the Association for Computational Linguistics: ACL 2022
523--532
In this position paper, we describe our perspective on how meaningful resources for lower-resourced languages should be developed in connection with the speakers of those languages. Before advancing that position, we first examine two massively multilingual resources used in language technology development, identifying shortcomings that limit their usefulness. We explore the contents of the names stored in Wikidata for a few lower-resourced languages and find that many of them are not in fact in the languages they claim to be, requiring non-trivial effort to correct. We discuss quality issues present in WikiAnn and evaluate whether it is a useful supplement to hand-annotated data. We then discuss the importance of creating annotations for lower-resourced languages in a thoughtful and ethical way that includes the language speakers as part of the development process. We conclude with recommended guidelines for resource development.
null
null
10.18653/v1/2022.findings-acl.44
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,969
inproceedings
kocyigit-etal-2022-better
Better Quality Estimation for Low Resource Corpus Mining
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.45/
Kocyigit, Muhammed and Lee, Jiho and Wijaya, Derry
Findings of the Association for Computational Linguistics: ACL 2022
533--543
Quality Estimation (QE) models have the potential to change how we evaluate and maybe even train machine translation models. However, these models still lack the robustness to achieve general adoption. We show that state-of-the-art QE models, when tested in a Parallel Corpus Mining (PCM) setting, perform unexpectedly badly due to a lack of robustness to out-of-domain examples. We propose a combination of multitask training, data augmentation and contrastive learning to achieve better and more robust QE performance. We show that our method improves QE performance significantly in the MLQE challenge and the robustness of QE models when tested in the Parallel Corpus Mining setup. We increase the accuracy in PCM by more than 0.80, making it on par with state-of-the-art PCM methods that use millions of sentence pairs to train their models. In comparison, we use a thousand times less data, 7K parallel sentences in total, and propose a novel low-resource PCM method.
null
null
10.18653/v1/2022.findings-acl.45
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,970
inproceedings
liu-etal-2022-end
End-to-End Segmentation-based News Summarization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.46/
Liu, Yang and Zhu, Chenguang and Zeng, Michael
Findings of the Association for Computational Linguistics: ACL 2022
544--554
In this paper, we bring a new way of digesting news content by introducing the task of segmenting a news article into multiple sections and generating the corresponding summary to each section. We make two contributions towards this new task. First, we create and make available a dataset, SegNews, consisting of 27k news articles with sections and aligned heading-style section summaries. Second, we propose a novel segmentation-based language generation model adapted from pre-trained language models that can jointly segment a document and produce the summary for each section. Experimental results on SegNews demonstrate that our model can outperform several state-of-the-art sequence-to-sequence generation models for this new task.
null
null
10.18653/v1/2022.findings-acl.46
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,971
inproceedings
meng-etal-2022-fast
Fast Nearest Neighbor Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.47/
Meng, Yuxian and Li, Xiaoya and Zheng, Xiayu and Wu, Fei and Sun, Xiaofei and Zhang, Tianwei and Li, Jiwei
Findings of the Association for Computational Linguistics: ACL 2022
555--565
Though nearest neighbor Machine Translation ($k$NN-MT) (CITATION) has proved to introduce significant performance boosts over standard neural MT systems, it is prohibitively slow since it uses the entire reference corpus as the datastore for the nearest neighbor search. This means each step for each beam in the beam search has to search over the entire reference corpus. $k$NN-MT is thus two orders of magnitude slower than vanilla MT models, making it hard to apply to real-world applications, especially online services. In this work, we propose Fast $k$NN-MT to address this issue. Fast $k$NN-MT constructs a significantly smaller datastore for the nearest neighbor search: for each word in a source sentence, Fast $k$NN-MT first selects its nearest token-level neighbors, which is limited to tokens that are the same as the query token. Then at each decoding step, in contrast to using the entire corpus as the datastore, the search space is limited to target tokens corresponding to the previously selected reference source tokens. This strategy avoids search through the whole datastore for nearest neighbors and drastically improves decoding efficiency. Without loss of performance, Fast $k$NN-MT is two orders of magnitude faster than $k$NN-MT, and is only two times slower than the standard NMT model. Fast $k$NN-MT enables the practical use of $k$NN-MT systems in real-world MT applications. The code is available at \url{https://github.com/ShannonAI/fast-knn-nmt}.
null
null
10.18653/v1/2022.findings-acl.47
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,972
inproceedings
subramani-etal-2022-extracting
Extracting Latent Steering Vectors from Pretrained Language Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.48/
Subramani, Nishant and Suresh, Nivedita and Peters, Matthew
Findings of the Association for Computational Linguistics: ACL 2022
566--581
Prior work on controllable text generation has focused on learning how to control language models through trainable decoding, smart-prompt design, or fine-tuning based on a desired objective. We hypothesize that the information needed to steer the model to generate a target sentence is already encoded within the model. Accordingly, we explore a different approach altogether: extracting latent vectors directly from pretrained language model decoders without fine-tuning. Experiments show that there exist steering vectors, which, when added to the hidden states of the language model, generate a target sentence nearly perfectly ({\ensuremath{>}} 99 BLEU) for English sentences from a variety of domains. We show that vector arithmetic can be used for unsupervised sentiment transfer on the Yelp sentiment benchmark, with performance comparable to models tailored to this task. We find that distances between steering vectors reflect sentence similarity when evaluated on a textual similarity benchmark (STS-B), outperforming pooled hidden states of models. Finally, we present an analysis of the intrinsic properties of the steering vectors. Taken together, our results suggest that frozen LMs can be effectively controlled through their latent steering space.
null
null
10.18653/v1/2022.findings-acl.48
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,973
inproceedings
vu-etal-2022-domain
Domain Generalisation of {NMT}: Fusing Adapters with Leave-One-Domain-Out Training
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.49/
Vu, Thuy-Trang and Khadivi, Shahram and Phung, Dinh and Haffari, Gholamreza
Findings of the Association for Computational Linguistics: ACL 2022
582--588
Generalising to unseen domains is under-explored and remains a challenge in neural machine translation. Inspired by recent research in parameter-efficient transfer learning from pretrained models, this paper proposes a fusion-based generalisation method that learns to combine domain-specific parameters. We propose a leave-one-domain-out training strategy to avoid information leaking to address the challenge of not knowing the test domain during training time. Empirical results on three language pairs show that our proposed fusion method outperforms other baselines up to +0.8 BLEU score on average.
null
null
10.18653/v1/2022.findings-acl.49
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,974
inproceedings
mishra-etal-2022-reframing
Reframing Instructional Prompts to {GPT}k's Language
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.50/
Mishra, Swaroop and Khashabi, Daniel and Baral, Chitta and Choi, Yejin and Hajishirzi, Hannaneh
Findings of the Association for Computational Linguistics: ACL 2022
589--612
What kinds of instructional prompts are easier to follow for Language Models (LMs)? We study this question by conducting an extensive empirical analysis that sheds light on important features of successful instructional prompts. Specifically, we study several classes of reframing techniques for manual reformulation of prompts into more effective ones. Some examples include decomposing a complex task instruction into multiple simpler tasks or itemizing instructions into sequential steps. Our experiments compare the zero-shot and few-shot performance of LMs prompted with reframed instructions on 12 NLP tasks across 6 categories. Compared with original instructions, our reframed instructions lead to significant improvements across LMs with different sizes. For example, the same reframed prompts boost few-shot performance of GPT3-series and GPT2-series by 12.5{\%} and 6.7{\%} respectively averaged over all tasks. Furthermore, reframed instructions reduce the number of examples required to prompt LMs in the few-shot setting. We hope these empirically-driven techniques will pave the way towards more effective future prompting algorithms.
null
null
10.18653/v1/2022.findings-acl.50
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,975
inproceedings
zhao-etal-2022-read
Read Top News First: A Document Reordering Approach for Multi-Document News Summarization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.51/
Zhao, Chao and Huang, Tenghao and Basu Roy Chowdhury, Somnath and Chandrasekaran, Muthu Kumar and McKeown, Kathleen and Chaturvedi, Snigdha
Findings of the Association for Computational Linguistics: ACL 2022
613--621
A common method for extractive multi-document news summarization is to re-formulate it as a single-document summarization problem by concatenating all documents as a single meta-document. However, this method neglects the relative importance of documents. We propose a simple approach to reorder the documents according to their relative importance before concatenating and summarizing them. The reordering makes the salient content easier to learn by the summarization model. Experiments show that our approach outperforms previous state-of-the-art methods with more complex architectures.
null
null
10.18653/v1/2022.findings-acl.51
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,976
inproceedings
soni-etal-2022-human
Human Language Modeling
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.52/
Soni, Nikita and Matero, Matthew and Balasubramanian, Niranjan and Schwartz, H. Andrew
Findings of the Association for Computational Linguistics: ACL 2022
622--636
Natural language is generated by people, yet traditional language modeling views words or documents as if generated independently. Here, we propose human language modeling (HuLM), a hierarchical extension to the language modeling problem whereby a human level exists to connect sequences of documents (e.g. social media messages) and capture the notion that human language is moderated by changing human states. We introduce HaRT, a large-scale transformer model for solving HuLM, pre-trained on approximately 100,000 social media users, and demonstrate its effectiveness in terms of both language modeling (perplexity) for social media and fine-tuning for 4 downstream tasks spanning document- and user-levels. Results on all tasks meet or surpass the current state-of-the-art.
null
null
10.18653/v1/2022.findings-acl.52
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,977
inproceedings
hou-etal-2022-inverse
Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.53/
Hou, Yutai and Chen, Cheng and Luo, Xianzhen and Li, Bohan and Che, Wanxiang
Findings of the Association for Computational Linguistics: ACL 2022
637--647
Prompting methods recently achieve impressive success in few-shot learning. These methods modify input samples with prompt sentence pieces, and decode label tokens to map samples to corresponding labels. However, such a paradigm is very inefficient for the task of slot tagging. Since slot tagging samples are multiple consecutive words in a sentence, the prompting methods have to enumerate all n-grams token spans to find all the possible slots, which greatly slows down the prediction. To tackle this, we introduce an inverse paradigm for prompting. Different from the classic prompts mapping tokens to labels, we reversely predict slot values given slot types. Such inverse prompting only requires a one-turn prediction for each slot type and greatly speeds up the prediction. Besides, we propose a novel Iterative Prediction Strategy, from which the model learns to refine predictions by considering the relations between different slot types. We find, somewhat surprisingly, the proposed method not only predicts faster but also significantly improves the effect (improve over 6.1 F1-scores on 10-shot setting) and achieves new state-of-the-art performance.
null
null
10.18653/v1/2022.findings-acl.53
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,978
inproceedings
zou-etal-2022-cross
Cross-Modal Cloze Task: A New Task to Brain-to-Word Decoding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.54/
Zou, Shuxian and Wang, Shaonan and Zhang, Jiajun and Zong, Chengqing
Findings of the Association for Computational Linguistics: ACL 2022
648--657
Decoding language from non-invasive brain activity has attracted increasing attention from both researchers in neuroscience and natural language processing. Due to the noisy nature of brain recordings, existing work has simplified brain-to-word decoding as a binary classification task which is to discriminate a brain signal between its corresponding word and a wrong one. This pairwise classification task, however, cannot promote the development of practical neural decoders for two reasons. First, it has to enumerate all pairwise combinations in the test set, so it is inefficient to predict a word in a large vocabulary. Second, a perfect pairwise decoder cannot guarantee the performance on direct classification. To overcome these and go a step further to a realistic neural decoder, we propose a novel Cross-Modal Cloze (CMC) task which is to predict the target word encoded in the neural image with a context as prompt. Furthermore, to address this task, we propose a general approach that leverages the pre-trained language model to predict the target word. To validate our method, we perform experiments on more than 20 participants from two brain imaging datasets. Our method achieves 28.91{\%} top-1 accuracy and 54.19{\%} top-5 accuracy on average across all participants, significantly outperforming several baselines. This result indicates that our model can serve as a state-of-the-art baseline for the CMC task. More importantly, it demonstrates that it is feasible to decode a certain word within a large vocabulary from its neural brain activity.
null
null
10.18653/v1/2022.findings-acl.54
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,979
inproceedings
gupta-etal-2022-mitigating
Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.55/
Gupta, Umang and Dhamala, Jwala and Kumar, Varun and Verma, Apurv and Pruksachatkun, Yada and Krishna, Satyapriya and Gupta, Rahul and Chang, Kai-Wei and Ver Steeg, Greg and Galstyan, Aram
Findings of the Association for Computational Linguistics: ACL 2022
658--678
Language models excel at generating coherent text, and model compression techniques such as knowledge distillation have enabled their use in resource-constrained settings. However, these models can be biased in multiple ways, including the unfounded association of male and female genders with gender-neutral professions. Therefore, knowledge distillation without any fairness constraints may preserve or exaggerate the teacher model's biases onto the distilled model. To this end, we present a novel approach to mitigate gender disparity in text generation by learning a fair model during knowledge distillation. We propose two modifications to the base knowledge distillation based on counterfactual role reversal{---}modifying teacher probabilities and augmenting the training set. We evaluate gender polarity across professions in open-ended text generated from the resulting distilled and finetuned GPT{--}2 models and demonstrate a substantial reduction in gender disparity with only a minor compromise in utility. Finally, we observe that language models that reduce gender polarity in language generation do not improve embedding fairness or downstream classification fairness.
null
null
10.18653/v1/2022.findings-acl.55
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,980
inproceedings
akash-etal-2022-domain
Domain Representative Keywords Selection: A Probabilistic Approach
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.56/
Akash, Pritom Saha and Huang, Jie and Chang, Kevin and Li, Yunyao and Popa, Lucian and Zhai, ChengXiang
Findings of the Association for Computational Linguistics: ACL 2022
679--692
We propose a probabilistic approach to select a subset of a \textit{target domain representative keywords} from a candidate set, contrasting with a context domain. Such a task is crucial for many downstream tasks in natural language processing. To contrast the target domain and the context domain, we adapt the \textit{two-component mixture model} concept to generate a distribution of candidate keywords. It provides more importance to the \textit{distinctive} keywords of the target domain than common keywords contrasting with the context domain. To support the \textit{representativeness} of the selected keywords towards the target domain, we introduce an \textit{optimization algorithm} for selecting the subset from the generated candidate distribution. We have shown that the optimization algorithm can be efficiently implemented with a near-optimal approximation guarantee. Finally, extensive experiments on multiple domains demonstrate the superiority of our approach over other baselines for the tasks of keyword summary generation and trending keywords selection.
null
null
10.18653/v1/2022.findings-acl.56
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,981
inproceedings
feng-etal-2022-hierarchical
Hierarchical Inductive Transfer for Continual Dialogue Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.57/
Feng, Shaoxiong and Ren, Xuancheng and Li, Kan and Sun, Xu
Findings of the Association for Computational Linguistics: ACL 2022
693--699
Pre-trained models have achieved excellent performance on the dialogue task. However, for the continual increase of online chit-chat scenarios, directly fine-tuning these models for each of the new tasks not only explodes the capacity of the dialogue system on the embedded devices but also causes knowledge forgetting on pre-trained models and knowledge interference among diverse dialogue tasks. In this work, we propose a hierarchical inductive transfer framework to learn and deploy the dialogue skills continually and efficiently. First, we introduce the adapter module into pre-trained models for learning new dialogue tasks. As the only trainable module, it is beneficial for the dialogue system on the embedded devices to acquire new dialogue skills with negligible additional parameters. Then, for alleviating knowledge interference between tasks yet benefiting the regularization between them, we further design hierarchical inductive transfer that enables new tasks to use general knowledge in the base adapter without being misled by diverse knowledge in task-specific adapters. Empirical evaluation and analysis indicate that our framework obtains comparable performance under deployment-friendly model capacity.
null
null
10.18653/v1/2022.findings-acl.57
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,982
inproceedings
arora-etal-2022-exposure
Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.58/
Arora, Kushal and El Asri, Layla and Bahuleyan, Hareesh and Cheung, Jackie
Findings of the Association for Computational Linguistics: ACL 2022
700--710
Current language generation models suffer from issues such as repetition, incoherence, and hallucinations. An often-repeated hypothesis for this brittleness of generation models is that it is caused by the training and the generation procedure mismatch, also referred to as exposure bias. In this paper, we verify this hypothesis by analyzing exposure bias from an imitation learning perspective. We show that exposure bias leads to an accumulation of errors during generation, analyze why perplexity fails to capture this accumulation of errors, and empirically show that this accumulation results in poor generation quality.
null
null
10.18653/v1/2022.findings-acl.58
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,983
inproceedings
jia-etal-2022-question
Question Answering Infused Pre-training of General-Purpose Contextualized Representations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.59/
Jia, Robin and Lewis, Mike and Zettlemoyer, Luke
Findings of the Association for Computational Linguistics: ACL 2022
711--728
We propose a pre-training objective based on question answering (QA) for learning general-purpose contextual representations, motivated by the intuition that the representation of a phrase in a passage should encode all questions that the phrase can answer in context. To this end, we train a bi-encoder QA model, which independently encodes passages and questions, to match the predictions of a more accurate cross-encoder model on 80 million synthesized QA pairs. By encoding QA-relevant information, the bi-encoder's token-level representations are useful for non-QA downstream tasks without extensive (or in some cases, any) fine-tuning. We show large improvements over both RoBERTa-large and previous state-of-the-art results on zero-shot and few-shot paraphrase detection on four datasets, few-shot named entity recognition on two datasets, and zero-shot sentiment analysis on three datasets.
null
null
10.18653/v1/2022.findings-acl.59
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,984
inproceedings
guo-etal-2022-automatic
Automatic Song Translation for Tonal Languages
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.60/
Guo, Fenfei and Zhang, Chen and Zhang, Zhirui and He, Qixin and Zhang, Kejun and Xie, Jun and Boyd-Graber, Jordan
Findings of the Association for Computational Linguistics: ACL 2022
729--743
This paper develops automatic song translation (AST) for tonal languages and addresses the unique challenge of aligning words' tones with melody of a song in addition to conveying the original meaning. We propose three criteria for effective AST{---}preserving meaning, singability and intelligibility{---}and design metrics for these criteria. We develop a new benchmark for English{--}Mandarin song translation and develop an unsupervised AST system, Guided AliGnment for Automatic Song Translation (GagaST), which combines pre-training with three decoding constraints. Both automatic and human evaluations show GagaST successfully balances semantics and singability.
null
null
10.18653/v1/2022.findings-acl.60
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,985
inproceedings
su-etal-2022-read
Read before Generate! Faithful Long Form Question Answering with Machine Reading
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.61/
Su, Dan and Li, Xiaoguang and Zhang, Jindi and Shang, Lifeng and Jiang, Xin and Liu, Qun and Fung, Pascale
Findings of the Association for Computational Linguistics: ACL 2022
744--756
Long-form question answering (LFQA) aims to generate a paragraph-length answer for a given question. While current work on LFQA using large pre-trained models for generation is effective at producing fluent and somewhat relevant content, one primary challenge lies in how to generate a faithful answer that has less hallucinated content. We propose a new end-to-end framework that jointly models answer generation and machine reading. The key idea is to augment the generation model with fine-grained, answer-related salient information which can be viewed as an emphasis on faithful facts. State-of-the-art results on two LFQA datasets, ELI5 and MS MARCO, demonstrate the effectiveness of our method, in comparison with strong baselines on automatic and human evaluation metrics. A detailed analysis further proves the competency of our methods in generating fluent, relevant, and more faithful answers.
null
null
10.18653/v1/2022.findings-acl.61
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,986
inproceedings
liu-etal-2022-simple
A Simple yet Effective Relation Information Guided Approach for Few-Shot Relation Extraction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.62/
Liu, Yang and Hu, Jinpeng and Wan, Xiang and Chang, Tsung-Hui
Findings of the Association for Computational Linguistics: ACL 2022
757--763
Few-Shot Relation Extraction aims at predicting the relation for a pair of entities in a sentence by training with a few labelled examples in each relation. Some recent works have introduced relation information (i.e., relation labels or descriptions) to assist model learning based on Prototype Network. However, most of them constrain the prototypes of each relation class implicitly with relation information, generally through designing complex network structures, like generating hybrid features, combining with contrastive learning or attention networks. We argue that relation information can be introduced more explicitly and effectively into the model. Thus, this paper proposes a direct addition approach to introduce relation information. Specifically, for each relation class, the relation representation is first generated by concatenating two views of relations (i.e., [CLS] token embedding and the mean value of embeddings of all tokens) and then directly added to the original prototype for both train and prediction. Experimental results on the benchmark dataset FewRel 1.0 show significant improvements and achieve comparable results to the state-of-the-art, which demonstrates the effectiveness of our proposed approach. Besides, further analyses verify that the direct addition is a much more effective way to integrate the relation representations and the original prototypes.
null
null
10.18653/v1/2022.findings-acl.62
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,987
inproceedings
khetan-etal-2022-mimicause
{MIMIC}ause: {R}epresentation and automatic extraction of causal relation types from clinical notes
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.63/
Khetan, Vivek and Rizvi, Md Imbesat and Huber, Jessica and Bartusiak, Paige and Sacaleanu, Bogdan and Fano, Andrew
Findings of the Association for Computational Linguistics: ACL 2022
764--773
Understanding causal narratives communicated in clinical notes can help make strides towards personalized healthcare. Extracted causal information from clinical notes can be combined with structured EHR data such as patients' demographics, diagnoses, and medications. This will enhance healthcare providers' ability to identify aspects of a patient's story communicated in the clinical notes and help make more informed decisions. In this work, we propose annotation guidelines, develop an annotated corpus and provide baseline scores to identify types and direction of causal relations between a pair of biomedical concepts in clinical notes; communicated implicitly or explicitly, identified either in a single sentence or across multiple sentences. We annotate a total of 2714 de-identified examples sampled from the 2018 n2c2 shared task dataset and train four different language model based architectures. Annotation based on our guidelines achieved a high inter-annotator agreement i.e. Fleiss' kappa ($\kappa$) score of 0.72, and our model for identification of causal relations achieved a macro F1 score of 0.56 on the test data. The high inter-annotator agreement for clinical text shows the quality of our annotation guidelines while the provided baseline F1 score sets the direction for future research towards understanding narratives in clinical texts.
null
null
10.18653/v1/2022.findings-acl.63
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,988
inproceedings
zhao-etal-2022-compressing
Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.64/
Zhao, Xuandong and Yu, Zhiguo and Wu, Ming and Li, Lei
Findings of the Association for Computational Linguistics: ACL 2022
774--781
How to learn highly compact yet effective sentence representation? Pre-trained language models have been effective in many NLP tasks. However, these models are often huge and produce large sentence embeddings. Moreover, there is a big performance gap between large and small models. In this paper, we propose Homomorphic Projective Distillation (HPD) to learn compressed sentence embeddings. Our method augments a small Transformer encoder model with learnable projection layers to produce compact representations while mimicking a large pre-trained language model to retain the sentence representation quality. We evaluate our method with different model sizes on both semantic textual similarity (STS) and semantic retrieval (SR) tasks. Experiments show that our method achieves 2.7-4.5 points performance gain on STS tasks compared with previous best representations of the same size. In SR tasks, our method improves retrieval speed (8.2{\texttimes}) and memory usage (8.0{\texttimes}) compared with state-of-the-art large models. Our implementation is available at \url{https://github.com/XuandongZhao/HPD}.
null
null
10.18653/v1/2022.findings-acl.64
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,989
inproceedings
seo-etal-2022-debiasing
Debiasing Event Understanding for Visual Commonsense Tasks
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.65/
Seo, Minji and Jung, YeonJoon and Choi, Seungtaek and Hwang, Seung-won and Liu, Bei
Findings of the Association for Computational Linguistics: ACL 2022
782--787
We study event understanding as a critical step towards visual commonsense tasks. Meanwhile, we argue that current object-based event understanding is purely likelihood-based, leading to incorrect event prediction, due to biased correlation between events and objects. We propose to mitigate such biases with $do$-calculus, proposed in causality research, but overcoming its limited robustness, by an optimized aggregation with association-based prediction. We show the effectiveness of our approach, intrinsically by comparing our generated events with ground-truth event annotation, and extrinsically by downstream commonsense tasks.
null
null
10.18653/v1/2022.findings-acl.65
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,990
inproceedings
zhang-etal-2022-fact
Fact-Tree Reasoning for N-ary Question Answering over Knowledge Graphs
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.66/
Zhang, Yao and Li, Peiyao and Liang, Hongru and Jatowt, Adam and Yang, Zhenglu
Findings of the Association for Computational Linguistics: ACL 2022
788--802
The current Question Answering over Knowledge Graphs (KGQA) task mainly focuses on performing answer reasoning upon KGs with binary facts. However, it neglects the n-ary facts, which contain more than two entities. In this work, we highlight a more challenging but under-explored task: n-ary KGQA, i.e., answering n-ary facts questions upon n-ary KGs. Nevertheless, the multi-hop reasoning framework popular in the binary KGQA task is not directly applicable to n-ary KGQA. We propose two feasible improvements: 1) upgrade the basic reasoning unit from entity or relation to fact, and 2) upgrade the reasoning structure from chain to tree. Therefore, we propose a novel fact-tree reasoning framework, FacTree, which integrates the above two upgrades. FacTree transforms the question into a fact tree and performs iterative fact reasoning on the fact tree to infer the correct answer. Experimental results on the n-ary KGQA dataset we constructed and two binary KGQA benchmarks demonstrate the effectiveness of FacTree compared with state-of-the-art methods.
null
null
10.18653/v1/2022.findings-acl.66
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,991
inproceedings
wang-etal-2022-deepstruct
{D}eep{S}truct: Pretraining of Language Models for Structure Prediction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.67/
Wang, Chenguang and Liu, Xiao and Chen, Zui and Hong, Haoyun and Tang, Jie and Song, Dawn
Findings of the Association for Computational Linguistics: ACL 2022
803--823
We introduce a method for improving the structural understanding abilities of language models. Unlike previous approaches that finetune the models with task-specific augmentation, we pretrain language models to generate structures from the text on a collection of task-agnostic corpora. Our structure pretraining enables zero-shot transfer of the learned knowledge that models have about the structure tasks. We study the performance of this approach on 28 datasets, spanning 10 structure prediction tasks including open information extraction, joint entity and relation extraction, named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, factual probe, intent detection, and dialogue state tracking. We further enhance the pretraining with the task-specific training sets. We show that a 10B parameter language model transfers non-trivially to most tasks and obtains state-of-the-art performance on 21 of 28 datasets that we evaluate. Our code and datasets will be made publicly available.
null
null
10.18653/v1/2022.findings-acl.67
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,992
inproceedings
atwell-etal-2022-change
The Change that Matters in Discourse Parsing: Estimating the Impact of Domain Shift on Parser Error
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.68/
Atwell, Katherine and Sicilia, Anthony and Hwang, Seong Jae and Alikhani, Malihe
Findings of the Association for Computational Linguistics: ACL 2022
824--845
Discourse analysis allows us to attain inferences of a text document that extend beyond the sentence-level. The current performance of discourse models is very low on texts outside of the training distribution's coverage, diminishing the practical utility of existing models. There is a need for a measure that can inform us to what extent our model generalizes from the training to the test sample when these samples may be drawn from distinct distributions. While this can be estimated via distribution shift, we argue that this does not directly correlate with change in the observed error of a classifier (i.e. error-gap). Thus, we propose to use a statistic from the theoretical domain adaptation literature which can be directly tied to error-gap. We study the bias of this statistic as an estimator of error-gap both theoretically and through a large-scale empirical study of over 2400 experiments on 6 discourse datasets from domains including, but not limited to: news, biomedical texts, TED talks, Reddit posts, and fiction. Our results not only motivate our proposal and help us to understand its limitations, but also provide insight on the properties of discourse models and datasets which improve performance in domain adaptation. For instance, we find that non-news datasets are slightly easier to transfer to than news datasets when the training and test sets are very different. Our code and an associated Python package are available to allow practitioners to make more informed model and dataset choices.
null
null
10.18653/v1/2022.findings-acl.68
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,993
inproceedings
safaya-etal-2022-mukayese
Mukayese: {T}urkish {NLP} Strikes Back
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.69/
Safaya, Ali and Kurtulu{\c{s}}, Emirhan and Goktogan, Arda and Yuret, Deniz
Findings of the Association for Computational Linguistics: ACL 2022
846--863
Having sufficient resources for language X lifts it from the under-resourced languages class, but not necessarily from the under-researched class. In this paper, we address the problem of the absence of organized benchmarks in the Turkish language. We demonstrate that languages such as Turkish are left behind the state-of-the-art in NLP applications. As a solution, we present Mukayese, a set of NLP benchmarks for the Turkish language that contains several NLP tasks. We work on one or more datasets for each benchmark and present two or more baselines. Moreover, we present four new benchmarking datasets in Turkish for language modeling, sentence segmentation, and spell checking. All datasets and baselines are available under: \url{https://github.com/alisafaya/mukayese}
null
null
10.18653/v1/2022.findings-acl.69
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,994
inproceedings
zhang-etal-2022-virtual
Virtual Augmentation Supported Contrastive Learning of Sentence Representations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.70/
Zhang, Dejiao and Xiao, Wei and Zhu, Henghui and Ma, Xiaofei and Arnold, Andrew
Findings of the Association for Computational Linguistics: ACL 2022
864--876
Despite profound successes, contrastive representation learning relies on carefully designed data augmentations using domain-specific knowledge. This challenge is magnified in natural language processing, where no general rules exist for data augmentation due to the discrete nature of natural language. We tackle this challenge by presenting a Virtual augmentation Supported Contrastive Learning of sentence representations (VaSCL). Originating from the interpretation that data augmentation essentially constructs the neighborhoods of each training instance, we, in turn, utilize the neighborhood to generate effective data augmentations. Leveraging the large training batch size of contrastive learning, we approximate the neighborhood of an instance via its K-nearest in-batch neighbors in the representation space. We then define an instance discrimination task regarding the neighborhood and generate the virtual augmentation in an adversarial training manner. We access the performance of VaSCL on a wide range of downstream tasks and set a new state-of-the-art for unsupervised sentence representation learning.
null
null
10.18653/v1/2022.findings-acl.70
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,995
inproceedings
zhang-etal-2022-moefication
{M}o{E}fication: Transformer Feed-forward Layers are Mixtures of Experts
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.71/
Zhang, Zhengyan and Lin, Yankai and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie
Findings of the Association for Computational Linguistics: ACL 2022
877--890
Recent work has shown that feed-forward networks (FFNs) in pre-trained Transformers are a key component, storing various linguistic and factual knowledge. However, the computational patterns of FFNs are still unclear. In this work, we study the computational patterns of FFNs and observe that most inputs only activate a tiny ratio of neurons of FFNs. This phenomenon is similar to the sparsity of the human brain, which drives research on functional partitions of the human brain. To verify whether functional partitions also emerge in FFNs, we propose to convert a model into its MoE version with the same parameters, namely MoEfication. Specifically, MoEfication consists of two phases: (1) splitting the parameters of FFNs into multiple functional partitions as experts, and (2) building expert routers to decide which experts will be used for each input. Experimental results show that MoEfication can conditionally use 10{\%} to 30{\%} of FFN parameters while maintaining over 95{\%} original performance for different models on various downstream tasks. Besides, MoEfication brings two advantages: (1) it significantly reduces the FLOPS of inference, i.e., 2x speedup with 25{\%} of FFN parameters, and (2) it provides a fine-grained perspective to study the inner mechanism of FFNs. The source code of this paper can be obtained from \url{https://github.com/thunlp/MoEfication}.
null
null
10.18653/v1/2022.findings-acl.71
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,996
inproceedings
hung-etal-2022-ds
{DS}-{TOD}: Efficient Domain Specialization for Task-Oriented Dialog
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.72/
Hung, Chia-Chien and Lauscher, Anne and Ponzetto, Simone and Glava{\v{s}}, Goran
Findings of the Association for Computational Linguistics: ACL 2022
891--904
Recent work has shown that self-supervised dialog-specific pretraining on large conversational datasets yields substantial gains over traditional language modeling (LM) pretraining in downstream task-oriented dialog (TOD). These approaches, however, exploit general dialogic corpora (e.g., Reddit) and thus presumably fail to reliably embed domain-specific knowledge useful for concrete downstream TOD domains. In this work, we investigate the effects of domain specialization of pretrained language models (PLMs) for TOD. Within our DS-TOD framework, we first automatically extract salient domain-specific terms, and then use them to construct DomainCC and DomainReddit {--} resources that we leverage for domain-specific pretraining, based on (i) masked language modeling (MLM) and (ii) response selection (RS) objectives, respectively. We further propose a resource-efficient and modular domain specialization by means of domain adapters {--} additional parameter-light layers in which we encode the domain knowledge. Our experiments with prominent TOD tasks {--} dialog state tracking (DST) and response retrieval (RR) {--} encompassing five domains from the MultiWOZ benchmark demonstrate the effectiveness of DS-TOD. Moreover, we show that the light-weight adapter-based specialization (1) performs comparably to full fine-tuning in single domain setups and (2) is particularly suitable for multi-domain specialization, where besides advantageous computational footprint, it can offer better TOD performance.
null
null
10.18653/v1/2022.findings-acl.72
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,997
inproceedings
wang-etal-2022-distinguishing
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.73/
Wang, Jiayi and Bao, Rongzhou and Zhang, Zhuosheng and Zhao, Hai
Findings of the Association for Computational Linguistics: ACL 2022
905--915
Recently, the problem of robustness of pre-trained language models (PrLMs) has received increasing research interest. Latest studies on adversarial attacks achieve high attack success rates against PrLMs, claiming that PrLMs are not robust. However, we find that the adversarial samples that PrLMs fail are mostly non-natural and do not appear in reality. We question the validity of the current evaluation of robustness of PrLMs based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples. We also investigate two applications of the anomaly detector: (1) In data augmentation, we employ the anomaly detector to force generating augmented data that are distinguished as non-natural, which brings larger gains to the accuracy of PrLMs. (2) We apply the anomaly detector to a defense framework to enhance the robustness of PrLMs. It can be used to defend all types of attacks and achieves higher accuracy on both adversarial samples and compliant samples than other defense frameworks.
null
null
10.18653/v1/2022.findings-acl.73
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,998
inproceedings
wang-etal-2022-learning
Learning Adaptive Axis Attentions in Fine-tuning: Beyond Fixed Sparse Attention Patterns
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.74/
Wang, Zihan and Gu, Jiuxiang and Kuen, Jason and Zhao, Handong and Morariu, Vlad and Zhang, Ruiyi and Nenkova, Ani and Sun, Tong and Shang, Jingbo
Findings of the Association for Computational Linguistics: ACL 2022
916--925
We present a comprehensive study of sparse attention patterns in Transformer models. We first question the need for pre-training with sparse attention and present experiments showing that an efficient fine-tuning only approach yields a slightly worse but still competitive model. Then we compare the widely used local attention pattern and the less-well-studied global attention pattern, demonstrating that global patterns have several unique advantages. We also demonstrate that a flexible approach to attention, with different patterns across different layers of the model, is beneficial for some tasks. Drawing on this insight, we propose a novel Adaptive Axis Attention method, which learns{---}during fine-tuning{---}different attention patterns for each Transformer layer depending on the downstream task. Rather than choosing a fixed attention pattern, the adaptive axis attention method identifies important tokens{---}for each task and model layer{---}and focuses attention on those. It does not require pre-training to accommodate the sparse patterns and demonstrates competitive and sometimes better performance against fixed sparse attention patterns that require resource-intensive pre-training.
null
null
10.18653/v1/2022.findings-acl.74
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,999
inproceedings
li-etal-2022-using
Using Interactive Feedback to Improve the Accuracy and Explainability of Question Answering Systems Post-Deployment
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.75/
Li, Zichao and Sharma, Prakhar and Lu, Xing Han and Cheung, Jackie and Reddy, Siva
Findings of the Association for Computational Linguistics: ACL 2022
926--937
Most research on question answering focuses on the pre-deployment stage; i.e., building an accurate model for deployment. In this paper, we ask the question: Can we improve QA systems further post-deployment based on user interactions? We focus on two kinds of improvements: 1) improving the QA system's performance itself, and 2) providing the model with the ability to explain the correctness or incorrectness of an answer. We collect a retrieval-based QA dataset, FeedbackQA, which contains interactive feedback from users. We collect this dataset by deploying a base QA system to crowdworkers who then engage with the system and provide feedback on the quality of its answers. The feedback contains both structured ratings and unstructured natural language explanations. We train a neural model with this feedback data that can generate explanations and re-score answer candidates. We show that feedback data not only improves the accuracy of the deployed QA system but also other stronger non-deployed systems. The generated explanations also help users make informed decisions about the correctness of answers.
null
null
10.18653/v1/2022.findings-acl.75
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,000
inproceedings
ferreira-etal-2022-integer
To be or not to be an Integer? Encoding Variables for Mathematical Text
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.76/
Ferreira, Deborah and Thayaparan, Mokanarangan and Valentino, Marco and Rozanova, Julia and Freitas, Andre
Findings of the Association for Computational Linguistics: ACL 2022
938--948
The application of Natural Language Inference (NLI) methods over large textual corpora can facilitate scientific discovery, reducing the gap between current research and the available large-scale scientific knowledge. However, contemporary NLI models are still limited in interpreting mathematical knowledge written in Natural Language, even though mathematics is an integral part of scientific argumentation for many disciplines. One of the fundamental requirements towards mathematical language understanding is the creation of models able to meaningfully represent variables. This problem is particularly challenging since the meaning of a variable should be assigned exclusively from its defining type, i.e., the representation of a variable should come from its context. Recent research has formalised the variable typing task, a benchmark for the understanding of abstract mathematical types and variables in a sentence. In this work, we propose VarSlot, a Variable Slot-based approach, which not only delivers state-of-the-art results in the task of variable typing, but is also able to create context-based representations for variables.
null
null
10.18653/v1/2022.findings-acl.76
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,001
inproceedings
dehghan-etal-2022-grs
{GRS}: Combining Generation and Revision in Unsupervised Sentence Simplification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.77/
Dehghan, Mohammad and Kumar, Dhruv and Golab, Lukasz
Findings of the Association for Computational Linguistics: ACL 2022
949--960
We propose GRS: an unsupervised approach to sentence simplification that combines text generation and text revision. We start with an iterative framework in which an input sentence is revised using explicit edit operations, and add paraphrasing as a new edit operation. This allows us to combine the advantages of generative and revision-based approaches: paraphrasing captures complex edit operations, and the use of explicit edit operations in an iterative manner provides controllability and interpretability. We demonstrate these advantages of GRS compared to existing methods on the Newsela and ASSET datasets.
null
null
10.18653/v1/2022.findings-acl.77
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,002
inproceedings
mager-etal-2022-bpe
{BPE} vs. Morphological Segmentation: A Case Study on Machine Translation of Four Polysynthetic Languages
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.78/
Mager, Manuel and Oncevay, Arturo and Mager, Elisabeth and Kann, Katharina and Vu, Thang
Findings of the Association for Computational Linguistics: ACL 2022
961--971
Morphologically-rich polysynthetic languages present a challenge for NLP systems due to data sparsity, and a common strategy to handle this issue is to apply subword segmentation. We investigate a wide variety of supervised and unsupervised morphological segmentation methods for four polysynthetic languages: Nahuatl, Raramuri, Shipibo-Konibo, and Wixarika. Then, we compare the morphologically inspired segmentation methods against Byte-Pair Encodings (BPEs) as inputs for machine translation (MT) when translating to and from Spanish. We show that for all language pairs except for Nahuatl, an unsupervised morphological segmentation algorithm outperforms BPEs consistently and that, although supervised methods achieve better segmentation scores, they under-perform in MT challenges. Finally, we contribute two new morphological segmentation datasets for Raramuri and Shipibo-Konibo, and a parallel corpus for Raramuri{--}Spanish.
null
null
10.18653/v1/2022.findings-acl.78
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,003
inproceedings
zhou-etal-2022-distributed
Distributed {NLI}: Learning to Predict Human Opinion Distributions for Language Reasoning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.79/
Zhou, Xiang and Nie, Yixin and Bansal, Mohit
Findings of the Association for Computational Linguistics: ACL 2022
972--987
We introduce distributed NLI, a new NLU task whose goal is to predict the distribution of human judgements for natural language inference. We show that by applying additional distribution estimation methods, namely, Monte Carlo (MC) Dropout, Deep Ensemble, Re-Calibration, and Distribution Distillation, models can capture human judgement distribution more effectively than the softmax baseline. We show that MC Dropout is able to achieve decent performance without any distribution annotations while Re-Calibration can give further improvements with extra distribution annotations, suggesting the value of multiple annotations for one example in modeling the distribution of human judgements. Despite these improvements, the best results are still far below the estimated human upper-bound, indicating that predicting the distribution of human judgements is still an open, challenging problem with much room for improvement. We showcase the common errors for MC Dropout and Re-Calibration. Finally, we give guidelines on the usage of these methods with different levels of data availability and encourage future work on modeling the human opinion distribution for language reasoning.
null
null
10.18653/v1/2022.findings-acl.79
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,004
inproceedings
wiemerslage-etal-2022-morphological
Morphological Processing of Low-Resource Languages: Where We Are and What's Next
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.80/
Wiemerslage, Adam and Silfverberg, Miikka and Yang, Changbing and McCarthy, Arya and Nicolai, Garrett and Colunga, Eliana and Kann, Katharina
Findings of the Association for Computational Linguistics: ACL 2022
988--1007
Automatic morphological processing can aid downstream natural language processing applications, especially for low-resource languages, and assist language documentation efforts for endangered languages. Having long been multilingual, the field of computational morphology is increasingly moving towards approaches suitable for languages with minimal or no annotated resources. First, we survey recent developments in computational morphology with a focus on low-resource languages. Second, we argue that the field is ready to tackle the logical next challenge: understanding a language's morphology from raw text alone. We perform an empirical study on a truly unsupervised version of the paradigm completion task and show that, while existing state-of-the-art models, bridged by two newly proposed models we devise, perform reasonably, there is still much room for improvement. The stakes are high: solving this task will increase the language coverage of morphological resources by orders of magnitude.
null
null
10.18653/v1/2022.findings-acl.80
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,005
inproceedings
inoue-etal-2022-learning
Learning and Evaluating Character Representations in Novels
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.81/
Inoue, Naoya and Pethe, Charuta and Kim, Allen and Skiena, Steven
Findings of the Association for Computational Linguistics: ACL 2022
1008--1019
We address the problem of learning fixed-length vector representations of characters in novels. Recent advances in word embeddings have proven successful in learning entity representations from short texts, but fall short on longer documents because they do not capture full book-level information. To overcome the weakness of such text-based embeddings, we propose two novel methods for representing characters: (i) graph neural network-based embeddings from a full corpus-based character network; and (ii) low-dimensional embeddings constructed from the occurrence pattern of characters in each novel. We test the quality of these character embeddings using a new benchmark suite to evaluate character representations, encompassing 12 different tasks. We show that our representation techniques combined with text-based embeddings lead to the best character representations, outperforming text-based embeddings in four tasks. Our dataset and evaluation script will be made publicly available to stimulate additional work in this area.
null
null
10.18653/v1/2022.findings-acl.81
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,006
inproceedings
raina-gales-2022-answer
Answer Uncertainty and Unanswerability in Multiple-Choice Machine Reading Comprehension
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.82/
Raina, Vatsal and Gales, Mark
Findings of the Association for Computational Linguistics: ACL 2022
1020--1034
Machine reading comprehension (MRC) has drawn a lot of attention as an approach for assessing the ability of systems to understand natural language. Usually systems focus on selecting the correct answer to a question given a contextual paragraph. However, for many applications of multiple-choice MRC systems there are two additional considerations. For multiple-choice exams there is often a negative marking scheme; there is a penalty for an incorrect answer. In terms of an MRC system this means that the system is required to have an idea of the uncertainty in the predicted answer. The second consideration is that many multiple-choice questions have the option of none-of-the-above (NOA) indicating that none of the answers is applicable, rather than there always being the correct answer in the list of choices. This paper investigates both of these issues by making use of predictive uncertainty. Whether the system should propose an answer is a direct application of answer uncertainty. There are two possibilities when considering the NOA option. The simplest is to explicitly build a system on data that includes this option. Alternatively uncertainty can be applied to detect whether the other options include the correct answer. If the system is not sufficiently confident it will select NOA. As there is no standard corpus available to investigate these topics, the ReClor corpus is modified by removing the correct answer from a subset of possible answers. A high-performance MRC system is used to evaluate whether answer uncertainty can be applied in these situations. It is shown that uncertainty does allow questions that the system is not confident about to be detected. Additionally it is shown that uncertainty outperforms a system explicitly built with an NOA option.
null
null
10.18653/v1/2022.findings-acl.82
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,007
inproceedings
reuel-etal-2022-measuring
Measuring the Language of Self-Disclosure across Corpora
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.83/
Reuel, Ann-Katrin and Peralta, Sebastian and Sedoc, Jo{\~a}o and Sherman, Garrick and Ungar, Lyle
Findings of the Association for Computational Linguistics: ACL 2022
1035--1047
Being able to reliably estimate self-disclosure {--} a key component of friendship and intimacy {--} from language is important for many psychology studies. We build single-task models on five self-disclosure corpora, but find that these models generalize poorly; the within-domain accuracy of predicted message-level self-disclosure of the best-performing model (mean Pearson's r=0.69) is much higher than the respective across-dataset accuracy (mean Pearson's r=0.32), due to both variations in the corpora (e.g., medical vs. general topics) and labeling instructions (target variables: self-disclosure, emotional disclosure, intimacy). However, some lexical features, such as expression of negative emotions and use of first-person personal pronouns such as 'I', reliably predict self-disclosure across corpora. We develop a multi-task model that yields better results, with an average Pearson's r of 0.37 for out-of-corpora prediction.
null
null
10.18653/v1/2022.findings-acl.83
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,008
inproceedings
kamalloo-etal-2022-chosen
When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.84/
Kamalloo, Ehsan and Rezagholizadeh, Mehdi and Ghodsi, Ali
Findings of the Association for Computational Linguistics: ACL 2022
1048--1062
Data Augmentation (DA) is known to improve the generalizability of deep neural networks. Most existing DA techniques naively add a certain number of augmented samples without considering the quality and the added computational cost of these samples. To tackle this problem, a common strategy, adopted by several state-of-the-art DA methods, is to adaptively generate or re-weight augmented samples with respect to the task objective during training. However, these adaptive DA methods: (1) are computationally expensive and not sample-efficient, and (2) are designed merely for a specific setting. In this work, we present a universal DA technique, called Glitter, to overcome both issues. Glitter can be plugged into any DA method, making training sample-efficient without sacrificing performance. From a pre-generated pool of augmented samples, Glitter adaptively selects a subset of worst-case samples with maximal loss, analogous to adversarial DA. Without altering the training strategy, the task objective can be optimized on the selected subset. Our thorough experiments on the GLUE benchmark, SQuAD, and HellaSwag in three widely used training setups including consistency training, self-distillation and knowledge distillation reveal that Glitter is substantially faster to train and achieves a competitive performance, compared to strong baselines.
null
null
10.18653/v1/2022.findings-acl.84
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,009
inproceedings
ronnqvist-etal-2022-explaining
Explaining Classes through Stable Word Attributions
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.85/
R{\"onnqvist, Samuel and Kyr{\"ol{\"ainen, Aki-Juhani and Myntti, Amanda and Ginter, Filip and Laippala, Veronika
Findings of the Association for Computational Linguistics: ACL 2022
1063--1074
Input saliency methods have recently become a popular tool for explaining predictions of deep learning models in NLP. Nevertheless, there has been little work investigating methods for aggregating prediction-level explanations to the class level, nor has a framework for evaluating such class explanations been established. We explore explanations based on XLM-R and the Integrated Gradients input attribution method, and propose 1) the Stable Attribution Class Explanation method (SACX) to extract keyword lists of classes in text classification tasks, and 2) a framework for the systematic evaluation of the keyword lists. We find that explanations of individual predictions are prone to noise, but that stable explanations can be effectively identified through repeated training and explanation. We evaluate on web register data and show that the class explanations are linguistically meaningful and distinguishing of the classes.
null
null
10.18653/v1/2022.findings-acl.85
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,010
inproceedings
carton-etal-2022-learn
What to Learn, and How: {T}oward Effective Learning from Rationales
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.86/
Carton, Samuel and Kanoria, Surya and Tan, Chenhao
Findings of the Association for Computational Linguistics: ACL 2022
1075--1088
Learning from rationales seeks to augment model prediction accuracy using human-annotated rationales (i.e. subsets of input tokens) that justify their chosen labels, often in the form of intermediate or multitask supervision. While intuitive, this idea has proven elusive in practice. We make two observations about human rationales via empirical analyses: 1) maximizing rationale supervision accuracy is not necessarily the optimal objective for improving model accuracy; 2) human rationales vary in whether they provide sufficient information for the model to exploit for prediction. Building on these insights, we propose several novel loss functions and learning strategies, and evaluate their effectiveness on three datasets with human rationales. Our results demonstrate consistent improvements over baselines in both label and rationale accuracy, including a 3{\%} accuracy improvement on MultiRC. Our work highlights the importance of understanding properties of human explanations and exploiting them accordingly in model training.
null
null
10.18653/v1/2022.findings-acl.86
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,011
inproceedings
maronikolakis-etal-2022-listening
Listening to Affected Communities to Define Extreme Speech: Dataset and Experiments
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.87/
Maronikolakis, Antonis and Wisiorek, Axel and Nann, Leah and Jabbar, Haris and Udupa, Sahana and Schuetze, Hinrich
Findings of the Association for Computational Linguistics: ACL 2022
1089--1104
Building on current work on multilingual hate speech (e.g., Ousidhoum et al. (2019)) and hate speech reduction (e.g., Sap et al. (2020)), we present XTREMESPEECH, a new hate speech dataset containing 20,297 social media passages from Brazil, Germany, India and Kenya. The key novelty is that we directly involve the affected communities in collecting and annotating the data {--} as opposed to giving companies and governments control over defining and combatting hate speech. This inclusive approach results in datasets more representative of actually occurring online speech and is likely to facilitate the removal of the social media content that marginalized communities view as causing the most harm. Based on XTREMESPEECH, we establish novel tasks with accompanying baselines, provide evidence that cross-country training is generally not feasible due to cultural differences between countries and perform an interpretability analysis of BERT's predictions.
null
null
10.18653/v1/2022.findings-acl.87
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,012
inproceedings
attanasio-etal-2022-entropy
Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.88/
Attanasio, Giuseppe and Nozza, Debora and Hovy, Dirk and Baralis, Elena
Findings of the Association for Computational Linguistics: ACL 2022
1105--1119
Natural Language Processing (NLP) models risk overfitting to specific terms in the training data, thereby reducing their performance, fairness, and generalizability. E.g., neural hate speech detection models are strongly influenced by identity terms like gay, or women, resulting in false positives, severe unintended bias, and lower performance. Most mitigation techniques use lists of identity terms or samples from the target domain during training. However, this approach requires a-priori knowledge and introduces further bias if important terms are neglected. Instead, we propose a knowledge-free Entropy-based Attention Regularization (EAR) to discourage overfitting to training-specific terms. An additional objective function penalizes tokens with low self-attention entropy. We fine-tune BERT via EAR: the resulting model matches or exceeds state-of-the-art performance for hate speech classification and bias metrics on three benchmark corpora in English and Italian. EAR also reveals overfitting terms, i.e., terms most likely to induce bias, to help identify their effect on the model, task, and predictions.
null
null
10.18653/v1/2022.findings-acl.88
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,013
inproceedings
schuster-hegelich-2022-berts
From {BERT}'s {P}oint of {V}iew: {R}evealing the {P}revailing {C}ontextual {D}ifferences
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.89/
Schuster, Carolin M. and Hegelich, Simon
Findings of the Association for Computational Linguistics: ACL 2022
1120--1138
Though successfully applied in research and industry, large pretrained language models of the BERT family are not yet fully understood. While much research in the field of BERTology has tested whether specific knowledge can be extracted from layer activations, we invert the popular probing design to analyze the prevailing differences and clusters in BERT's high-dimensional space. By extracting coarse features from masked token representations and predicting them by probing models with access to only partial information, we can apprehend the variation from 'BERT's point of view'. By applying our new methodology to different datasets, we show how much the differences can be described by syntax but further how they are to a great extent shaped by the simplest positional information.
null
null
10.18653/v1/2022.findings-acl.89
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,014
inproceedings
an-etal-2022-learning
Learning Bias-reduced Word Embeddings Using Dictionary Definitions
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.90/
An, Haozhe and Liu, Xiaojiang and Zhang, Donald
Findings of the Association for Computational Linguistics: ACL 2022
1139--1152
Pre-trained word embeddings, such as GloVe, have shown undesirable gender, racial, and religious biases. To address this problem, we propose DD-GloVe, a train-time debiasing algorithm to learn word embeddings by leveraging $\underline{d}$ictionary $\underline{d}$efinitions. We introduce dictionary-guided loss functions that encourage word embeddings to be similar to their relatively neutral dictionary definition representations. Existing debiasing algorithms typically need a pre-compiled list of seed words to represent the bias direction, along which biased information gets removed. Producing this list involves subjective decisions and it might be difficult to obtain for some types of biases. We automate the process of finding seed words: our algorithm starts from a single pair of initial seed words and automatically finds more words whose definitions display similar attributes. We demonstrate the effectiveness of our approach with benchmark evaluations and empirical analyses. Our code is available at \url{https://github.com/haozhe-an/DD-GloVe}.
null
null
10.18653/v1/2022.findings-acl.90
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,015
inproceedings
yang-etal-2022-knowledge
Knowledge Graph Embedding by Adaptive Limit Scoring Loss Using Dynamic Weighting Strategy
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.91/
Yang, Jinfa and Ying, Xianghua and Shi, Yongjie and Tong, Xin and Wang, Ruibin and Chen, Taiyan and Xing, Bowei
Findings of the Association for Computational Linguistics: ACL 2022
1153--1163
Knowledge graph embedding aims to represent entities and relations as low-dimensional vectors, which is an effective way for predicting missing links in knowledge graphs. Designing a strong and effective loss framework is essential for knowledge graph embedding models to distinguish between correct and incorrect triplets. The classic margin-based ranking loss limits the scores of positive and negative triplets to have a suitable margin. The recently proposed Limit-based Scoring Loss independently limits the range of positive and negative triplet scores. However, these loss frameworks use equal or fixed penalty terms to reduce the scores of positive and negative sample pairs, which is inflexible in optimization. Our intuition is that if a triplet score deviates far from the optimum, it should be emphasized. To this end, we propose Adaptive Limit Scoring Loss, which simply re-weights each triplet to highlight the less-optimized triplet scores. We apply this loss framework to several knowledge graph embedding models such as TransE, TransH and ComplEx. The experimental results on link prediction and triplet classification show that our proposed method has achieved performance on par with the state of the art.
null
null
10.18653/v1/2022.findings-acl.91
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,016
inproceedings
ignat-etal-2022-ocr
{OCR} Improves Machine Translation for Low-Resource Languages
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.92/
Ignat, Oana and Maillard, Jean and Chaudhary, Vishrav and Guzm{\'a}n, Francisco
Findings of the Association for Computational Linguistics: ACL 2022
1164--1174
We aim to investigate the performance of current OCR systems on low-resource languages and low-resource scripts. We introduce and make publicly available a novel benchmark, OCR4MT, consisting of real and synthetic data, enriched with noise, for 60 low-resource languages in low-resource scripts. We evaluate state-of-the-art OCR systems on our benchmark and analyse the most common errors. We show that OCR monolingual data is a valuable resource that can increase the performance of Machine Translation models when used in backtranslation. We then perform an ablation study to investigate how OCR errors impact Machine Translation performance and determine the minimum level of OCR quality needed for the monolingual data to be useful for Machine Translation.
null
null
10.18653/v1/2022.findings-acl.92
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,017
inproceedings
yu-etal-2022-cocolm
{C}o{C}o{LM}: Complex Commonsense Enhanced Language Model with Discourse Relations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.93/
Yu, Changlong and Zhang, Hongming and Song, Yangqiu and Ng, Wilfred
Findings of the Association for Computational Linguistics: ACL 2022
1175--1187
Large-scale pre-trained language models have demonstrated strong knowledge representation ability. However, recent studies suggest that even though these giant models contain rich simple commonsense knowledge (e.g., bird can fly and fish can swim.), they often struggle with complex commonsense knowledge that involves multiple eventualities (verb-centric phrases, e.g., identifying the relationship between {\textquotedblleft}Jim yells at Bob{\textquotedblright} and {\textquotedblleft}Bob is upset{\textquotedblright}). To address this issue, in this paper, we propose to help pre-trained language models better incorporate complex commonsense knowledge. Unlike direct fine-tuning approaches, we do not focus on a specific task and instead propose a general language model named CoCoLM. Through the careful training over a large-scale eventuality knowledge graph ASER, we successfully teach pre-trained language models (i.e., BERT and RoBERTa) rich multi-hop commonsense knowledge among eventualities. Experiments on multiple commonsense tasks that require the correct understanding of eventualities demonstrate the effectiveness of CoCoLM.
null
null
10.18653/v1/2022.findings-acl.93
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,018
inproceedings
maheshwari-etal-2022-learning
Learning to Robustly Aggregate Labeling Functions for Semi-supervised Data Programming
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.94/
Maheshwari, Ayush and Killamsetty, Krishnateja and Ramakrishnan, Ganesh and Iyer, Rishabh and Danilevsky, Marina and Popa, Lucian
Findings of the Association for Computational Linguistics: ACL 2022
1188--1202
A critical bottleneck in supervised machine learning is the need for large amounts of labeled data, which is expensive and time-consuming to obtain. Although a small amount of labeled data cannot be used to train a model, it can be used effectively for the generation of human-interpretable labeling functions (LFs). These LFs, in turn, have been used to generate a large amount of additional noisy labeled data in a paradigm that is now commonly referred to as data programming. Previous methods of generating LFs do not attempt to use the given labeled data further to train a model, thus missing opportunities for improving performance. Additionally, since the LFs are generated automatically, they are likely to be noisy, and naively aggregating these LFs can lead to suboptimal results. In this work, we propose an LF-based bi-level optimization framework WISDOM to solve these two critical limitations. WISDOM learns a joint model on the (same) labeled dataset used for LF induction along with any unlabeled data in a semi-supervised manner, and more critically, reweighs each LF according to its goodness, influencing its contribution to the semi-supervised loss using a robust bi-level optimization algorithm. We show that WISDOM significantly outperforms prior approaches on several text classification datasets.
null
null
10.18653/v1/2022.findings-acl.94
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,019
inproceedings
bao-etal-2022-multi
Multi-Granularity Semantic Aware Graph Model for Reducing Position Bias in Emotion Cause Pair Extraction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.95/
Bao, Yinan and Ma, Qianwen and Wei, Lingwei and Zhou, Wei and Hu, Songlin
Findings of the Association for Computational Linguistics: ACL 2022
1203--1213
The emotion cause pair extraction (ECPE) task aims to extract emotions and causes as pairs from documents. We observe that the relative distance distribution of emotions and causes is extremely imbalanced in the typical ECPE dataset. Existing methods have set a fixed size window to capture relations between neighboring clauses. However, they neglect the effective semantic connections between distant clauses, leading to poor generalization ability towards position-insensitive data. To alleviate the problem, we propose a novel $\textbf{M}$ulti-$\textbf{G}$ranularity $\textbf{S}$emantic $\textbf{A}$ware $\textbf{G}$raph model (MGSAG) to incorporate fine-grained and coarse-grained semantic features jointly, without regard to distance limitation. In particular, we first explore semantic dependencies between clauses and keywords extracted from the document that convey fine-grained semantic features, obtaining keywords enhanced clause representations. Besides, a clause graph is also established to model coarse-grained semantic relations between clauses. Experimental results indicate that MGSAG surpasses the existing state-of-the-art ECPE models. Especially, MGSAG outperforms other models significantly in the condition of position-insensitive data.
null
null
10.18653/v1/2022.findings-acl.95
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,020
inproceedings
li-etal-2022-cross
Cross-lingual Inference with A {C}hinese Entailment Graph
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.96/
Li, Tianyi and Weber, Sabine and Hosseini, Mohammad Javad and Guillou, Liane and Steedman, Mark
Findings of the Association for Computational Linguistics: ACL 2022
1214--1233
Predicate entailment detection is a crucial task for question-answering from text, where previous work has explored unsupervised learning of entailment graphs from typed open relation triples. In this paper, we present the first pipeline for building Chinese entailment graphs, which involves a novel high-recall open relation extraction (ORE) method and the first Chinese fine-grained entity typing dataset under the FIGER type ontology. Through experiments on the Levy-Holt dataset, we verify the strength of our Chinese entailment graph, and reveal the cross-lingual complementarity: on the parallel Levy-Holt dataset, an ensemble of Chinese and English entailment graphs outperforms both monolingual graphs, and raises unsupervised SOTA by 4.7 AUC points.
null
null
10.18653/v1/2022.findings-acl.96
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,021
inproceedings
xie-etal-2022-multi
Multi-task Learning for Paraphrase Generation With Keyword and Part-of-Speech Reconstruction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.97/
Xie, Xuhang and Lu, Xuesong and Chen, Bei
Findings of the Association for Computational Linguistics: ACL 2022
1234--1243
Paraphrase generation using deep learning has been a research hotspot of natural language processing in the past few years. While previous studies tackle the problem from different aspects, the essence of paraphrase generation is to retain the key semantics of the source sentence and rewrite the rest of the content. Inspired by this observation, we propose a novel two-stage model, PGKPR, for paraphrase generation with keyword and part-of-speech reconstruction. The rationale is to capture simultaneously the possible keywords of a source sentence and the relations between them to facilitate the rewriting. In the first stage, we identify the possible keywords using a prediction attribution technique, where the words obtaining higher attribution scores are more likely to be the keywords. In the second stage, we train a transformer-based model via multi-task learning for paraphrase generation. The novel learning task is the reconstruction of the keywords and part-of-speech tags, respectively, from a perturbed sequence of the source sentence. The learned encodings are then decoded to generate the paraphrase. We conduct the experiments on two commonly-used datasets, and demonstrate the superior performance of PGKPR over comparative models on multiple evaluation metrics.
null
null
10.18653/v1/2022.findings-acl.97
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,022
inproceedings
zhu-etal-2022-mdcspell
{MDCS}pell: A Multi-task Detector-Corrector Framework for {C}hinese Spelling Correction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.98/
Zhu, Chenxi and Ying, Ziqiang and Zhang, Boyu and Mao, Feng
Findings of the Association for Computational Linguistics: ACL 2022
1244--1253
Chinese Spelling Correction (CSC) is a task to detect and correct misspelled characters in Chinese texts. CSC is challenging since many Chinese characters are visually or phonologically similar but with quite different semantic meanings. Many recent works use BERT-based language models to directly correct each character of the input sentence. However, these methods can be sub-optimal since they correct every character of the sentence only by the context which is easily negatively affected by the misspelled characters. Some other works propose to use an error detector to guide the correction by masking the detected errors. Nevertheless, these methods dampen the visual or phonological features from the misspelled characters which could be critical for correction. In this work, we propose a novel general detector-corrector multi-task framework where the corrector uses BERT to capture the visual and phonological features from each character in the raw sentence and uses a late fusion strategy to fuse the hidden states of the corrector with that of the detector to minimize the negative impact from the misspelled characters. Comprehensive experiments on benchmarks demonstrate that our proposed method can significantly outperform the state-of-the-art methods in the CSC task.
null
null
10.18653/v1/2022.findings-acl.98
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,023
inproceedings
hui-etal-2022-s2sql
{S}$^2${SQL}: Injecting Syntax to Question-Schema Interaction Graph Encoder for Text-to-{SQL} Parsers
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.99/
Hui, Binyuan and Geng, Ruiying and Wang, Lihan and Qin, Bowen and Li, Yanyang and Li, Bowen and Sun, Jian and Li, Yongbin
Findings of the Association for Computational Linguistics: ACL 2022
1254--1262
The task of converting a natural language question into an executable SQL query, known as text-to-SQL, is an important branch of semantic parsing. The state-of-the-art graph-based encoder has been successfully used in this task but does not model the question syntax well. In this paper, we propose S$^2$SQL, injecting Syntax to question-Schema graph encoder for Text-to-SQL parsers, which effectively leverages the syntactic dependency information of questions in text-to-SQL to improve the performance. We also employ the decoupling constraint to induce diverse relational edge embedding, which further improves the network's performance. Experiments on the Spider and robustness setting Spider-Syn demonstrate that the proposed approach outperforms all existing methods when pre-training models are used, resulting in performance that ranks first on the Spider leaderboard.
null
null
10.18653/v1/2022.findings-acl.99
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,024
inproceedings
felice-etal-2022-constructing
Constructing Open Cloze Tests Using Generation and Discrimination Capabilities of Transformers
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.100/
Felice, Mariano and Taslimipoor, Shiva and Buttery, Paula
Findings of the Association for Computational Linguistics: ACL 2022
1263--1273
This paper presents the first multi-objective transformer model for generating open cloze tests that exploits generation and discrimination capabilities to improve performance. Our model is further enhanced by tweaking its loss function and applying a post-processing re-ranking algorithm that improves overall test structure. Experiments using automatic and human evaluation show that our approach can achieve up to 82{\%} accuracy according to experts, outperforming previous work and baselines. We also release a collection of high-quality open cloze tests along with sample system output and human annotations that can serve as a future benchmark.
null
null
10.18653/v1/2022.findings-acl.100
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,025
inproceedings
maveli-cohen-2022-co
{C}o-training an {U}nsupervised {C}onstituency {P}arser with {W}eak {S}upervision
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.101/
Maveli, Nickil and Cohen, Shay
Findings of the Association for Computational Linguistics: ACL 2022
1274--1291
We introduce a method for unsupervised parsing that relies on bootstrapping classifiers to identify if a node dominates a specific span in a sentence. There are two types of classifiers, an inside classifier that acts on a span, and an outside classifier that acts on everything outside of a given span. Through self-training and co-training with the two classifiers, we show that the interplay between them helps improve the accuracy of both, and as a result, effectively parse. A seed bootstrapping technique prepares the data to train these classifiers. Our analyses further validate that such an approach in conjunction with weak supervision using prior branching knowledge of a known language (left/right-branching) and minimal heuristics injects strong inductive bias into the parser, achieving 63.1 F$_1$ on the English (PTB) test set. In addition, we show the effectiveness of our architecture by evaluating on treebanks for Chinese (CTB) and Japanese (KTB) and achieve new state-of-the-art results.
null
null
10.18653/v1/2022.findings-acl.101
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,026
inproceedings
ruan-etal-2022-histruct
{H}i{S}truct+: Improving Extractive Text Summarization with Hierarchical Structure Information
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.102/
Ruan, Qian and Ostendorff, Malte and Rehm, Georg
Findings of the Association for Computational Linguistics: ACL 2022
1292--1308
Transformer-based language models usually treat texts as linear sequences. However, most texts also have an inherent hierarchical structure, i.e., parts of a text can be identified using their position in this hierarchy. In addition, section titles usually indicate the common topic of their respective sentences. We propose a novel approach to formulate, extract, encode and inject hierarchical structure information explicitly into an extractive summarization model based on a pre-trained, encoder-only Transformer language model (HiStruct+ model), which improves SOTA ROUGEs for extractive summarization on PubMed and arXiv substantially. Using various experimental settings on three datasets (i.e., CNN/DailyMail, PubMed and arXiv), our HiStruct+ model outperforms a strong baseline collectively, which differs from our model only in that the hierarchical structure information is not injected. It is also observed that the more conspicuous hierarchical structure the dataset has, the larger improvements our method gains. The ablation study demonstrates that the hierarchical position information is the main contributor to our model's SOTA performance.
null
null
10.18653/v1/2022.findings-acl.102
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,027
inproceedings
rajaee-pilehvar-2022-isotropy
An Isotropy Analysis in the Multilingual {BERT} Embedding Space
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.103/
Rajaee, Sara and Pilehvar, Mohammad Taher
Findings of the Association for Computational Linguistics: ACL 2022
1309--1316
Several studies have explored various advantages of multilingual pre-trained models (such as multilingual BERT) in capturing shared linguistic knowledge. However, less attention has been paid to their limitations. In this paper, we investigate the multilingual BERT for two known issues of the monolingual models: anisotropic embedding space and outlier dimensions. We show that, unlike its monolingual counterpart, the multilingual BERT model exhibits no outlier dimension in its representations while it has a highly anisotropic space. There are a few dimensions in the monolingual BERT with high contributions to the anisotropic distribution. However, we observe no such dimensions in the multilingual BERT. Furthermore, our experimental results demonstrate that increasing the isotropy of multilingual space can significantly improve its representation power and performance, similarly to what had been observed for monolingual CWRs on semantic similarity tasks. Our analysis indicates that, despite having different degenerated directions, the embedding spaces in various languages tend to be partially similar with respect to their structures.
null
null
10.18653/v1/2022.findings-acl.103
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,028
inproceedings
liu-etal-2022-multi
Multi-Stage Prompting for Knowledgeable Dialogue Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.104/
Liu, Zihan and Patwary, Mostofa and Prenger, Ryan and Prabhumoye, Shrimai and Ping, Wei and Shoeybi, Mohammad and Catanzaro, Bryan
Findings of the Association for Computational Linguistics: ACL 2022
1317--1337
Existing knowledge-grounded dialogue systems typically use finetuned versions of a pretrained language model (LM) and large-scale knowledge bases. These models typically fail to generalize on topics outside of the knowledge base, and require maintaining separate potentially large checkpoints each time finetuning is needed. In this paper, we aim to address these limitations by leveraging the inherent knowledge stored in the pretrained LM as well as its powerful generation ability. We propose a multi-stage prompting approach to generate knowledgeable responses from a single pretrained LM. We first prompt the LM to generate knowledge based on the dialogue context. Then, we further prompt it to generate responses based on the dialogue context and the previously generated knowledge. Results show that our knowledge generator outperforms the state-of-the-art retrieval-based model by 5.8{\%} when combining knowledge relevance and correctness. In addition, our multi-stage prompting outperforms the finetuning-based dialogue model in terms of response knowledgeability and engagement by up to 10{\%} and 5{\%}, respectively. Furthermore, we scale our model up to 530 billion parameters and demonstrate that larger LMs improve the generation correctness score by up to 10{\%}, and response relevance, knowledgeability and engagement by up to 10{\%}. Our code is available at: \url{https://github.com/NVIDIA/Megatron-LM}.
null
null
10.18653/v1/2022.findings-acl.104
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,029
inproceedings
qi-etal-2022-dureadervis
$\textrm{DuReader}_{\textrm{vis}}$: A {C}hinese Dataset for Open-domain Document Visual Question Answering
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.105/
Qi, Le and Lv, Shangwen and Li, Hongyu and Liu, Jing and Zhang, Yu and She, Qiaoqiao and Wu, Hua and Wang, Haifeng and Liu, Ting
Findings of the Association for Computational Linguistics: ACL 2022
1338--1351
Open-domain question answering has been used in a wide range of applications, such as web search and enterprise search, which usually takes clean texts extracted from various formats of documents (e.g., web pages, PDFs, or Word documents) as the information source. However, designing different text extraction approaches is time-consuming and not scalable. In order to reduce human cost and improve the scalability of QA systems, we propose and study an $\textbf{Open-domain}$ $\textbf{Doc}$ument $\textbf{V}$isual $\textbf{Q}$uestion $\textbf{A}$nswering (Open-domain DocVQA) task, which requires answering questions based on a collection of document images directly instead of only document texts, utilizing layouts and visual features additionally. Towards this end, we introduce the first Chinese Open-domain DocVQA dataset called $\textrm{DuReader}_{\textrm{vis}}$, containing about 15K question-answering pairs and 158K document images from the Baidu search engine. There are three main challenges in $\textrm{DuReader}_{\textrm{vis}}$: (1) long document understanding, (2) noisy texts, and (3) multi-span answer extraction. The extensive experiments demonstrate that the dataset is challenging. Additionally, we propose a simple approach that incorporates the layout and visual features, and the experimental results show the effectiveness of the proposed approach. The dataset and code will be publicly available at \url{https://github.com/baidu/DuReader/tree/master/DuReader-vis}.
null
null
10.18653/v1/2022.findings-acl.105
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,030
inproceedings
mueller-etal-2022-coloring
Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.106/
Mueller, Aaron and Frank, Robert and Linzen, Tal and Wang, Luheng and Schuster, Sebastian
Findings of the Association for Computational Linguistics: ACL 2022
1352--1368
Relations between words are governed by hierarchical structure rather than linear ordering. Sequence-to-sequence (seq2seq) models, despite their success in downstream NLP applications, often fail to generalize in a hierarchy-sensitive manner when performing syntactic transformations{---}for example, transforming declarative sentences into questions. However, syntactic evaluations of seq2seq models have only observed models that were not pre-trained on natural language data before being trained to perform syntactic transformations, in spite of the fact that pre-training has been found to induce hierarchical linguistic generalizations in language models; in other words, the syntactic capabilities of seq2seq models may have been greatly understated. We address this gap using the pre-trained seq2seq models T5 and BART, as well as their multilingual variants mT5 and mBART. We evaluate whether they generalize hierarchically on two transformations in two languages: question formation and passivization in English and German. We find that pre-trained seq2seq models generalize hierarchically when performing syntactic transformations, whereas models trained from scratch on syntactic transformations do not. This result presents evidence for the learnability of hierarchical syntactic information from non-annotated natural language text while also demonstrating that seq2seq models are capable of syntactic generalization, though only after exposure to much more language data than human learners receive.
null
null
10.18653/v1/2022.findings-acl.106
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,031
inproceedings
li-etal-2022-c3kg
{C}$^3${KG}: A {C}hinese Commonsense Conversation Knowledge Graph
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.107/
Li, Dawei and Li, Yanran and Zhang, Jiayi and Li, Ke and Wei, Chen and Cui, Jianwei and Wang, Bin
Findings of the Association for Computational Linguistics: ACL 2022
1369--1383
Existing commonsense knowledge bases often organize tuples in an isolated manner, which is deficient for commonsense conversational models to plan the next steps. To fill the gap, we curate a large-scale multi-turn human-written conversation corpus, and create the first Chinese commonsense conversation knowledge graph which incorporates both social commonsense knowledge and dialog flow information. To show the potential of our graph, we develop a graph-conversation matching approach, and benchmark two graph-grounded conversational tasks. All the resources in this work will be released to foster future research.
null
null
10.18653/v1/2022.findings-acl.107
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,032
inproceedings
imani-etal-2022-graph
Graph Neural Networks for Multiparallel Word Alignment
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.108/
Imani, Ayyoob and Senel, L{\"u}tfi Kerem and Jalili Sabet, Masoud and Yvon, Fran{\c{c}}ois and Schuetze, Hinrich
Findings of the Association for Computational Linguistics: ACL 2022
1384--1396
After a period of decrease, interest in word alignments is increasing again for their usefulness in domains such as typological research, cross-lingual annotation projection and machine translation. Generally, alignment algorithms only use bitext and do not make use of the fact that many parallel corpora are multiparallel. Here, we compute high-quality word alignments between multiple language pairs by considering all language pairs together. First, we create a multiparallel word alignment graph, joining all bilingual word alignment pairs in one graph. Next, we use graph neural networks (GNNs) to exploit the graph structure. Our GNN approach (i) utilizes information about the meaning, position and language of the input words, (ii) incorporates information from multiple parallel sentences, (iii) adds and removes edges from the initial alignments, and (iv) yields a prediction model that can generalize beyond the training sentences. We show that community detection algorithms can provide valuable information for multiparallel word alignment. Our method outperforms previous work on three word alignment datasets and on a downstream task.
null
null
10.18653/v1/2022.findings-acl.108
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,033
inproceedings
wu-etal-2022-sentiment
Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with {ASR} Errors
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.109/
Wu, Yang and Zhao, Yanyan and Yang, Hao and Chen, Song and Qin, Bing and Cao, Xiaohuan and Zhao, Wenting
Findings of the Association for Computational Linguistics: ACL 2022
1397--1406
Multimodal sentiment analysis has attracted increasing attention and many models have been proposed. However, the performance of the state-of-the-art models decreases sharply when they are deployed in the real world. We find that the main reason is that real-world applications can only access the text outputs from the automatic speech recognition (ASR) models, which may contain errors because of the limitations of model capacity. Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment analysis models. To address this problem, we propose the sentiment word aware multimodal refinement model (SWRM), which can dynamically refine the erroneous sentiment words by leveraging multimodal sentiment clues. Specifically, we first use the sentiment word position detection module to obtain the most probable position of the sentiment word in the text and then utilize the multimodal sentiment word refinement module to dynamically refine the sentiment word embeddings. The refined embeddings are taken as the textual inputs of the multimodal feature fusion module to predict the sentiment labels. We conduct extensive experiments on real-world datasets, including MOSI-Speechbrain, MOSI-IBM, and MOSI-iFlytek, and the results demonstrate the effectiveness of our model, which surpasses the current state-of-the-art models on all three datasets. Furthermore, our approach can be easily adapted to other multimodal feature fusion models.
null
null
10.18653/v1/2022.findings-acl.109
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,034
inproceedings
wang-etal-2022-novel
A Novel Framework Based on Medical Concept Driven Attention for Explainable Medical Code Prediction via External Knowledge
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.110/
Wang, Tao and Zhang, Linhai and Ye, Chenchen and Liu, Junxi and Zhou, Deyu
Findings of the Association for Computational Linguistics: ACL 2022
1407--1416
Medical code prediction from clinical notes aims at automatically associating medical codes with the clinical notes. The rare code problem, i.e., medical codes with low occurrences, is prominent in medical code prediction. Recent studies employ deep neural networks and external knowledge to tackle it. However, such approaches lack interpretability, which is a vital issue in medical applications. Moreover, due to the lengthy and noisy clinical notes, such approaches fail to achieve satisfactory results. Therefore, in this paper, we propose a novel framework based on medical concept driven attention to incorporate external knowledge for explainable medical code prediction. Specifically, both the clinical notes and Wikipedia documents are aligned into a topic space to extract medical concepts using topic modeling. Then, the medical concept-driven attention mechanism is applied to uncover the medical code-related concepts, which provide explanations for medical code prediction. Experimental results on the benchmark dataset show the superiority of the proposed framework over several state-of-the-art baselines.
null
null
10.18653/v1/2022.findings-acl.110
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,035
inproceedings
fu-etal-2022-effective
Effective Unsupervised Constrained Text Generation based on Perturbed Masking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.111/
Fu, Yingwen and Ou, Wenjie and Yu, Zhou and Lin, Yue
Findings of the Association for Computational Linguistics: ACL 2022
1417--1427
Unsupervised constrained text generation aims to generate text under a given set of constraints without any supervised data. Current state-of-the-art methods stochastically sample edit positions and actions, which may cause unnecessary search steps. In this paper, we propose PMCTG to improve effectiveness by searching for the best edit position and action in each step. Specifically, PMCTG extends the perturbed masking technique to effectively search for the most incongruent token to edit. It then introduces four multi-aspect scoring functions to select the edit action to further reduce search difficulty. Since PMCTG does not require supervised data, it can be applied to different generation tasks. We show that under the unsupervised setting, PMCTG achieves new state-of-the-art results in two representative tasks, namely keywords-to-sentence generation and paraphrasing.
null
null
10.18653/v1/2022.findings-acl.111
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,036
inproceedings
yang-tu-2022-combining
Combining (Second-Order) Graph-Based and Headed-Span-Based Projective Dependency Parsing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.112/
Yang, Songlin and Tu, Kewei
Findings of the Association for Computational Linguistics: ACL 2022
1428--1434
Graph-based methods, which decompose the score of a dependency tree into scores of dependency arcs, have been popular in dependency parsing for decades. Recently, (CITATION) propose a headed-span-based method that decomposes the score of a dependency tree into scores of headed spans. They show improvement over first-order graph-based methods. However, their method does not score dependency arcs at all, and dependency arcs are implicitly induced by their cubic-time algorithm, which is possibly sub-optimal since modeling dependency arcs is intuitively useful. In this work, we aim to combine graph-based and headed-span-based methods, incorporating both arc scores and headed span scores into our model. First, we show a direct way to combine them with $O(n^4)$ parsing complexity. To decrease complexity, inspired by the classical head-splitting trick, we show two $O(n^3)$ dynamic programming algorithms to combine first- and second-order graph-based and headed-span-based methods. Our experiments on PTB, CTB, and UD show that combining first-order graph-based and headed-span-based methods is effective. We also confirm the effectiveness of second-order graph-based parsing in the deep learning age; however, we observe marginal or no improvement when combining second-order graph-based and headed-span-based methods.
null
null
10.18653/v1/2022.findings-acl.112
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,037
inproceedings
weller-etal-2022-end
End-to-End Speech Translation for Code Switched Speech
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.113/
Weller, Orion and Sperber, Matthias and Pires, Telmo and Setiawan, Hendra and Gollan, Christian and Telaar, Dominic and Paulik, Matthias
Findings of the Association for Computational Linguistics: ACL 2022
1435--1448
Code switching (CS) refers to the phenomenon of interchangeably using words and phrases from different languages. CS can pose significant accuracy challenges to NLP, due to the often monolingual nature of the underlying systems. In this work, we focus on CS in the context of English/Spanish conversations for the task of speech translation (ST), generating and evaluating both transcript and translation. To evaluate model performance on this task, we create a novel ST corpus derived from existing public data sets. We explore various ST architectures across two dimensions: cascaded (transcribe then translate) vs end-to-end (jointly transcribe and translate) and unidirectional (source -{\ensuremath{>}} target) vs bidirectional (source {\ensuremath{<}}-{\ensuremath{>}} target). We show that our ST architectures, and especially our bidirectional end-to-end architecture, perform well on CS speech, even when no CS training data is used.
null
null
10.18653/v1/2022.findings-acl.113
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,038
inproceedings
sun-etal-2022-transformational
A Transformational Biencoder with In-Domain Negative Sampling for Zero-Shot Entity Linking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.114/
Sun, Kai and Zhang, Richong and Mensah, Samuel and Mao, Yongyi and Liu, Xudong
Findings of the Association for Computational Linguistics: ACL 2022
1449--1458
Recent interest in entity linking has focused on the zero-shot scenario, where at test time the entity mention to be labelled is never seen during training, or may belong to a different domain from the source domain. Current work leverages pre-trained BERT with the implicit assumption that it bridges the gap between the source and target domain distributions. However, fine-tuned BERT underperforms considerably in the zero-shot setting when applied to a different domain. We solve this problem by proposing a Transformational Biencoder that incorporates a transformation into BERT to perform a zero-shot transfer from the source domain during training. Like previous work, we rely on negative entities to encourage our model to discriminate the golden entities during training. To generate these negative entities, we propose a simple but effective strategy that takes the domain of the golden entity into perspective. Our experimental results on the benchmark dataset Zeshel show the effectiveness of our approach and achieve a new state-of-the-art.
null
null
10.18653/v1/2022.findings-acl.114
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,039
inproceedings
gong-etal-2022-finding
Finding the Dominant Winning Ticket in Pre-Trained Language Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.115/
Gong, Zhuocheng and He, Di and Shen, Yelong and Liu, Tie-Yan and Chen, Weizhu and Zhao, Dongyan and Wen, Ji-Rong and Yan, Rui
Findings of the Association for Computational Linguistics: ACL 2022
1459--1472
The Lottery Ticket Hypothesis suggests that for any over-parameterized model, a small subnetwork exists to achieve competitive performance compared to the backbone architecture. In this paper, we study whether there is a winning lottery ticket for pre-trained language models, which would allow practitioners to fine-tune only the parameters in the ticket while still achieving good downstream performance. To achieve this, we regularize the fine-tuning process with L1 distance and explore the subnetwork structure (what we refer to as the {\textquotedblleft}dominant winning ticket{\textquotedblright}). Empirically, we show that (a) the dominant winning ticket can achieve performance that is comparable with that of the full-parameter model, (b) the dominant winning ticket is transferable across different tasks, and (c) the dominant winning ticket has a natural structure within each parameter matrix. Strikingly, we find that a dominant winning ticket that takes up 0.05{\%} of the parameters can already achieve satisfactory performance, indicating that the PLM is significantly reducible during fine-tuning.
null
null
10.18653/v1/2022.findings-acl.115
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,040
inproceedings
buaphet-etal-2022-thai
{T}hai Nested Named Entity Recognition Corpus
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.116/
Buaphet, Weerayut and Udomcharoenchaikit, Can and Limkonchotiwat, Peerat and Rutherford, Attapol and Nutanong, Sarana
Findings of the Association for Computational Linguistics: ACL 2022
1473--1486
This paper presents the first Thai Nested Named Entity Recognition (N-NER) dataset. Thai N-NER consists of 264,798 mentions, 104 classes, and a maximum depth of 8 layers obtained from 4,894 documents in the domains of news articles and restaurant reviews. Our work, to the best of our knowledge, presents the largest non-English N-NER dataset and the first non-English one with fine-grained classes. To understand the new challenges our proposed dataset brings to the field, we conduct an experimental study on (i) cutting-edge N-NER models with state-of-the-art accuracy in English and (ii) baseline methods based on well-known language model architectures. From the experimental results, we obtained two key findings. First, all models produced poor F1 scores in the tail region of the class distribution. Second, these models provide little or no performance improvement over the baseline methods on our Thai dataset. These findings suggest that further investigation is required to build a multilingual N-NER solution that works well across different languages.
null
null
10.18653/v1/2022.findings-acl.116
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,041
inproceedings
seonwoo-etal-2022-two
Two-Step Question Retrieval for Open-Domain {QA}
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.117/
Seonwoo, Yeon and Son, Juhee and Jin, Jiho and Lee, Sang-Woo and Kim, Ji-Hoon and Ha, Jung-Woo and Oh, Alice
Findings of the Association for Computational Linguistics: ACL 2022
1487--1492
The retriever-reader pipeline has shown promising performance in open-domain QA but suffers from a very slow inference speed. Recently proposed question retrieval models tackle this problem by indexing question-answer pairs and searching for similar questions. These models have shown a significant increase in inference speed, but at the cost of lower QA performance compared to the retriever-reader models. This paper proposes a two-step question retrieval model, SQuID (Sequential Question-Indexed Dense retrieval) and distant supervision for training. SQuID uses two bi-encoders for question retrieval. The first-step retriever selects top-k similar questions, and the second-step retriever finds the most similar question from the top-k questions. We evaluate the performance and the computational efficiency of SQuID. The results show that SQuID significantly increases the performance of existing question retrieval models with a negligible loss on inference speed.
null
null
10.18653/v1/2022.findings-acl.117
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,042
inproceedings
gokhale-etal-2022-semantically
Semantically Distributed Robust Optimization for Vision-and-Language Inference
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.118/
Gokhale, Tejas and Chaudhary, Abhishek and Banerjee, Pratyay and Baral, Chitta and Yang, Yezhou
Findings of the Association for Computational Linguistics: ACL 2022
1493--1513
Analysis of vision-and-language models has revealed their brittleness under linguistic phenomena such as paraphrasing, negation, textual entailment, and word substitutions with synonyms or antonyms. While data augmentation techniques have been designed to mitigate these failure modes, methods that can integrate this knowledge into the training pipeline remain under-explored. In this paper, we present \textbf{SDRO}, a model-agnostic method that utilizes a set of linguistic transformations in a distributed robust optimization setting, along with an ensembling technique to leverage these transformations during inference. Experiments on benchmark datasets with images (NLVR$^2$) and video (VIOLIN) demonstrate performance improvements as well as robustness to adversarial attacks. Experiments on binary VQA explore the generalizability of this method to other V{\&}L tasks.
null
null
10.18653/v1/2022.findings-acl.118
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,043
inproceedings
jung-etal-2022-learning
Learning from Missing Relations: Contrastive Learning with Commonsense Knowledge Graphs for Commonsense Inference
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.119/
Jung, Yong-Ho and Park, Jun-Hyung and Choi, Joon-Young and Lee, Mingyu and Kim, Junho and Kim, Kang-Min and Lee, SangKeun
Findings of the Association for Computational Linguistics: ACL 2022
1514--1523
Commonsense inference poses a unique challenge to reason and generate the physical, social, and causal conditions of a given event. Existing approaches to commonsense inference utilize commonsense transformers, which are large-scale language models that learn commonsense knowledge graphs. However, they suffer from a lack of coverage and expressive diversity of the graphs, resulting in a degradation of the representation quality. In this paper, we focus on addressing missing relations in commonsense knowledge graphs, and propose a novel contrastive learning framework called SOLAR. Our framework contrasts sets of semantically similar and dissimilar events, learning richer inferential knowledge compared to existing approaches. Empirical results demonstrate the efficacy of SOLAR in commonsense inference of diverse commonsense knowledge graphs. Specifically, SOLAR outperforms the state-of-the-art commonsense transformer on commonsense inference with ConceptNet by 1.84{\%} on average among 8 automatic evaluation metrics. In-depth analysis of SOLAR sheds light on the effects of the missing relations utilized in learning commonsense knowledge graphs.
null
null
10.18653/v1/2022.findings-acl.119
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,044
inproceedings
wang-etal-2022-capture
Capture Human Disagreement Distributions by Calibrated Networks for Natural Language Inference
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.120/
Wang, Yuxia and Wang, Minghan and Chen, Yimeng and Tao, Shimin and Guo, Jiaxin and Su, Chang and Zhang, Min and Yang, Hao
Findings of the Association for Computational Linguistics: ACL 2022
1524--1535
Natural Language Inference (NLI) datasets contain examples with highly ambiguous labels due to the subjectivity of the task. Several recent efforts have been made to acknowledge and embrace the existence of ambiguity, and to explore how to capture the human disagreement distribution. In contrast with directly learning from gold ambiguity labels, which relies on special resources, we argue that the model has naturally captured the human ambiguity distribution as long as it is calibrated, i.e. the predictive probability can reflect the true correctness likelihood. Our experiments show that when the model is well-calibrated, either by label smoothing or temperature scaling, it can obtain performance competitive with prior work, on both divergence scores between the predictive probability and the true human opinion distribution, and on accuracy. This reveals that the overhead of collecting gold ambiguity labels can be cut by more broadly solving how to calibrate the NLI network.
null
null
10.18653/v1/2022.findings-acl.120
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,045
inproceedings
andersen-maalej-2022-efficient
Efficient, Uncertainty-based Moderation of Neural Networks Text Classifiers
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.121/
Andersen, Jakob Smedegaard and Maalej, Walid
Findings of the Association for Computational Linguistics: ACL 2022
1536--1546
To maximize the accuracy and increase the overall acceptance of text classifiers, we propose a framework for the efficient, in-operation moderation of classifiers' output. Our framework focuses on use cases in which F1-scores of modern Neural Network classifiers (ca. 90{\%}) are still inapplicable in practice. We suggest a semi-automated approach that uses prediction uncertainties to pass unconfident, probably incorrect classifications to human moderators. To minimize the workload, we limit the human-moderated data to the point where the accuracy gains saturate and further human effort does not lead to substantial improvements. A series of benchmarking experiments based on three different datasets and three state-of-the-art classifiers show that our framework can improve the classification F1-scores by 5.1 to 11.2{\%} (up to approx. 98 to 99{\%}), while reducing the moderation load by up to 73.3{\%} compared to random moderation.
null
null
10.18653/v1/2022.findings-acl.121
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,046
inproceedings
akter-etal-2022-revisiting
Revisiting Automatic Evaluation of Extractive Summarization Task: Can We Do Better than {ROUGE}?
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.122/
Akter, Mousumi and Bansal, Naman and Karmaker, Shubhra Kanti
Findings of the Association for Computational Linguistics: ACL 2022
1547--1560
It has been the norm for a long time to evaluate automated summarization tasks using the popular ROUGE metric. Although several studies in the past have highlighted the limitations of ROUGE, researchers have struggled to reach a consensus on a better alternative until today. One major limitation of the traditional ROUGE metric is the lack of semantic understanding (relies on direct overlap of n-grams). In this paper, we exclusively focus on the extractive summarization task and propose a semantic-aware nCG (normalized cumulative gain)-based evaluation metric (called Sem-nCG) for evaluating this task. One fundamental contribution of the paper is that it demonstrates how we can generate more reliable semantic-aware ground truths for evaluating extractive summarization tasks without any additional human intervention. To the best of our knowledge, this work is the first of its kind. We have conducted extensive experiments with this new metric using the widely used CNN/DailyMail dataset. Experimental results show that the new Sem-nCG metric is indeed semantic-aware, shows higher correlation with human judgement (more reliable) and yields a large number of disagreements with the original ROUGE metric (suggesting that ROUGE often leads to inaccurate conclusions also verified by humans).
null
null
10.18653/v1/2022.findings-acl.122
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,047
inproceedings
simig-etal-2022-open
Open Vocabulary Extreme Classification Using Generative Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.123/
Simig, Daniel and Petroni, Fabio and Yanki, Pouya and Popat, Kashyap and Du, Christina and Riedel, Sebastian and Yazdani, Majid
Findings of the Association for Computational Linguistics: ACL 2022
1561--1583
The extreme multi-label classification (XMC) task aims at tagging content with a subset of labels from an extremely large label set. The label vocabulary is typically defined in advance by domain experts and assumed to capture all necessary tags. However, in real-world scenarios this label set, although large, is often incomplete and experts frequently need to refine it. To develop systems that simplify this process, we introduce the task of open vocabulary XMC (OXMC): given a piece of content, predict a set of labels, some of which may be outside of the known tag set. Hence, in addition to not having training data for some labels{--}as is the case in zero-shot classification{--}models need to invent some labels on-the-fly. We propose GROOV, a fine-tuned seq2seq model for OXMC that generates the set of labels as a flat sequence and is trained using a novel loss independent of predicted label order. We show the efficacy of the approach, experimenting with popular XMC datasets for which GROOV is able to predict meaningful labels outside the given vocabulary while performing on par with state-of-the-art solutions for known labels.
null
null
10.18653/v1/2022.findings-acl.123
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,048
inproceedings
ma-etal-2022-decomposed
Decomposed Meta-Learning for Few-Shot Named Entity Recognition
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.124/
Ma, Tingting and Jiang, Huiqiang and Wu, Qianhui and Zhao, Tiejun and Lin, Chin-Yew
Findings of the Association for Computational Linguistics: ACL 2022
1584--1596
Few-shot named entity recognition (NER) systems aim at recognizing novel-class named entities based on only a few labeled examples. In this paper, we present a decomposed meta-learning approach which addresses the problem of few-shot NER by sequentially tackling few-shot span detection and few-shot entity typing using meta-learning. In particular, we take the few-shot span detection as a sequence labeling problem and train the span detector by introducing the model-agnostic meta-learning (MAML) algorithm to find a good model parameter initialization that could fast adapt to new entity classes. For few-shot entity typing, we propose MAML-ProtoNet, i.e., MAML-enhanced prototypical networks to find a good embedding space that can better distinguish text span representations from different entity classes. Extensive experiments on various benchmarks show that our approach achieves superior performance over prior methods.
null
null
10.18653/v1/2022.findings-acl.124
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,049
inproceedings
tan-etal-2022-tegtok
{T}eg{T}ok: Augmenting Text Generation via Task-specific and Open-world Knowledge
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.125/
Tan, Chao-Hong and Gu, Jia-Chen and Tao, Chongyang and Ling, Zhen-Hua and Xu, Can and Hu, Huang and Geng, Xiubo and Jiang, Daxin
Findings of the Association for Computational Linguistics: ACL 2022
1597--1609
Generating natural and informative texts has been a long-standing problem in NLP. Much effort has been dedicated to incorporating pre-trained language models (PLMs) with various open-world knowledge, such as knowledge graphs or wiki pages. However, their ability to access and manipulate task-specific knowledge is still limited on downstream tasks, as this type of knowledge is usually not well covered in PLMs and is hard to acquire. To address the problem, we propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework. Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively on the basis of PLMs. With the help of these two types of knowledge, our model can learn what and how to generate. Experiments on two text generation tasks, dialogue generation and question generation, on two datasets show that our method achieves better performance than various baseline models.
null
null
10.18653/v1/2022.findings-acl.125
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,050
inproceedings
li-etal-2022-emocaps
{E}mo{C}aps: Emotion Capsule based Model for Conversational Emotion Recognition
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.126/
Li, Zaijing and Tang, Fengxiao and Zhao, Ming and Zhu, Yusen
Findings of the Association for Computational Linguistics: ACL 2022
1610--1618
Emotion recognition in conversation (ERC) aims to analyze the speaker`s state and identify their emotion in the conversation. Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency. In order to extract multi-modal information and the emotional tendency of the utterance effectively, we propose a new structure named Emoformer to extract multi-modal emotion vectors from different modalities and fuse them with the sentence vector to form an emotion capsule. Furthermore, we design an end-to-end ERC model called EmoCaps, which extracts emotion vectors through the Emoformer structure and obtains the emotion classification results from a context analysis model. Through experiments on two benchmark datasets, our model shows better performance than the existing state-of-the-art models.
null
null
10.18653/v1/2022.findings-acl.126
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,051
inproceedings
wang-etal-2022-logic
Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.127/
Wang, Siyuan and Zhong, Wanjun and Tang, Duyu and Wei, Zhongyu and Fan, Zhihao and Jiang, Daxin and Zhou, Ming and Duan, Nan
Findings of the Association for Computational Linguistics: ACL 2022
1619--1629
Logical reasoning of text requires identifying critical logical structures in the text and performing inference over them. Existing methods for logical reasoning mainly focus on contextual semantics of text while struggling to explicitly model the logical inference process. In this paper, we not only put forward a logic-driven context extension framework but also propose a logic-driven data augmentation algorithm. The former follows a three-step reasoning paradigm, and each step is respectively to extract logical expressions as elementary reasoning units, symbolically infer the implicit expressions following equivalence laws and extend the context to validate the options. The latter augments literally similar but logically different instances and incorporates contrastive learning to better capture logical information, especially logical negative and conditional relationships. We conduct experiments on two benchmark datasets, ReClor and LogiQA. The results show that our method achieves state-of-the-art performance on both datasets, and even surpasses human performance on the ReClor dataset.
null
null
10.18653/v1/2022.findings-acl.127
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,052
inproceedings
pouran-ben-veyseh-etal-2022-transfer
Transfer Learning and Prediction Consistency for Detecting Offensive Spans of Text
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.128/
Pouran Ben Veyseh, Amir and Xu, Ning and Tran, Quan and Manjunatha, Varun and Dernoncourt, Franck and Nguyen, Thien
Findings of the Association for Computational Linguistics: ACL 2022
1630--1637
Toxic span detection is the task of recognizing offensive spans in a text snippet. Although there has been prior work on classifying text snippets as offensive or not, the task of recognizing spans responsible for the toxicity of a text is not explored yet. In this work, we introduce a novel multi-task framework for toxic span detection in which the model seeks to simultaneously predict offensive words and opinion phrases to leverage their inter-dependencies and improve the performance. Moreover, we introduce a novel regularization mechanism to encourage the consistency of the model predictions across similar inputs for toxic span detection. Our extensive experiments demonstrate the effectiveness of the proposed model compared to strong baselines.
null
null
10.18653/v1/2022.findings-acl.128
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,053
inproceedings
chen-etal-2022-learning
Learning Reasoning Patterns for Relational Triple Extraction with Mutual Generation of Text and Graph
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.129/
Chen, Yubo and Zhang, Yunqi and Huang, Yongfeng
Findings of the Association for Computational Linguistics: ACL 2022
1638--1647
Relational triple extraction is a critical task for constructing knowledge graphs. Existing methods have focused on learning text patterns from explicit relational mentions. However, they usually ignore relational reasoning patterns and thus fail to extract the implicitly implied triples. Fortunately, the graph structure of a sentence`s relational triples can help find multi-hop reasoning paths. Moreover, the type inference logic through the paths can be captured with the sentence`s supplementary relational expressions that represent the real-world conceptual meanings of the paths' composite relations. In this paper, we propose a unified framework to learn the relational reasoning patterns for this task. To identify multi-hop reasoning paths, we construct a relational graph from the sentence (text-to-graph generation) and apply multi-layer graph convolutions to it. To capture the relation type inference logic of the paths, we propose to understand the unlabeled conceptual expressions by reconstructing the sentence from the relational graph (graph-to-text generation) in a self-supervised manner. Experimental results on several benchmark datasets demonstrate the effectiveness of our method.
null
null
10.18653/v1/2022.findings-acl.129
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,054
inproceedings
pouran-ben-veyseh-etal-2022-document
Document-Level Event Argument Extraction via Optimal Transport
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.130/
Pouran Ben Veyseh, Amir and Nguyen, Minh Van and Dernoncourt, Franck and Min, Bonan and Nguyen, Thien
Findings of the Association for Computational Linguistics: ACL 2022
1648--1658
Event Argument Extraction (EAE) is one of the sub-tasks of event extraction, aiming to recognize the role of each entity mention toward a specific event trigger. Despite the success of prior works in sentence-level EAE, the document-level setting is less explored. In particular, whereas syntactic structures of sentences have been shown to be effective for sentence-level EAE, prior document-level EAE models totally ignore syntactic structures for documents. Hence, in this work, we study the importance of syntactic structures in document-level EAE. Specifically, we propose to employ Optimal Transport (OT) to induce structures of documents based on sentence-level syntactic structures and tailored to the EAE task. Furthermore, we propose a novel regularization technique to explicitly constrain the contributions of unrelated context words in the final prediction for EAE. We perform extensive experiments on the benchmark document-level EAE dataset RAMS that lead to state-of-the-art performance. Moreover, our experiments on the ACE 2005 dataset reveal the effectiveness of the proposed model in sentence-level EAE by establishing new state-of-the-art results.
null
null
10.18653/v1/2022.findings-acl.130
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,055
inproceedings
aksu-etal-2022-n
N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.131/
Aksu, Taha and Liu, Zhengyuan and Kan, Min-Yen and Chen, Nancy
Findings of the Association for Computational Linguistics: ACL 2022
1659--1671
Augmentation of task-oriented dialogues has followed standard methods used for plain-text such as back-translation, word-level manipulation, and paraphrasing despite its richly annotated structure. In this work, we introduce an augmentation framework that utilizes belief state annotations to match turns from various dialogues and form new synthetic dialogues in a bottom-up manner. Unlike other augmentation strategies, it operates with as few as five examples. Our augmentation strategy yields significant improvements when both adapting a DST model to a new domain, and when adapting a language model to the DST task, on evaluations with TRADE and TOD-BERT models. Further analysis shows that our model performs better on seen values during training, and it is also more robust to unseen values. We conclude that exploiting belief state annotations enhances dialogue augmentation and results in improved models in n-shot training scenarios.
null
null
10.18653/v1/2022.findings-acl.131
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,056
inproceedings
tan-etal-2022-document
Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.132/
Tan, Qingyu and He, Ruidan and Bing, Lidong and Ng, Hwee Tou
Findings of the Association for Computational Linguistics: ACL 2022
1672--1681
Document-level Relation Extraction (DocRE) is a more challenging task compared to its sentence-level counterpart. It aims to extract relations from multiple sentences at once. In this paper, we propose a semi-supervised framework for DocRE with three novel components. Firstly, we use an axial attention module for learning the interdependency among entity-pairs, which improves the performance on two-hop relations. Secondly, we propose an adaptive focal loss to tackle the class imbalance problem of DocRE. Lastly, we use knowledge distillation to overcome the differences between human annotated data and distantly supervised data. We conducted experiments on two DocRE datasets. Our model consistently outperforms strong baselines and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign{\_}F1 score on the DocRED leaderboard.
null
null
10.18653/v1/2022.findings-acl.132
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,057
inproceedings
dhuliawala-etal-2022-calibration
Calibration of Machine Reading Systems at Scale
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.133/
Dhuliawala, Shehzaad and Adolphs, Leonard and Das, Rajarshi and Sachan, Mrinmaya
Findings of the Association for Computational Linguistics: ACL 2022
1682--1693
In typical machine learning systems, an estimate of the probability of the prediction is used to assess the system`s confidence in the prediction. This confidence measure is usually uncalibrated; i.e. the system`s confidence in the prediction does not match the true probability of the predicted output. In this paper, we present an investigation into calibrating open-setting machine reading systems such as open-domain question answering and claim verification systems. We show that calibrating such complex systems, which contain discrete retrieval and deep reading components, is challenging and that current calibration techniques fail to scale to these settings. We propose simple extensions to existing calibration approaches that allow us to adapt them to these settings. Our experimental results reveal that the approach works well, and can be useful to selectively predict answers when question answering systems are posed with unanswerable or out-of-the-training-distribution questions.
null
null
10.18653/v1/2022.findings-acl.133
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,058
inproceedings
xu-etal-2022-towards
Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.134/
Xu, Jianhan and Zhang, Cenyuan and Zheng, Xiaoqing and Li, Linyang and Hsieh, Cho-Jui and Chang, Kai-Wei and Huang, Xuanjing
Findings of the Association for Computational Linguistics: ACL 2022
1694--1707
Most of the existing defense methods improve the adversarial robustness by making the models adapt to the training set augmented with some adversarial examples. However, the augmented adversarial examples may not be natural, which might distort the training distribution, resulting in inferior performance both in clean accuracy and adversarial robustness. In this study, we explore the feasibility of introducing a reweighting mechanism to calibrate the training distribution to obtain robust models. We propose to train text classifiers by a sample reweighting method in which the example weights are learned to minimize the loss of a validation set mixed with the clean examples and their adversarial ones in an online learning manner. Through extensive experiments, we show that there exists a reweighting mechanism to make the models more robust against adversarial attacks without the need to craft the adversarial examples for the entire training set.
null
null
10.18653/v1/2022.findings-acl.134
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,059
inproceedings
inoue-etal-2022-morphosyntactic
Morphosyntactic Tagging with Pre-trained Language Models for {A}rabic and its Dialects
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.135/
Inoue, Go and Khalifa, Salam and Habash, Nizar
Findings of the Association for Computational Linguistics: ACL 2022
1708--1719
We present state-of-the-art results on morphosyntactic tagging across different varieties of Arabic using fine-tuned pre-trained transformer language models. Our models consistently outperform existing systems in Modern Standard Arabic and all the Arabic dialects we study, achieving 2.6{\%} absolute improvement over the previous state-of-the-art in Modern Standard Arabic, 2.8{\%} in Gulf, 1.6{\%} in Egyptian, and 8.3{\%} in Levantine. We explore different training setups for fine-tuning pre-trained transformer language models, including training data size, the use of external linguistic resources, and the use of annotated data from other dialects in a low-resource scenario. Our results show that strategic fine-tuning using datasets from other high-resource dialects is beneficial for a low-resource dialect. Additionally, we show that high-quality morphological analyzers as external linguistic resources are beneficial especially in low-resource settings.
null
null
10.18653/v1/2022.findings-acl.135
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,060
inproceedings
li-etal-2022-pre
How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.136/
Li, Shaobo and Li, Xiaoguang and Shang, Lifeng and Dong, Zhenhua and Sun, Chengjie and Liu, Bingquan and Ji, Zhenzhou and Jiang, Xin and Liu, Qun
Findings of the Association for Computational Linguistics: ACL 2022
1720--1732
Recently, there has been a trend to investigate the factual knowledge captured by Pre-trained Language Models (PLMs). Many works show the PLMs' ability to fill in the missing factual words in cloze-style prompts such as {\textquotedblleft}Dante was born in [MASK].{\textquotedblright} However, it is still a mystery how PLMs generate the results correctly: relying on effective clues or shortcut patterns? We try to answer this question by a causal-inspired analysis that quantitatively measures and evaluates the word-level patterns that PLMs depend on to generate the missing words. We check the words that have three typical associations with the missing words: knowledge-dependent, positionally close, and highly co-occurred. Our analysis shows: (1) PLMs generate the missing factual words more by the positionally close and highly co-occurred words than the knowledge-dependent words; (2) the dependence on the knowledge-dependent words is more effective than the positionally close and highly co-occurred words. Accordingly, we conclude that the PLMs capture the factual knowledge ineffectively because of depending on the inadequate associations.
null
null
10.18653/v1/2022.findings-acl.136
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,061
inproceedings
arora-etal-2022-metadata
Metadata Shaping: A Simple Approach for Knowledge-Enhanced Language Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.137/
Arora, Simran and Wu, Sen and Liu, Enci and Re, Christopher
Findings of the Association for Computational Linguistics: ACL 2022
1733--1745
Popular language models (LMs) struggle to capture knowledge about rare tail facts and entities. Since widely used systems such as search and personal-assistants must support the long tail of entities that users ask about, there has been significant effort towards enhancing these base LMs with factual knowledge. We observe proposed methods typically start with a base LM and data that has been annotated with entity metadata, then change the model, by modifying the architecture or introducing auxiliary loss terms to better capture entity knowledge. In this work, we question this typical process and ask to what extent can we match the quality of model modifications, with a simple alternative: using a base LM and only changing the data. We propose metadata shaping, a method which inserts substrings corresponding to the readily available entity metadata, e.g. types and descriptions, into examples at train and inference time based on mutual information. Despite its simplicity, metadata shaping is quite effective. On standard evaluation benchmarks for knowledge-enhanced LMs, the method exceeds the base-LM baseline by an average of 4.3 F1 points and achieves state-of-the-art results. We further show the gains are on average 4.4x larger for the slice of examples containing tail vs. popular entities.
null
null
10.18653/v1/2022.findings-acl.137
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,062
inproceedings
cui-chen-2022-enhancing
Enhancing Natural Language Representation with Large-Scale Out-of-Domain Commonsense
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.138/
Cui, Wanyun and Chen, Xingran
Findings of the Association for Computational Linguistics: ACL 2022
1746--1756
We study how to enhance text representation via textual commonsense. We point out that commonsense has the nature of domain discrepancy. Namely, commonsense has different data formats and is domain-independent from the downstream task. This nature brings challenges to introducing commonsense in general text understanding tasks. A typical method of introducing textual knowledge is continuing pre-training over the commonsense corpus. However, it will cause catastrophic forgetting to the downstream task due to the domain discrepancy. In addition, previous methods of directly using textual descriptions as extra input information cannot apply to large-scale commonsense. In this paper, we propose to use large-scale out-of-domain commonsense to enhance text representation. In order to effectively incorporate the commonsense, we proposed OK-Transformer (Out-of-domain Knowledge enhanced Transformer). OK-Transformer effectively integrates commonsense descriptions and enhances them to the target text representation. In addition, OK-Transformer can adapt to the Transformer-based language models (e.g. BERT, RoBERTa) for free, without pre-training on large-scale unsupervised corpora. We have verified the effectiveness of OK-Transformer in multiple applications such as commonsense reasoning, general text classification, and low-resource commonsense settings.
null
null
10.18653/v1/2022.findings-acl.138
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,063
inproceedings
he-etal-2022-weighted
Weighted self Distillation for {C}hinese word segmentation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.139/
He, Rian and Cai, Shubin and Ming, Zhong and Zhang, Jialei
Findings of the Association for Computational Linguistics: ACL 2022
1757--1770
Recent research shows that multi-criteria resources and n-gram features are beneficial to Chinese Word Segmentation (CWS). However, these methods rely heavily on the additional information mentioned above and focus less on the model itself. We thus propose a novel neural framework, named Weighted self Distillation for Chinese word segmentation (WeiDC). The framework, which only requires unigram features, adopts self-distillation technology with four hand-crafted weight modules and two teacher model configurations. Experimental results show that WeiDC can make use of character features to learn contextual knowledge and successfully achieve state-of-the-art or competitive performance in terms of strictly closed test settings on SIGHAN Bakeoff benchmark datasets. Moreover, further experiments and analyses also demonstrate the robustness of WeiDC. Source codes of this paper are available on Github.
null
null
10.18653/v1/2022.findings-acl.139
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,064