Dataset columns (name: type, value range):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
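
The columns above correspond one-to-one to BibTeX fields: entry_type and citation_key form the entry header, the remaining non-null columns become fields, and __index_level_0__ is a leftover row index. As an illustration, the short Python sketch below rebuilds a BibTeX entry from one row represented as a plain dict keyed by these column names; the helper name row_to_bibtex and the shortened sample row are hypothetical, not part of the dataset.

```python
# Minimal sketch: turn one row of the dataset (a dict keyed by the column
# names listed above) back into a BibTeX entry. The helper name and the
# sample row are hypothetical; only non-null columns become fields.

def row_to_bibtex(row: dict) -> str:
    meta = {"entry_type", "citation_key", "__index_level_0__"}
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field, value in row.items():
        if field in meta or value is None:
            continue  # null columns (journal, volume, uas, ...) are omitted
        lines.append(f"    {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

# Example with a shortened, hypothetical row:
row = {
    "entry_type": "inproceedings",
    "citation_key": "fierro-sogaard-2022-factual",
    "title": "Factual Consistency of Multilingual Pretrained Language Models",
    "year": "2022",
    "journal": None,  # empty field, dropped from the output
    "__index_level_0__": 26165,
}
print(row_to_bibtex(row))
```

Null columns (journal, volume, uas, recall, and the single-letter metric columns) simply drop out of the rebuilt entry, which is how the sample rows below read.
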
@inproceedings{fierro-sogaard-2022-factual,
    title = {Factual Consistency of Multilingual Pretrained Language Models},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.240/},
    author = {Fierro, Constanza and S{\o}gaard, Anders},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3046--3052},
    abstract = {
Pretrained language models can be queried for factual knowledge, with potential applications in knowledge base acquisition and tasks that require inference. However, for that, we need to know how reliable this knowledge is, and recent work has shown that monolingual English language models lack consistency when predicting factual knowledge, that is, they fill in the blank differently for paraphrases describing the same fact. In this paper, we extend the analysis of consistency to a multilingual setting. We introduce a resource, mParaRel, and investigate (i) whether multilingual language models such as mBERT and XLM-R are more consistent than their monolingual counterparts; and (ii) if such models are equally consistent across languages. We find that mBERT is as inconsistent as English BERT in English paraphrases, but that both mBERT and XLM-R exhibit a high degree of inconsistency in English and even more so for all the other 45 languages.
    },
    doi = {10.18653/v1/2022.findings-acl.240}
}
% __index_level_0__: 26,165

@inproceedings{zhang-etal-2022-selecting,
    title = {Selecting Stickers in Open-Domain Dialogue through Multitask Learning},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.241/},
    author = {Zhang, Zhexin and Zhu, Yeshuang and Fei, Zhengcong and Zhang, Jinchao and Zhou, Jie},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3053--3060},
    abstract = {
With the increasing popularity of online chatting, stickers are becoming important in our online communication. Selecting appropriate stickers in open-domain dialogue requires a comprehensive understanding of both dialogues and stickers, as well as the relationship between the two types of modalities. To tackle these challenges, we propose a multitask learning method comprised of three auxiliary tasks to enhance the understanding of dialogue history, emotion and semantic meaning of stickers. Extensive experiments conducted on a recent challenging dataset show that our model can better combine the multimodal information and achieve significantly higher accuracy over strong baselines. Ablation study further verifies the effectiveness of each auxiliary task. Our code is available at \url{https://github.com/nonstopfor/Sticker-Selection}.
    },
    doi = {10.18653/v1/2022.findings-acl.241}
}
% __index_level_0__: 26,166

@inproceedings{chi-etal-2022-zinet,
    title = {{Z}i{N}et: {L}inking {C}hinese Characters Spanning Three Thousand Years},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.242/},
    author = {Chi, Yang and Giunchiglia, Fausto and Shi, Daqian and Diao, Xiaolei and Li, Chuntao and Xu, Hao},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3061--3070},
    abstract = {
Modern Chinese characters evolved from 3,000 years ago. Up to now, tens of thousands of glyphs of ancient characters have been discovered, which must be deciphered by experts to interpret unearthed documents. Experts usually need to compare each ancient character to be examined with similar known ones in whole historical periods. However, it is inevitably limited by human memory and experience, which often cost a lot of time but associations are limited to a small scope. To help researchers discover glyph similar characters, this paper introduces ZiNet, the first diachronic knowledge base describing relationships and evolution of Chinese characters and words. In addition, powered by the knowledge of radical systems in ZiNet, this paper introduces glyph similarity measurement between ancient Chinese characters, which could capture similar glyph pairs that are potentially related in origins or semantics. Results show strong positive correlations between scores from the method and from human experts. Finally, qualitative analysis and implicit future applications are presented.
    },
    doi = {10.18653/v1/2022.findings-acl.242}
}
% __index_level_0__: 26,167

@inproceedings{jin-etal-2022-cross,
    title = {How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.243/},
    author = {Jin, Hailong and Dong, Tiansi and Hou, Lei and Li, Juanzi and Chen, Hui and Dai, Zelin and Yincen, Qu},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3071--3081},
    abstract = {
Cross-lingual Entity Typing (CLET) aims at improving the quality of entity type prediction by transferring semantic knowledge learned from rich-resourced languages to low-resourced languages. In this paper, by utilizing multilingual transfer learning via the mixture-of-experts approach, our model dynamically capture the relationship between target language and each source language, and effectively generalize to predict types of unseen entities in new languages. Extensive experiments on multi-lingual datasets show that our method significantly outperforms multiple baselines and can robustly handle negative transfer. We questioned the relationship between language similarity and the performance of CLET. A series of experiments refute the commonsense that the more source the better, and suggest the Similarity Hypothesis for CLET.
    },
    doi = {10.18653/v1/2022.findings-acl.243}
}
% __index_level_0__: 26,168

@inproceedings{shou-etal-2022-amr,
    title = {{AMR-DA}: {D}ata Augmentation by {A}bstract {M}eaning {R}epresentation},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.244/},
    author = {Shou, Ziyi and Jiang, Yuxin and Lin, Fangzhen},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3082--3098},
    abstract = {
Abstract Meaning Representation (AMR) is a semantic representation for NLP/NLU. In this paper, we propose to use it for data augmentation in NLP. Our proposed data augmentation technique, called AMR-DA, converts a sample sentence to an AMR graph, modifies the graph according to various data augmentation policies, and then generates augmentations from graphs. Our method combines both sentence-level techniques like back translation and token-level techniques like EDA (Easy Data Augmentation). To evaluate the effectiveness of our method, we apply it to the tasks of semantic textual similarity (STS) and text classification. For STS, our experiments show that AMR-DA boosts the performance of the state-of-the-art models on several STS benchmarks. For text classification, AMR-DA outperforms EDA and AEDA and leads to more robust improvements.
    },
    doi = {10.18653/v1/2022.findings-acl.244}
}
% __index_level_0__: 26,169

@inproceedings{tekiroglu-etal-2022-using,
    title = {Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.245/},
    author = {Tekiro{\u{g}}lu, Serra Sinem and Bonaldi, Helena and Fanton, Margherita and Guerini, Marco},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3099--3114},
    abstract = {
In this work, we present an extensive study on the use of pre-trained language models for the task of automatic Counter Narrative (CN) generation to fight online hate speech in English. We first present a comparative study to determine whether there is a particular Language Model (or class of LMs) and a particular decoding mechanism that are the most appropriate to generate CNs. Findings show that autoregressive models combined with stochastic decodings are the most promising. We then investigate how an LM performs in generating a CN with regard to an unseen target of hate. We find out that a key element for successful {\textquoteleft}out of target' experiments is not an overall similarity with the training data but the presence of a specific subset of training data, i. e. a target that shares some commonalities with the test target that can be defined a-priori. We finally introduce the idea of a pipeline based on the addition of an automatic post-editing step to refine generated CNs.
    },
    doi = {10.18653/v1/2022.findings-acl.245}
}
% __index_level_0__: 26,170

@inproceedings{zhu-etal-2022-improving,
    title = {Improving Robustness of Language Models from a Geometry-aware Perspective},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.246/},
    author = {Zhu, Bin and Gu, Zhaoquan and Wang, Le and Chen, Jinyin and Xuan, Qi},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3115--3125},
    abstract = {
Recent studies have found that removing the norm-bounded projection and increasing search steps in adversarial training can significantly improve robustness. However, we observe that a too large number of search steps can hurt accuracy. We aim to obtain strong robustness efficiently using fewer steps. Through a toy experiment, we find that perturbing the clean data to the decision boundary but not crossing it does not degrade the test accuracy. Inspired by this, we propose friendly adversarial data augmentation (FADA) to generate friendly adversarial data. On top of FADA, we propose geometry-aware adversarial training (GAT) to perform adversarial training on friendly adversarial data so that we can save a large number of search steps. Comprehensive experiments across two widely used datasets and three pre-trained language models demonstrate that GAT can obtain stronger robustness via fewer steps. In addition, we provide extensive empirical results and in-depth analyses on robustness to facilitate future studies.
    },
    doi = {10.18653/v1/2022.findings-acl.246}
}
% __index_level_0__: 26,171

@inproceedings{zeng-etal-2022-task,
    title = {Task-guided Disentangled Tuning for Pretrained Language Models},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.247/},
    author = {Zeng, Jiali and Jiang, Yufan and Wu, Shuangzhi and Yin, Yongjing and Li, Mu},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3126--3137},
    abstract = {
Pretrained language models (PLMs) trained on large-scale unlabeled corpus are typically fine-tuned on task-specific downstream datasets, which have produced state-of-the-art results on various NLP tasks. However, the data discrepancy issue in domain and scale makes fine-tuning fail to efficiently capture task-specific patterns, especially in low data regime. To address this issue, we propose Task-guided Disentangled Tuning (TDT) for PLMs, which enhances the generalization of representations by disentangling task-relevant signals from the entangled representations. For a given task, we introduce a learnable confidence model to detect indicative guidance from context, and further propose a disentangled regularization to mitigate the over-reliance problem. Experimental results on GLUE and CLUE benchmarks show that TDT gives consistently better results than fine-tuning with different PLMs, and extensive analysis demonstrates the effectiveness and robustness of our method. Code is available at \url{https://github.com/lemon0830/TDT}.
    },
    doi = {10.18653/v1/2022.findings-acl.247}
}
% __index_level_0__: 26,172

@inproceedings{cao-etal-2022-exploring,
    title = {Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.248/},
    author = {Cao, Rui and Wang, Yihao and Liang, Yuxin and Gao, Ling and Zheng, Jie and Ren, Jie and Wang, Zheng},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3138--3152},
    abstract = {
Contrastive learning is emerging as a powerful technique for extracting knowledge from unlabeled data. This technique requires a balanced mixture of two ingredients: positive (similar) and negative (dissimilar) samples. This is typically achieved by maintaining a queue of negative samples during training. Prior works in the area typically uses a fixed-length negative sample queue, but how the negative sample size affects the model performance remains unclear. The opaque impact of the number of negative samples on performance when employing contrastive learning aroused our in-depth exploration. This paper presents a momentum contrastive learning model with negative sample queue for sentence embedding, namely MoCoSE. We add the prediction layer to the online branch to make the model asymmetric and together with EMA update mechanism of the target branch to prevent the model from collapsing. We define a maximum traceable distance metric, through which we learn to what extent the text contrastive learning benefits from the historical information of negative samples. Our experiments find that the best results are obtained when the maximum traceable distance is at a certain range, demonstrating that there is an optimal range of historical information for a negative sample queue. We evaluate the proposed unsupervised MoCoSE on the semantic text similarity (STS) task and obtain an average Spearman`s correlation of 77.27{\%}. Source code is available here.
    },
    doi = {10.18653/v1/2022.findings-acl.248}
}
% __index_level_0__: 26,173

@inproceedings{singh-singh-2022-inefficiency,
    title = {The Inefficiency of Language Models in Scholarly Retrieval: An Experimental Walk-through},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.249/},
    author = {Singh, Shruti and Singh, Mayank},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3153--3173},
    abstract = {
Language models are increasingly becoming popular in AI-powered scientific IR systems. This paper evaluates popular scientific language models in handling (i) short-query texts and (ii) textual neighbors. Our experiments showcase the inability to retrieve relevant documents for a short-query text even under the most relaxed conditions. Additionally, we leverage textual neighbors, generated by small perturbations to the original text, to demonstrate that not all perturbations lead to close neighbors in the embedding space. Further, an exhaustive categorization yields several classes of orthographically and semantically related, partially related and completely unrelated neighbors. Retrieval performance turns out to be more influenced by the surface form rather than the semantics of the text.
    },
    doi = {10.18653/v1/2022.findings-acl.249}
}
% __index_level_0__: 26,174

@inproceedings{yuan-etal-2022-fusing,
    title = {Fusing Heterogeneous Factors with Triaffine Mechanism for Nested Named Entity Recognition},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.250/},
    author = {Yuan, Zheng and Tan, Chuanqi and Huang, Songfang and Huang, Fei},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3174--3186},
    abstract = {
Nested entities are observed in many domains due to their compositionality, which cannot be easily recognized by the widely-used sequence labeling framework. A natural solution is to treat the task as a span classification problem. To learn better span representation and increase classification performance, it is crucial to effectively integrate heterogeneous factors including inside tokens, boundaries, labels, and related spans which could be contributing to nested entities recognition. To fuse these heterogeneous factors, we propose a novel triaffine mechanism including triaffine attention and scoring. Triaffine attention uses boundaries and labels as queries and uses inside tokens and related spans as keys and values for span representations. Triaffine scoring interacts with boundaries and span representations for classification. Experiments show that our proposed method outperforms previous span-based methods, achieves the state-of-the-art $F_1$ scores on nested NER datasets GENIA and KBP2017, and shows comparable results on ACE2004 and ACE2005.
    },
    doi = {10.18653/v1/2022.findings-acl.250}
}
% __index_level_0__: 26,175

@inproceedings{li-etal-2022-unimo,
    title = {{UNIMO}-2: End-to-End Unified Vision-Language Grounded Learning},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.251/},
    author = {Li, Wei and Gao, Can and Niu, Guocheng and Xiao, Xinyan and Liu, Hao and Liu, Jiachen and Wu, Hua and Wang, Haifeng},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3187--3201},
    abstract = {
Vision-Language Pre-training (VLP) has achieved impressive performance on various cross-modal downstream tasks. However, most existing methods can only learn from aligned image-caption data and rely heavily on expensive regional features, which greatly limits their scalability and performance. In this paper, we propose an end-to-end unified-modal pre-training framework, namely UNIMO-2, for joint learning on both aligned image-caption data and unaligned image-only and text-only corpus. We build a unified Transformer model to jointly learn visual representations, textual representations and semantic alignment between images and texts. In particular, we propose to conduct grounded learning on both images and texts via a sharing grounded space, which helps bridge unaligned images and texts, and align the visual and textual semantic spaces on different types of corpora. The experiments show that our grounded learning method can improve textual and visual semantic alignment for improving performance on various cross-modal tasks. Moreover, benefiting from effective joint modeling of different types of corpora, our model also achieves impressive performance on single-modal visual and textual tasks. Our code and models are public at the UNIMO project page \url{https://unimo-ptm.github.io/}.
    },
    doi = {10.18653/v1/2022.findings-acl.251}
}
% __index_level_0__: 26,176

@inproceedings{li-etal-2022-past,
    title = {The Past Mistake is the Future Wisdom: Error-driven Contrastive Probability Optimization for {C}hinese Spell Checking},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.252/},
    author = {Li, Yinghui and Zhou, Qingyu and Li, Yangning and Li, Zhongli and Liu, Ruiyang and Sun, Rongyi and Wang, Zizhen and Li, Chao and Cao, Yunbo and Zheng, Hai-Tao},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3202--3213},
    abstract = {
Chinese Spell Checking (CSC) aims to detect and correct Chinese spelling errors, which are mainly caused by the phonological or visual similarity. Recently, pre-trained language models (PLMs) promote the progress of CSC task. However, there exists a gap between the learned knowledge of PLMs and the goal of CSC task. PLMs focus on the semantics in text and tend to correct the erroneous characters to semantically proper or commonly used ones, but these aren`t the ground-truth corrections. To address this issue, we propose an Error-driven COntrastive Probability Optimization (ECOPO) framework for CSC task. ECOPO refines the knowledge representations of PLMs, and guides the model to avoid predicting these common characters through an error-driven way. Particularly, ECOPO is model-agnostic and it can be combined with existing CSC methods to achieve better performance. Extensive experiments and detailed analyses on SIGHAN datasets demonstrate that ECOPO is simple yet effective.
    },
    doi = {10.18653/v1/2022.findings-acl.252}
}
% __index_level_0__: 26,177

@inproceedings{xu-etal-2022-xfund,
    title = {{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.253/},
    author = {Xu, Yiheng and Lv, Tengchao and Cui, Lei and Wang, Guoxin and Lu, Yijuan and Florencio, Dinei and Zhang, Cha and Wei, Furu},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3214--3224},
    abstract = {
Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at \url{https://aka.ms/layoutxlm}.
    },
    doi = {10.18653/v1/2022.findings-acl.253}
}
% __index_level_0__: 26,178

@inproceedings{lai-etal-2022-type,
    title = {Type-Driven Multi-Turn Corrections for Grammatical Error Correction},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.254/},
    author = {Lai, Shaopeng and Zhou, Qingyu and Zeng, Jiali and Li, Zhongli and Li, Chao and Cao, Yunbo and Su, Jinsong},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3225--3236},
    abstract = {
Grammatical Error Correction (GEC) aims to automatically detect and correct grammatical errors. In this aspect, dominant models are trained by one-iteration learning while performing multiple iterations of corrections during inference. Previous studies mainly focus on the data augmentation approach to combat the exposure bias, which suffers from two drawbacks. First, they simply mix additionally-constructed training instances and original ones to train models, which fails to help models be explicitly aware of the procedure of gradual corrections. Second, they ignore the interdependence between different types of corrections. In this paper, we propose a Type-Driven Multi-Turn Corrections approach for GEC. Using this approach, from each training instance, we additionally construct multiple training instances, each of which involves the correction of a specific type of errors. Then, we use these additionally-constructed training instances and the original one to train the model in turn. Experimental results and in-depth analysis show that our approach significantly benefits the model training. Particularly, our enhanced model achieves state-of-the-art single-model performance on English GEC benchmarks. We release our code at Github.
    },
    doi = {10.18653/v1/2022.findings-acl.254}
}
% __index_level_0__: 26,179

@inproceedings{fang-etal-2022-leveraging,
    title = {Leveraging Knowledge in Multilingual Commonsense Reasoning},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.255/},
    author = {Fang, Yuwei and Wang, Shuohang and Xu, Yichong and Xu, Ruochen and Sun, Siqi and Zhu, Chenguang and Zeng, Michael},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3237--3246},
    abstract = {
Commonsense reasoning (CSR) requires models to be equipped with general world knowledge. While CSR is a language-agnostic process, most comprehensive knowledge sources are restricted to a small number of languages, especially English. Thus, it remains unclear how to effectively conduct multilingual commonsense reasoning (XCSR) for various languages. In this work, we propose to use English as a pivot language, utilizing English knowledge sources for our commonsense reasoning framework via a translate-retrieve-translate (TRT) strategy. For multilingual commonsense questions and answer candidates, we collect related knowledge via translation and retrieval from the knowledge in the source language. The retrieved knowledge is then translated into the target language and integrated into a pre-trained multilingual language model via visible knowledge attention. Then we utilize a diverse set of four English knowledge sources to provide more comprehensive coverage of knowledge in different formats. Extensive results on the XCSR benchmark demonstrate that TRT with external knowledge can significantly improve multilingual commonsense reasoning in both zero-shot and translate-train settings, consistently outperforming the state-of-the-art by more than 3{\%} on the multilingual commonsense reasoning benchmark X-CSQA and X-CODAH.
    },
    doi = {10.18653/v1/2022.findings-acl.255}
}
% __index_level_0__: 26,180

@inproceedings{xiang-etal-2022-encoding,
    title = {Encoding and Fusing Semantic Connection and Linguistic Evidence for Implicit Discourse Relation Recognition},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.256/},
    author = {Xiang, Wei and Wang, Bang and Dai, Lu and Mo, Yijun},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3247--3257},
    abstract = {
Prior studies use one attention mechanism to improve contextual semantic representation learning for implicit discourse relation recognition (IDRR). However, diverse relation senses may benefit from different attention mechanisms. We also argue that some linguistic relation in between two words can be further exploited for IDRR. This paper proposes a Multi-Attentive Neural Fusion (MANF) model to encode and fuse both semantic connection and linguistic evidence for IDRR. In MANF, we design a Dual Attention Network (DAN) to learn and fuse two kinds of attentive representation for arguments as its semantic connection. We also propose an Offset Matrix Network (OMN) to encode the linguistic relations of word-pairs as linguistic evidence. Our MANF model achieves the state-of-the-art results on the PDTB 3.0 corpus.
    },
    doi = {10.18653/v1/2022.findings-acl.256}
}
% __index_level_0__: 26,181

@inproceedings{clarke-etal-2022-one,
    title = {One Agent To Rule Them All: Towards Multi-agent Conversational {AI}},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.257/},
    author = {Clarke, Christopher and Peper, Joseph and Krishnamurthy, Karthik and Talamonti, Walter and Leach, Kevin and Lasecki, Walter and Kang, Yiping and Tang, Lingjia and Mars, Jason},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3258--3267},
    abstract = {
The increasing volume of commercially available conversational agents (CAs) on the market has resulted in users being burdened with learning and adopting multiple agents to accomplish their tasks. Though prior work has explored supporting a multitude of domains within the design of a single agent, the interaction experience suffers due to the large action space of desired capabilities. To address these problems, we introduce a new task BBAI: Black-Box Agent Integration, focusing on combining the capabilities of multiple black-box CAs at scale. We explore two techniques: question agent pairing and question response pairing aimed at resolving this task. Leveraging these techniques, we design One For All (OFA), a scalable system that provides a unified interface to interact with multiple CAs. Additionally, we introduce MARS: Multi-Agent Response Selection, a new encoder model for question response pairing that jointly encodes user question and agent response pairs. We demonstrate that OFA is able to automatically and accurately integrate an ensemble of commercially available CAs spanning disparate domains. Specifically, using the MARS encoder we achieve the highest accuracy on our BBAI task, outperforming strong baselines.
    },
    doi = {10.18653/v1/2022.findings-acl.257}
}
% __index_level_0__: 26,182

@inproceedings{hiraoka-etal-2022-word,
    title = {Word-level Perturbation Considering Word Length and Compositional Subwords},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.258/},
    author = {Hiraoka, Tatsuya and Takase, Sho and Uchiumi, Kei and Keyaki, Atsushi and Okazaki, Naoaki},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3268--3275},
    abstract = {
We present two simple modifications for word-level perturbation: Word Replacement considering Length (WR-L) and Compositional Word Replacement (CWR). In conventional word replacement, a word in an input is replaced with a word sampled from the entire vocabulary, regardless of the length and context of the target word. WR-L considers the length of a target word by sampling words from the Poisson distribution. CWR considers the compositional candidates by restricting the source of sampling to related words that appear in subword regularization. Experimental results showed that the combination of WR-L and CWR improved the performance of text classification and machine translation.
    },
    doi = {10.18653/v1/2022.findings-acl.258}
}
% __index_level_0__: 26,183

@inproceedings{zhou-etal-2022-bridging,
    title = {Bridging Pre-trained Language Models and Hand-crafted Features for Unsupervised {POS} Tagging},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.259/},
    author = {Zhou, Houquan and Li, Yang and Li, Zhenghua and Zhang, Min},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3276--3290},
    abstract = {
In recent years, large-scale pre-trained language models (PLMs) have made extraordinary progress in most NLP tasks. But, in the unsupervised POS tagging task, works utilizing PLMs are few and fail to achieve state-of-the-art (SOTA) performance. The recent SOTA performance is yielded by a Gaussian HMM variant proposed by He et al. (2018). However, as a generative model, HMM makes very strong independence assumptions, making it very challenging to incorporate contextualized word representations from PLMs. In this work, we for the first time propose a neural conditional random field autoencoder (CRF-AE) model for unsupervised POS tagging. The discriminative encoder of CRF-AE can straightforwardly incorporate ELMo word representations. Moreover, inspired by feature-rich HMM, we reintroduce hand-crafted features into the decoder of CRF-AE. Finally, experiments clearly show that our model outperforms previous state-of-the-art models by a large margin on Penn Treebank and multilingual Universal Dependencies treebank v2.0.
    },
    doi = {10.18653/v1/2022.findings-acl.259}
}
% __index_level_0__: 26,184

@inproceedings{ji-etal-2022-controlling,
    title = {Controlling the Focus of Pretrained Language Generation Models},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.260/},
    author = {Ji, Jiabao and Kim, Yoon and Glass, James and He, Tianxing},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3291--3306},
    abstract = {
The finetuning of pretrained transformer-based language generation models are typically conducted in an end-to-end manner, where the model learns to attend to relevant parts of the input by itself. However, there does not exist a mechanism to directly control the model`s focus. This work aims to develop a control mechanism by which a user can select spans of context as {\textquotedblleft}highlights{\textquotedblright} for the model to focus on, and generate relevant output. To achieve this goal, we augment a pretrained model with trainable {\textquotedblleft}focus vectors{\textquotedblright} that are directly applied to the model`s embeddings, while the model itself is kept fixed. These vectors, trained on automatic annotations derived from attribution methods, act as indicators for context importance. We test our approach on two core generation tasks: dialogue response generation and abstractive summarization. We also collect evaluation data where the highlight-generation pairs are annotated by humans. Our experiments show that the trained focus vectors are effective in steering the model to generate outputs that are relevant to user-selected highlights.
    },
    doi = {10.18653/v1/2022.findings-acl.260}
}
% __index_level_0__: 26,185

@inproceedings{iso-etal-2022-comparative,
    title = {Comparative Opinion Summarization via Collaborative Decoding},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.261/},
    author = {Iso, Hayate and Wang, Xiaolan and Angelidis, Stefanos and Suhara, Yoshihiko},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3307--3324},
    abstract = {
Opinion summarization focuses on generating summaries that reflect popular subjective information expressed in multiple online reviews. While generated summaries offer general and concise information about a particular hotel or product, the information may be insufficient to help the user compare multiple different choices. Thus, the user may still struggle with the question {\textquotedblleft}Which one should I pick?{\textquotedblright} In this paper, we propose the comparative opinion summarization task, which aims at generating two contrastive summaries and one common summary from two different candidate sets of reviews. We develop a comparative summarization framework CoCoSum, which consists of two base summarization models that jointly generate contrastive and common summaries. Experimental results on a newly created benchmark CoCoTrip show that CoCoSum can produce higher-quality contrastive and common summaries than state-of-the-art opinion summarization models. The dataset and code are available at \url{https://github.com/megagonlabs/cocosum}
    },
    doi = {10.18653/v1/2022.findings-acl.261}
}
% __index_level_0__: 26,186

@inproceedings{rudman-etal-2022-isoscore,
    title = {{I}so{S}core: Measuring the Uniformity of Embedding Space Utilization},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.262/},
    author = {Rudman, William and Gillman, Nate and Rayne, Taylor and Eickhoff, Carsten},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3325--3339},
    abstract = {
The recent success of distributed word representations has led to an increased interest in analyzing the properties of their spatial distribution. Several studies have suggested that contextualized word embedding models do not isotropically project tokens into vector space. However, current methods designed to measure isotropy, such as average random cosine similarity and the partition score, have not been thoroughly analyzed and are not appropriate for measuring isotropy. We propose IsoScore: a novel tool that quantifies the degree to which a point cloud uniformly utilizes the ambient vector space. Using rigorously designed tests, we demonstrate that IsoScore is the only tool available in the literature that accurately measures how uniformly distributed variance is across dimensions in vector space. Additionally, we use IsoScore to challenge a number of recent conclusions in the NLP literature that have been derived using brittle metrics of isotropy. We caution future studies from using existing tools to measure isotropy in contextualized embedding space as resulting conclusions will be misleading or altogether inaccurate.
    },
    doi = {10.18653/v1/2022.findings-acl.262}
}
% __index_level_0__: 26,187

@inproceedings{freitag-etal-2022-natural,
    title = {A Natural Diet: Towards Improving Naturalness of Machine Translation Output},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.263/},
    author = {Freitag, Markus and Vilar, David and Grangier, David and Cherry, Colin and Foster, George},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3340--3353},
    abstract = {
Machine translation (MT) evaluation often focuses on accuracy and fluency, without paying much attention to translation style. This means that, even when considered accurate and fluent, MT output can still sound less natural than high quality human translations or text originally written in the target language. Machine translation output notably exhibits lower lexical diversity, and employs constructs that mirror those in the source sentence. In this work we propose a method for training MT systems to achieve a more natural style, i.e. mirroring the style of text originally written in the target language. Our method tags parallel training data according to the naturalness of the target side by contrasting language models trained on natural and translated data. Tagging data allows us to put greater emphasis on target sentences originally written in the target language. Automatic metrics show that the resulting models achieve lexical richness on par with human translations, mimicking a style much closer to sentences originally written in the target language. Furthermore, we find that their output is preferred by human experts when compared to the baseline translations.
    },
    doi = {10.18653/v1/2022.findings-acl.263}
}
% __index_level_0__: 26,188

@inproceedings{mather-etal-2022-stance,
    title = {From Stance to Concern: Adaptation of Propositional Analysis to New Tasks and Domains},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.264/},
    author = {Mather, Brodie and Dorr, Bonnie and Dalton, Adam and de Beaumont, William and Rambow, Owen and Schmer-Galunder, Sonja},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3354--3367},
    abstract = {
We present a generalized paradigm for adaptation of propositional analysis (predicate-argument pairs) to new tasks and domains. We leverage an analogy between stances (belief-driven sentiment) and concerns (topical issues with moral dimensions/endorsements) to produce an explanatory representation. A key contribution is the combination of semi-automatic resource building for extraction of domain-dependent concern types (with 2-4 hours of human labor per domain) and an entirely automatic procedure for extraction of domain-independent moral dimensions and endorsement values. Prudent (automatic) selection of terms from propositional structures for lexical expansion (via semantic similarity) produces new moral dimension lexicons at three levels of granularity beyond a strong baseline lexicon. We develop a ground truth (GT) based on expert annotators and compare our concern detection output to GT, to yield 231{\%} improvement in recall over baseline, with only a 10{\%} loss in precision. F1 yields 66{\%} improvement over baseline and 97.8{\%} of human performance. Our lexically based approach yields large savings over approaches that employ costly human labor and model building. We provide to the community a newly expanded moral dimension/value lexicon, annotation guidelines, and GT.
    },
    doi = {10.18653/v1/2022.findings-acl.264}
}
% __index_level_0__: 26,189

@inproceedings{novotney-etal-2022-cue,
    title = {{CUE} Vectors: Modular Training of Language Models Conditioned on Diverse Contextual Signals},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.265/},
    author = {Novotney, Scott and Mukherjee, Sreeparna and Ahmed, Zeeshan and Stolcke, Andreas},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3368--3379},
    abstract = {
We propose a framework to modularize the training of neural language models that use diverse forms of context by eliminating the need to jointly train context and within-sentence encoders. Our approach, contextual universal embeddings (CUE), trains LMs on one type of contextual data and adapts to novel context types. The model consists of a pretrained neural sentence LM, a BERT-based contextual encoder, and a masked transfomer decoder that estimates LM probabilities using sentence-internal and contextual evidence. When contextually annotated data is unavailable, our model learns to combine contextual and sentence-internal information using noisy oracle unigram embeddings as a proxy. Real context data can be introduced later and used to adapt a small number of parameters that map contextual data into the decoder`s embedding space. We validate the CUE framework on a NYTimes text corpus with multiple metadata types, for which the LM perplexity can be lowered from 36.6 to 27.4 by conditioning on context. Bootstrapping a contextual LM with only a subset of the metadata during training retains 85{\%} of the achievable gain. Training the model initially with proxy context retains 67{\%} of the perplexity gain after adapting to real context. Furthermore, we can swap one type of pretrained sentence LM for another without retraining the context encoders, by only adapting the decoder model. Overall, we obtain a modular framework that allows incremental, scalable training of context-enhanced LMs.
    },
    doi = {10.18653/v1/2022.findings-acl.265}
}
% __index_level_0__: 26,190

@inproceedings{galperin-etal-2022-cross,
    title = {Cross-Lingual {UMLS} Named Entity Linking using {UMLS} Dictionary Fine-Tuning},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.266/},
    author = {Galperin, Rina and Schnapp, Shachar and Elhadad, Michael},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3380--3390},
    abstract = {
We study cross-lingual UMLS named entity linking, where mentions in a given source language are mapped to UMLS concepts, most of which are labeled in English. Our cross-lingual framework includes an offline unsupervised construction of a translated UMLS dictionary and a per-document pipeline which identifies UMLS candidate mentions and uses a fine-tuned pretrained transformer language model to filter candidates according to context. Our method exploits a small dataset of manually annotated UMLS mentions in the source language and uses this supervised data in two ways: to extend the unsupervised UMLS dictionary and to fine-tune the contextual filtering of candidate mentions in full documents. We demonstrate results of our approach on both Hebrew and English. We achieve new state-of-the-art (SOTA) results on the Hebrew Camoni corpus, +8.9 F1 on average across three communities in the dataset. We also achieve new SOTA on the English dataset MedMentions with +7.3 F1.
    },
    doi = {10.18653/v1/2022.findings-acl.266}
}
% __index_level_0__: 26,191

@inproceedings{o-neill-etal-2022-aligned,
    title = {Aligned Weight Regularizers for Pruning Pretrained Neural Networks},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.267/},
    author = {O{'} Neill, James and Dutta, Sourav and Assem, Haytham},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3391--3401},
    abstract = {
Pruning aims to reduce the number of parameters while maintaining performance close to the original network. This work proposes a novel \textit{self-distillation} based pruning strategy, whereby the representational similarity between the pruned and unpruned versions of the same network is maximized. Unlike previous approaches that treat distillation and pruning separately, we use distillation to inform the pruning criteria, without requiring a separate student network as in knowledge distillation. We show that the proposed \textit{cross-correlation objective for self-distilled pruning} implicitly encourages sparse solutions, naturally complementing magnitude-based pruning criteria. Experiments on the GLUE and XGLUE benchmarks show that self-distilled pruning increases mono- and cross-lingual language model performance. Self-distilled pruned models also outperform smaller Transformers with an equal number of parameters and are competitive against (6 times) larger distilled networks. We also observe that self-distillation (1) maximizes class separability, (2) increases the signal-to-noise ratio, and (3) converges faster after pruning steps, providing further insights into why self-distilled pruning improves generalization.
    },
    doi = {10.18653/v1/2022.findings-acl.267}
}
% __index_level_0__: 26,192

@inproceedings{zhao-etal-2022-consistent,
    title = {Consistent Representation Learning for Continual Relation Extraction},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.268/},
    author = {Zhao, Kang and Xu, Hua and Yang, Jiangong and Gao, Kai},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3402--3411},
    abstract = {
Continual relation extraction (CRE) aims to continuously train a model on data with new relations while avoiding forgetting old ones. Some previous work has proved that storing a few typical samples of old relations and replaying them when learning new relations can effectively avoid forgetting. However, these memory-based methods tend to overfit the memory samples and perform poorly on imbalanced datasets. To solve these challenges, a consistent representation learning method is proposed, which maintains the stability of the relation embedding by adopting contrastive learning and knowledge distillation when replaying memory. Specifically, supervised contrastive learning based on a memory bank is first used to train each new task so that the model can effectively learn the relation representation. Then, contrastive replay is conducted of the samples in memory and makes the model retain the knowledge of historical relations through memory knowledge distillation to prevent the catastrophic forgetting of the old task. The proposed method can better learn consistent representations to alleviate forgetting effectively. Extensive experiments on FewRel and TACRED datasets show that our method significantly outperforms state-of-the-art baselines and yield strong robustness on the imbalanced dataset.
    },
    doi = {10.18653/v1/2022.findings-acl.268}
}
% __index_level_0__: 26,193

@inproceedings{li-etal-2022-event,
    title = {Event Transition Planning for Open-ended Text Generation},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.269/},
    author = {Li, Qintong and Li, Piji and Bi, Wei and Ren, Zhaochun and Lai, Yuxuan and Kong, Lingpeng},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3412--3426},
    abstract = {
Open-ended text generation tasks, such as dialogue generation and story completion, require models to generate a coherent continuation given limited preceding context. The open-ended nature of these tasks brings new challenges to the neural auto-regressive text generators nowadays. Despite these neural models are good at producing human-like text, it is difficult for them to arrange causalities and relations between given facts and possible ensuing events. To bridge this gap, we propose a novel two-stage method which explicitly arranges the ensuing events in open-ended text generation. Our approach can be understood as a specially-trained coarse-to-fine algorithm, where an event transition planner provides a {\textquotedblleft}coarse{\textquotedblright} plot skeleton and a text generator in the second stage refines the skeleton. Experiments on two open-ended text generation tasks demonstrate that our proposed method effectively improves the quality of the generated text, especially in coherence and diversity. We will release the codes to the community for further exploration.
    },
    doi = {10.18653/v1/2022.findings-acl.269}
}
% __index_level_0__: 26,194

@inproceedings{jain-gandhi-2022-comprehensive,
    title = {Comprehensive Multi-Modal Interactions for Referring Image Segmentation},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.270/},
    author = {Jain, Kanishk and Gandhi, Vineet},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3427--3435},
    abstract = {
We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intra-modal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach`s performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods.
    },
    doi = {10.18653/v1/2022.findings-acl.270}
}
% __index_level_0__: 26,195

@inproceedings{mao-etal-2022-metaweighting,
    title = {{M}eta{W}eighting: Learning to Weight Tasks in Multi-Task Learning},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.271/},
    author = {Mao, Yuren and Wang, Zekai and Liu, Weiwei and Lin, Xuemin and Xie, Pengtao},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3436--3448},
    abstract = {
Task weighting, which assigns weights on the including tasks during training, significantly matters the performance of Multi-task Learning (MTL); thus, recently, there has been an explosive interest in it. However, existing task weighting methods assign weights only based on the training loss, while ignoring the gap between the training loss and generalization loss. It degenerates MTL`s performance. To address this issue, the present paper proposes a novel task weighting algorithm, which automatically weights the tasks via a learning-to-learn paradigm, referred to as MetaWeighting. Extensive experiments are conducted to validate the superiority of our proposed method in multi-task text classification.
    },
    doi = {10.18653/v1/2022.findings-acl.271}
}
% __index_level_0__: 26,196

@inproceedings{gu-etal-2022-improving,
    title = {Improving Controllable Text Generation with Position-Aware Weighted Decoding},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.272/},
    author = {Gu, Yuxuan and Feng, Xiaocheng and Ma, Sicheng and Wu, Jiaming and Gong, Heng and Qin, Bing},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3449--3467},
    abstract = {
Weighted decoding methods composed of the pretrained language model (LM) and the controller have achieved promising results for controllable text generation. However, these models often suffer from a control strength/fluency trade-off problem as higher control strength is more likely to generate incoherent and repetitive text. In this paper, we illustrate this trade-off is arisen by the controller imposing the target attribute on the LM at improper positions. And we propose a novel framework based on existing weighted decoding methods called CAT-PAW, which introduces a lightweight regulator to adjust bias signals from the controller at different decoding positions. Experiments on positive sentiment control, topic control, and language detoxification show the effectiveness of our CAT-PAW upon 4 SOTA models.
    },
    doi = {10.18653/v1/2022.findings-acl.272}
}
% __index_level_0__: 26,197

@inproceedings{yao-etal-2022-prompt,
    title = {Prompt Tuning for Discriminative Pre-trained Language Models},
    editor = {Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline},
    month = may,
    year = {2022},
    address = {Dublin, Ireland},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2022.findings-acl.273/},
    author = {Yao, Yuan and Dong, Bowen and Zhang, Ao and Zhang, Zhengyan and Xie, Ruobing and Liu, Zhiyuan and Lin, Leyu and Sun, Maosong and Wang, Jianyong},
    booktitle = {Findings of the Association for Computational Linguistics: ACL 2022},
    pages = {3468--3473},
    abstract = {
Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks. However, to the best of our knowledge, existing works focus on prompt-tuning generative PLMs that are pre-trained to generate target tokens, such as BERT. It is still unknown whether and how discriminative PLMs, e.g., ELECTRA, can be effectively prompt-tuned. In this work, we present DPT, the first prompt tuning framework for discriminative PLMs, which reformulates NLP tasks into a discriminative language modeling problem. Comprehensive experiments on text classification and question answering show that, compared with vanilla fine-tuning, DPT achieves significantly higher performance, and also prevents the unstable problem in tuning large PLMs in both full-set and low-resource settings.
null
null
10.18653/v1/2022.findings-acl.273
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,198
inproceedings
wu-etal-2022-two
Two Birds with One Stone: Unified Model Learning for Both Recall and Ranking in News Recommendation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.274/
Wu, Chuhan and Wu, Fangzhao and Qi, Tao and Huang, Yongfeng
Findings of the Association for Computational Linguistics: ACL 2022
3474--3480
Recall and ranking are two critical steps in personalized news recommendation. Most existing news recommender systems conduct personalized news recall and ranking separately with different models. However, maintaining multiple models leads to high computational cost and poses great challenges to meeting the online latency requirement of news recommender systems. In order to handle this problem, in this paper we propose UniRec, a unified method for recall and ranking in news recommendation. In our method, we first infer the user embedding for ranking from the historical news click behaviors of a user using a user encoder model. Then we derive the user embedding for recall from the obtained user embedding for ranking by using it as the attention query to select a set of basis user embeddings which encode different general user interests and synthesize them into a user embedding for recall. Extensive experiments on a benchmark dataset demonstrate that our method can improve both efficiency and effectiveness for recall and ranking in news recommendation.
null
null
10.18653/v1/2022.findings-acl.274
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,199
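The UniRec abstract above describes using the ranking user embedding as an attention query over basis user embeddings to synthesize a recall embedding. Here is a minimal Python sketch of that attention step under assumed shapes; the function name and the plain dot-product/softmax attention are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def recall_embedding_from_ranking(ranking_emb, basis_embs):
    """Use the ranking user embedding as an attention query over a set of
    basis user embeddings and synthesize a recall user embedding."""
    scores = basis_embs @ ranking_emb                # (num_basis,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax attention weights
    return weights @ basis_embs                      # (dim,) recall embedding

# Example with 8 basis interest embeddings of dimension 16.
rng = np.random.default_rng(0)
recall_emb = recall_embedding_from_ranking(rng.normal(size=16),
                                           rng.normal(size=(8, 16)))
```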
inproceedings
fang-etal-2022-take
What does it take to bake a cake? The {R}ecipe{R}ef corpus and anaphora resolution in procedural text
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.275/
Fang, Biaoyan and Baldwin, Timothy and Verspoor, Karin
Findings of the Association for Computational Linguistics: ACL 2022
3481--3495
Procedural text contains rich anaphoric phenomena, yet has not received much attention in NLP. To fill this gap, we investigate the textual properties of two types of procedural text, recipes and chemical patents, and generalize an anaphora annotation framework developed for the chemical domain for modeling anaphoric phenomena in recipes. We apply this framework to annotate the RecipeRef corpus with both bridging and coreference relations. Through comparison to chemical patents, we show the complexity of anaphora resolution in recipes. We demonstrate empirically that transfer learning from the chemical domain improves resolution of anaphora in recipes, suggesting transferability of general procedural knowledge.
null
null
10.18653/v1/2022.findings-acl.275
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,200
inproceedings
jiao-etal-2022-merit
{MERI}t: {M}eta-{P}ath {G}uided {C}ontrastive {L}earning for {L}ogical {R}easoning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.276/
Jiao, Fangkai and Guo, Yangyang and Song, Xuemeng and Nie, Liqiang
Findings of the Association for Computational Linguistics: ACL 2022
3496--3509
Logical reasoning is of vital importance to natural language understanding. Previous studies either employ graph-based models to incorporate prior knowledge about logical relations, or introduce symbolic logic into neural models through data augmentation. These methods, however, heavily depend on annotated training data, and thus suffer from over-fitting and poor generalization problems due to the dataset sparsity. To address these two problems, in this paper, we propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text, to perform self-supervised pre-training on abundant unlabeled text data. Two novel strategies serve as indispensable components of our method. In particular, a strategy based on meta-path is devised to discover the logical structure in natural texts, followed by a counterfactual data augmentation strategy to eliminate the information shortcut induced by pre-training. The experimental results on two challenging logical reasoning benchmarks, i.e., ReClor and LogiQA, demonstrate that our method outperforms the SOTA baselines with significant improvements.
null
null
10.18653/v1/2022.findings-acl.276
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,201
inproceedings
chen-etal-2022-x
{THE}-{X}: Privacy-Preserving Transformer Inference with Homomorphic Encryption
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.277/
Chen, Tianyu and Bao, Hangbo and Huang, Shaohan and Dong, Li and Jiao, Binxing and Jiang, Daxin and Zhou, Haoyi and Li, Jianxin and Wei, Furu
Findings of the Association for Computational Linguistics: ACL 2022
3510--3520
As more and more pre-trained language models adopt on-cloud deployment, privacy issues grow quickly, mainly due to the exposure of plain-text user data (e.g., search history, medical records, bank accounts). Privacy-preserving inference of transformer models is in demand among cloud service users. To protect privacy, it is an attractive choice to compute only with ciphertext in homomorphic encryption (HE). However, enabling pre-trained model inference on ciphertext data is difficult due to the complex computations in transformer blocks, which are not supported by current HE tools yet. In this work, we introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models developed by popular frameworks. THE-X proposes a workflow to deal with complex computation in transformer networks, including all the non-polynomial functions like GELU, softmax, and LayerNorm. Experiments reveal our proposed THE-X can enable transformer inference on encrypted data for different downstream tasks, all with negligible performance drop but enjoying the theory-guaranteed privacy-preserving advantage.
null
null
10.18653/v1/2022.findings-acl.277
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,202
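The THE-X abstract above hinges on replacing non-polynomial operations (GELU, softmax, LayerNorm) with HE-friendly computation, since typical HE schemes only support additions and multiplications. The sketch below only illustrates the general idea of approximating one such activation (GELU) with a low-degree polynomial via least squares on a bounded range; it is not THE-X's actual approximation scheme or workflow.

```python
import numpy as np
from math import erf

def gelu(x):
    """Exact GELU via the Gaussian CDF (erf form)."""
    return np.array([0.5 * v * (1.0 + erf(v / np.sqrt(2.0))) for v in x])

# Fit a low-degree polynomial to GELU on a bounded range; HE schemes can
# evaluate polynomials (additions/multiplications) but not erf/exp directly.
xs = np.linspace(-4, 4, 200)
coeffs = np.polyfit(xs, gelu(xs), deg=4)
poly_gelu = np.polyval(coeffs, xs)
max_abs_error = np.max(np.abs(poly_gelu - gelu(xs)))
```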
inproceedings
kapoor-etal-2022-hldc
{HLDC}: {H}indi Legal Documents Corpus
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.278/
Kapoor, Arnav and Dhawan, Mudit and Goel, Anmol and T H, Arjun and Bhatnagar, Akshala and Agrawal, Vibhu and Agrawal, Amul and Bhattacharya, Arnab and Kumaraguru, Ponnurangam and Modi, Ashutosh
Findings of the Association for Computational Linguistics: ACL 2022
3521--3536
Many populous countries including India are burdened with a considerable backlog of legal cases. Development of automated systems that could process legal documents and augment legal practitioners can mitigate this. However, there is a dearth of high-quality corpora that are needed to develop such data-driven systems. The problem gets even more pronounced in the case of low resource languages such as Hindi. In this resource paper, we introduce the Hindi Legal Documents Corpus (HLDC), a corpus of more than 900K legal documents in Hindi. Documents are cleaned and structured to enable the development of downstream applications. Further, as a use-case for the corpus, we introduce the task of bail prediction. We experiment with a battery of models and propose a Multi-Task Learning (MTL) based model for the same. MTL models use summarization as an auxiliary task along with bail prediction as the main task. Experiments with different models are indicative of the need for further research in this area.
null
null
10.18653/v1/2022.findings-acl.278
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,203
inproceedings
sun-etal-2022-rethinking
Rethinking Document-level Neural Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.279/
Sun, Zewei and Wang, Mingxuan and Zhou, Hao and Zhao, Chengqi and Huang, Shujian and Chen, Jiajun and Li, Lei
Findings of the Association for Computational Linguistics: ACL 2022
3537--3548
This paper does not aim at introducing a novel model for document-level neural machine translation. Instead, we head back to the original Transformer model and hope to answer the following question: Is the capacity of current models strong enough for document-level translation? Interestingly, we observe that the original Transformer with appropriate training techniques can achieve strong results for document translation, even with a length of 2000 words. We evaluate this model and several recent approaches on nine document-level datasets and two sentence-level datasets across six languages. Experiments show that document-level Transformer models outperform sentence-level ones and many previous methods in a comprehensive set of metrics, including BLEU, four lexical indices, three newly proposed assistant linguistic indicators, and human evaluation.
null
null
10.18653/v1/2022.findings-acl.279
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,204
inproceedings
bai-etal-2022-incremental
Incremental Intent Detection for Medical Domain with Contrast Replay Networks
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.280/
Bai, Guirong and He, Shizhu and Liu, Kang and Zhao, Jun
Findings of the Association for Computational Linguistics: ACL 2022
3549--3556
Conventional approaches to medical intent detection require fixed pre-defined intent categories. However, due to the incessant emergence of new medical intents in the real world, such a requirement is not practical. Considering that it is computationally expensive to store and re-train the whole data every time new data and intents come in, we propose to incrementally learn emerged intents while avoiding catastrophic forgetting of old intents. We first formulate incremental learning for medical intent detection. Then, we employ a memory-based method to handle incremental learning. We further propose to enhance the method with contrast replay networks, which use multilevel distillation and contrast objective to address training data imbalance and medical rare words respectively. Experiments show that the proposed method outperforms the state-of-the-art model by 5.7{\%} and 9.1{\%} of accuracy on two benchmarks respectively.
null
null
10.18653/v1/2022.findings-acl.280
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,205
inproceedings
xu-etal-2022-laprador
{L}a{P}ra{D}o{R}: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.281/
Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian
Findings of the Association for Computational Linguistics: ACL 2022
3557--3569
In this paper, we propose LaPraDoR, a pretrained dual-tower dense retriever that does not require any supervised data for training. Specifically, we first present Iterative Contrastive Learning (ICoL) that iteratively trains the query and document encoders with a cache mechanism. ICoL not only enlarges the number of negative instances but also keeps representations of cached examples in the same hidden space. We then propose Lexicon-Enhanced Dense Retrieval (LEDR) as a simple yet effective way to enhance dense retrieval with lexical matching. We evaluate LaPraDoR on the recently proposed BEIR benchmark, including 18 datasets of 9 zero-shot text retrieval tasks. Experimental results show that LaPraDoR achieves state-of-the-art performance compared with supervised dense retrieval models, and further analysis reveals the effectiveness of our training strategy and objectives. Compared to re-ranking, our lexicon-enhanced approach can be run in milliseconds (22.5x faster) while achieving superior performance.
null
null
10.18653/v1/2022.findings-acl.281
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,206
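The LaPraDoR abstract above mentions Lexicon-Enhanced Dense Retrieval, i.e., enhancing a dense similarity score with lexical matching. The following Python sketch shows one plausible combination (multiplying a cosine similarity by a crude lexical-overlap score); the function names, the overlap measure, and the multiplicative combination are assumptions for illustration and may differ from the paper's exact scheme.

```python
import numpy as np

def lexical_overlap(query_tokens, doc_tokens):
    """Crude lexical score: fraction of query tokens appearing in the document."""
    doc = set(doc_tokens)
    return sum(t in doc for t in query_tokens) / max(len(query_tokens), 1)

def lexicon_enhanced_score(q_emb, d_emb, query_tokens, doc_tokens):
    """Combine a dense similarity with a lexical-matching score; here they are
    simply multiplied, which is only one plausible combination."""
    dense = float(q_emb @ d_emb /
                  (np.linalg.norm(q_emb) * np.linalg.norm(d_emb)))
    return dense * lexical_overlap(query_tokens, doc_tokens)
```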
inproceedings
lv-etal-2022-pre
Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.282/
Lv, Xin and Lin, Yankai and Cao, Yixin and Hou, Lei and Li, Juanzi and Liu, Zhiyuan and Li, Peng and Zhou, Jie
Findings of the Association for Computational Linguistics: ACL 2022
3570--3581
In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models. However, these models are still quite behind the SOTA KGC models in terms of performance. In this work, we find two main reasons for the weak performance: (1) Inaccurate evaluation setting. The evaluation setting under the closed-world assumption (CWA) may underestimate the PLM-based KGC models since they introduce more external knowledge; (2) Inappropriate utilization of PLMs. Most PLM-based KGC models simply splice the labels of entities and relations as inputs, leading to incoherent sentences that do not take full advantage of the implicit knowledge in PLMs. To alleviate these problems, we highlight a more accurate evaluation setting under the open-world assumption (OWA), which manually checks the correctness of knowledge that is not in KGs. Moreover, motivated by prompt tuning, we propose a novel PLM-based KGC model named PKGC. The basic idea is to convert each triple and its support information into natural prompt sentences, which are further fed into PLMs for classification. Experimental results on two KGC datasets demonstrate that OWA is more reliable for evaluating KGC, especially on link prediction, and show the effectiveness of our PKGC model in both CWA and OWA settings.
null
null
10.18653/v1/2022.findings-acl.282
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,207
inproceedings
zhao-yao-2022-eico
{EICO}: Improving Few-Shot Text Classification via Explicit and Implicit Consistency Regularization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.283/
Zhao, Lei and Yao, Cheng
Findings of the Association for Computational Linguistics: ACL 2022
3582--3587
While prompt-based fine-tuning methods have advanced few-shot natural language understanding tasks, self-training methods are also being explored. This work revisits consistency regularization in self-training and presents explicit and implicit consistency regularization enhanced language model (EICO). By employing both explicit and implicit consistency regularization, EICO advances the performance of prompt-based few-shot text classification. For implicit consistency regularization, we generate the pseudo-label from the weakly-augmented view and predict the pseudo-label from the strongly-augmented view. For explicit consistency regularization, we minimize the difference between the prediction of the augmentation view and the prediction of the original view. We conducted extensive experiments on six text classification datasets and found that with sixteen labeled examples, EICO achieves competitive performance compared to existing self-training few-shot learning methods.
null
null
10.18653/v1/2022.findings-acl.283
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,208
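The EICO abstract above describes two consistency terms: an implicit one (train the strongly-augmented view on a pseudo-label from the weakly-augmented view) and an explicit one (pull the augmented-view prediction toward the original-view prediction). Here is a minimal Python sketch of those two terms; the confidence threshold and the squared-error form of the explicit term are assumptions added for illustration.

```python
import numpy as np

def eico_consistency_losses(p_orig, p_weak, p_strong, threshold=0.9):
    """Sketch of the two consistency terms. All inputs are probability vectors
    over the label set for one example."""
    pseudo = np.argmax(p_weak)                        # pseudo-label from weak view
    confident = p_weak[pseudo] >= threshold           # optional confidence gate
    implicit = -np.log(p_strong[pseudo] + 1e-12) if confident else 0.0
    explicit = np.sum((p_strong - p_orig) ** 2)       # squared-error consistency
    return implicit, explicit
```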
inproceedings
zhang-etal-2022-improving
Improving the Adversarial Robustness of {NLP} Models by Information Bottleneck
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.284/
Zhang, Cenyuan and Zhou, Xiang and Wan, Yixin and Zheng, Xiaoqing and Chang, Kai-Wei and Hsieh, Cho-Jui
Findings of the Association for Computational Linguistics: ACL 2022
3588--3598
Existing studies have demonstrated that adversarial examples can be directly attributed to the presence of non-robust features, which are highly predictive, but can be easily manipulated by adversaries to fool NLP models. In this study, we explore the feasibility of capturing task-specific robust features, while eliminating the non-robust ones by using the information bottleneck theory. Through extensive experiments, we show that the models trained with our information bottleneck-based method are able to achieve a significant improvement in robust accuracy, exceeding performances of all the previously reported defense methods while suffering almost no performance drop in clean accuracy on SST-2, AGNEWS and IMDB datasets.
null
null
10.18653/v1/2022.findings-acl.284
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,209
inproceedings
zhang-etal-2022-incorporating
Incorporating Dynamic Semantics into Pre-Trained Language Model for Aspect-based Sentiment Analysis
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.285/
Zhang, Kai and Zhang, Kun and Zhang, Mengdi and Zhao, Hongke and Liu, Qi and Wu, Wei and Chen, Enhong
Findings of the Association for Computational Linguistics: ACL 2022
3599--3610
Aspect-based sentiment analysis (ABSA) predicts sentiment polarity towards a specific aspect in the given sentence. While pre-trained language models such as BERT have achieved great success, incorporating dynamic semantic changes into ABSA remains challenging. To this end, in this paper, we propose to address this problem by Dynamic Re-weighting BERT (DR-BERT), a novel method designed to learn dynamic aspect-oriented semantics for ABSA. Specifically, we first take the Stack-BERT layers as a primary encoder to grasp the overall semantic of the sentence and then fine-tune it by incorporating a lightweight Dynamic Re-weighting Adapter (DRA). Note that the DRA can pay close attention to a small region of the sentences at each step and re-weigh the vitally important words for better aspect-aware sentiment understanding. Finally, experimental results on three benchmark datasets demonstrate the effectiveness and the rationality of our proposed model and provide good interpretable insights for future semantic modeling.
null
null
10.18653/v1/2022.findings-acl.285
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,210
inproceedings
xing-tsang-2022-darer
{DARER}: Dual-task Temporal Relational Recurrent Reasoning Network for Joint Dialog Sentiment Classification and Act Recognition
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.286/
Xing, Bowen and Tsang, Ivor
Findings of the Association for Computational Linguistics: ACL 2022
3611--3621
The task of joint dialog sentiment classification (DSC) and act recognition (DAR) aims to simultaneously predict the sentiment label and act label for each utterance in a dialog. In this paper, we put forward a new framework which models the explicit dependencies via integrating \textit{prediction-level interactions} other than semantics-level interactions, more consistent with human intuition. Besides, we propose a speaker-aware temporal graph (SATG) and a dual-task relational temporal graph (DRTG) to introduce \textit{temporal relations} into dialog understanding and dual-task reasoning. To implement our framework, we propose a novel model dubbed DARER, which first generates the context-, speaker- and temporal-sensitive utterance representations via modeling SATG, then conducts recurrent dual-task relational reasoning on DRTG, in which process the estimated label distributions act as key clues in prediction-level interactions. Experiment results show that DARER outperforms existing models by large margins while requiring far fewer computational resources and less training time. Remarkably, on the DSC task in Mastodon, DARER gains a relative improvement of about 25{\%} over the previous best model in terms of F1, with less than 50{\%} of the parameters and only about 60{\%} of the required GPU memory.
null
null
10.18653/v1/2022.findings-acl.286
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,211
inproceedings
zou-etal-2022-divide
Divide and Conquer: Text Semantic Matching with Disentangled Keywords and Intents
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.287/
Zou, Yicheng and Liu, Hongwei and Gui, Tao and Wang, Junzhe and Zhang, Qi and Tang, Meng and Li, Haixiang and Wang, Daniell
Findings of the Association for Computational Linguistics: ACL 2022
3622--3632
Text semantic matching is a fundamental task that has been widely used in various scenarios, such as community question answering, information retrieval, and recommendation. Most state-of-the-art matching models, e.g., BERT, directly perform text comparison by processing each word uniformly. However, a query sentence generally comprises content that calls for different levels of matching granularity. Specifically, keywords represent factual information such as action, entity, and event that should be strictly matched, while intents convey abstract concepts and ideas that can be paraphrased into various expressions. In this work, we propose a simple yet effective training strategy for text semantic matching in a divide-and-conquer manner by disentangling keywords from intents. Our approach can be easily combined with pre-trained language models (PLM) without influencing their inference efficiency, achieving stable performance improvements against a wide range of PLMs on three benchmarks.
null
null
10.18653/v1/2022.findings-acl.287
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,212
inproceedings
chen-etal-2022-modular
Modular Domain Adaptation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.288/
Chen, Junshen and Card, Dallas and Jurafsky, Dan
Findings of the Association for Computational Linguistics: ACL 2022
3633--3655
Off-the-shelf models are widely used by computational social science researchers to measure properties of text, such as sentiment. However, without access to source data it is difficult to account for domain shift, which represents a threat to validity. Here, we treat domain adaptation as a modular process that involves separate model producers and model consumers, and show how they can independently cooperate to facilitate more accurate measurements of text. We introduce two lightweight techniques for this scenario, and demonstrate that they reliably increase out-of-domain accuracy on four multi-domain text classification datasets when used with linear and contextual embedding models. We conclude with recommendations for model producers and consumers, and release models and replication code to accompany this paper.
null
null
10.18653/v1/2022.findings-acl.288
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,213
inproceedings
yoo-etal-2022-detection
Detection of Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.289/
Yoo, KiYoon and Kim, Jangho and Jang, Jiho and Kwak, Nojun
Findings of the Association for Computational Linguistics: ACL 2022
3656--3672
Word-level adversarial attacks have shown success in NLP models, drastically decreasing the performance of transformer-based models in recent years. As a countermeasure, adversarial defense has been explored, but relatively few efforts have been made to detect adversarial examples. However, detecting adversarial examples may be crucial for automated tasks (e.g. review sentiment analysis) that wish to amass information about a certain population and additionally be a step towards a robust defense system. To this end, we release a dataset for four popular attack methods on four datasets and four models to encourage further research in this field. Along with it, we propose a competitive baseline based on density estimation that has the highest AUC on 29 out of 30 dataset-attack-model combinations. The source code is released (\url{https://github.com/bangawayoo/adversarial-examples-in-text-classification}).
null
null
10.18653/v1/2022.findings-acl.289
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,214
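The abstract above proposes a detection baseline based on density estimation over model features. As an illustration of that general idea (not the paper's robust estimator), the sketch below fits a single Gaussian to features of clean examples and scores test inputs by negative Mahalanobis distance, flagging low-density inputs as likely adversarial; the ridge term and the Gaussian assumption are simplifications.

```python
import numpy as np

def fit_gaussian(features):
    """Fit a multivariate Gaussian to features of clean examples (rows are
    examples); a small ridge keeps the covariance invertible."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-3 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def detection_score(x, mu, cov_inv):
    """Negative Mahalanobis distance: lower density -> more likely adversarial."""
    d = x - mu
    return -float(d @ cov_inv @ d)
```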
inproceedings
singh-goshtasbpour-2022-platt
{P}latt-Bin: Efficient Posterior Calibrated Training for {NLP} Classifiers
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.290/
Singh, Rishabh and Goshtasbpour, Shirin
Findings of the Association for Computational Linguistics: ACL 2022
3673--3684
Modern NLP classifiers are known to return uncalibrated estimations of class posteriors. Existing methods for posterior calibration rescale the predicted probabilities but often have an adverse impact on final classification accuracy, thus leading to poorer generalization. We propose an end-to-end trained calibrator, Platt-Binning, that directly optimizes the objective while minimizing the difference between the predicted and empirical posterior probabilities. Our method leverages the sample efficiency of Platt scaling and the verification guarantees of histogram binning, thus not only reducing the calibration error but also improving task performance. In contrast to existing calibrators, we perform this efficient calibration during training. Empirical evaluation of benchmark NLP classification tasks echoes the efficacy of our proposal.
null
null
10.18653/v1/2022.findings-acl.290
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,215
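The Platt-Bin abstract above combines Platt scaling with histogram binning. The Python sketch below shows the two classical post-hoc ingredients separately (a fitted sigmoid, then bin-wise empirical recalibration on a held-out set); note this is only the standard post-hoc view of the two components, whereas the paper trains the calibrator end-to-end during training, and the parameter names here are illustrative.

```python
import numpy as np

def platt_scale(logits, a, b):
    """Platt scaling: squash a raw score through a fitted sigmoid."""
    return 1.0 / (1.0 + np.exp(-(a * logits + b)))

def histogram_bin(probs, labels, n_bins=10):
    """Histogram binning: replace each probability by the empirical positive
    rate of its bin, estimated on a calibration set (labels is a 0/1 array)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    bin_values = np.array([labels[bin_ids == b].mean() if np.any(bin_ids == b)
                           else (edges[b] + edges[b + 1]) / 2
                           for b in range(n_bins)])
    return bin_values[bin_ids]
```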
inproceedings
yang-etal-2022-addressing
Addressing Resource and Privacy Constraints in Semantic Parsing Through Data Augmentation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.291/
Yang, Kevin and Deng, Olivia and Chen, Charles and Shin, Richard and Roy, Subhro and Van Durme, Benjamin
Findings of the Association for Computational Linguistics: ACL 2022
3685--3695
We introduce a novel setup for low-resource task-oriented semantic parsing which incorporates several constraints that may arise in real-world scenarios: (1) lack of similar datasets/models from a related domain, (2) inability to sample useful logical forms directly from a grammar, and (3) privacy requirements for unlabeled natural utterances. Our goal is to improve a low-resource semantic parser using utterances collected through user interactions. In this highly challenging but realistic setting, we investigate data augmentation approaches involving generating a set of structured canonical utterances corresponding to logical forms, before simulating corresponding natural language and filtering the resulting pairs. We find that such approaches are effective despite our restrictive setup: in a low-resource setting on the complex SMCalFlow calendaring dataset (Andreas et al. 2020), we observe 33{\%} relative improvement over a non-data-augmented baseline in top-1 match.
null
null
10.18653/v1/2022.findings-acl.291
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,216
inproceedings
lai-etal-2022-improving
Improving Candidate Retrieval with Entity Profile Generation for {W}ikidata Entity Linking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.292/
Lai, Tuan and Ji, Heng and Zhai, ChengXiang
Findings of the Association for Computational Linguistics: ACL 2022
3696--3711
Entity linking (EL) is the task of linking entity mentions in a document to referent entities in a knowledge base (KB). Many previous studies focus on Wikipedia-derived KBs. There is little work on EL over Wikidata, even though it is the most extensive crowdsourced KB. The scale of Wikidata can open up many new real-world applications, but its massive number of entities also makes EL challenging. To effectively narrow down the search space, we propose a novel candidate retrieval paradigm based on entity profiling. Wikidata entities and their textual fields are first indexed into a text search engine (e.g., Elasticsearch). During inference, given a mention and its context, we use a sequence-to-sequence (seq2seq) model to generate the profile of the target entity, which consists of its title and description. We use the profile to query the indexed search engine to retrieve candidate entities. Our approach complements the traditional approach of using a Wikipedia anchor-text dictionary, enabling us to further design a highly effective hybrid method for candidate retrieval. Combined with a simple cross-attention reranker, our complete EL framework achieves state-of-the-art results on three Wikidata-based datasets and strong performance on TACKBP-2010.
null
null
10.18653/v1/2022.findings-acl.292
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,217
inproceedings
clouatre-etal-2022-local
Local Structure Matters Most: Perturbation Study in {NLU}
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.293/
Clouatre, Louis and Parthasarathi, Prasanna and Zouaq, Amal and Chandar, Sarath
Findings of the Association for Computational Linguistics: ACL 2022
3712--3731
Recent research analyzing the sensitivity of natural language understanding models to word-order perturbations has shown that neural models are surprisingly insensitive to the order of words. In this paper, we investigate this phenomenon by developing order-altering perturbations on the order of words, subwords, and characters to analyze their effect on neural models' performance on language understanding tasks. We experiment with measuring the impact of perturbations to the local neighborhood of characters and global position of characters in the perturbed texts and observe that perturbation functions found in prior literature only affect the global ordering while the local ordering remains relatively unperturbed. We empirically show that neural models, invariant of their inductive biases, pretraining scheme, or the choice of tokenization, mostly rely on the local structure of text to build understanding and make limited use of the global structure.
null
null
10.18653/v1/2022.findings-acl.293
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,218
inproceedings
west-etal-2022-probing
Probing Factually Grounded Content Transfer with Factual Ablation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.294/
West, Peter and Quirk, Chris and Galley, Michel and Choi, Yejin
Findings of the Association for Computational Linguistics: ACL 2022
3732--3746
Despite recent success, large neural models often generate factually incorrect text. Compounding this is the lack of a standard automatic evaluation for factuality{--}it cannot be meaningfully improved if it cannot be measured. Grounded generation promises a path to solving both of these problems: models draw on a reliable external document (grounding) for factual information, simplifying the challenge of factuality. Measuring factuality is also simplified{--}to factual consistency, testing whether the generation agrees with the grounding, rather than all facts. Yet, without a standard automatic metric for factual consistency, factually grounded generation remains an open problem. We study this problem for content transfer, in which generations extend a prompt, using information from factual grounding. Particularly, this domain allows us to introduce the notion of factual ablation for automatically measuring factual consistency: this captures the intuition that the model should be less likely to produce an output given a less relevant grounding document. In practice, we measure this by presenting a model with two grounding documents, and the model should prefer to use the more factually relevant one. We contribute two evaluation sets to measure this. Applying our new evaluation, we propose multiple novel methods improving over strong baselines.
null
null
10.18653/v1/2022.findings-acl.294
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,219
inproceedings
hui-etal-2022-ed2lm
{ED}2{LM}: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.295/
Hui, Kai and Zhuang, Honglei and Chen, Tao and Qin, Zhen and Lu, Jing and Bahri, Dara and Ma, Ji and Gupta, Jai and Nogueira dos Santos, Cicero and Tay, Yi and Metzler, Donald
Findings of the Association for Computational Linguistics: ACL 2022
3747--3758
State-of-the-art neural models typically encode document-query pairs using cross-attention for re-ranking. To this end, models generally utilize an encoder-only (like BERT) paradigm or an encoder-decoder (like T5) approach. These paradigms, however, are not without flaws, i.e., running the model on all query-document pairs at inference-time incurs a significant computational cost. This paper proposes a new training and inference paradigm for re-ranking. We propose to finetune a pretrained encoder-decoder model in the form of document-to-query generation. Subsequently, we show that this encoder-decoder architecture can be decomposed into a decoder-only language model during inference. This results in significant inference time speedups since the decoder-only architecture only needs to learn to interpret static encoder embeddings during inference. Our experiments show that this new paradigm achieves results that are comparable to the more expensive cross-attention ranking approaches while being up to 6.8X faster. We believe this work paves the way for more efficient neural rankers that leverage large pretrained models.
null
null
10.18653/v1/2022.findings-acl.295
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,220
inproceedings
deutsch-roth-2022-benchmarking
Benchmarking Answer Verification Methods for Question Answering-Based Summarization Evaluation Metrics
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.296/
Deutsch, Daniel and Roth, Dan
Findings of the Association for Computational Linguistics: ACL 2022
3759--3765
Question answering-based summarization evaluation metrics must automatically determine whether the QA model's prediction is correct or not, a task known as answer verification. In this work, we benchmark the lexical answer verification methods which have been used by current QA-based metrics as well as two more sophisticated text comparison methods, BERTScore and LERC. We find that LERC out-performs the other methods in some settings while remaining statistically indistinguishable from lexical overlap in others. However, our experiments reveal that improved verification performance does not necessarily translate to overall QA-based metric quality: In some scenarios, using a worse verification method {---} or using none at all {---} has comparable performance to using the best verification method, a result that we attribute to properties of the datasets.
null
null
10.18653/v1/2022.findings-acl.296
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,221
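The abstract above benchmarks lexical answer verification methods for QA-based metrics. For concreteness, here is a plain-Python sketch of the standard token-level F1 overlap that such lexical verification typically relies on; common extras such as lowercasing beyond whitespace tokenization, punctuation stripping, and article removal are omitted here for brevity.

```python
def token_f1(prediction, reference):
    """Token-level F1 overlap between a predicted answer and a reference."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    ref_counts = {}
    for t in ref:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)
```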
inproceedings
jin-etal-2022-prior
Prior Knowledge and Memory Enriched Transformer for Sign Language Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.297/
Jin, Tao and Zhao, Zhou and Zhang, Meng and Zeng, Xingshan
Findings of the Association for Computational Linguistics: ACL 2022
3766--3775
This paper attacks the challenging problem of sign language translation (SLT), which involves not only visual and textual understanding but also additional prior knowledge learning (i.e. performing style, syntax). However, the majority of existing methods with vanilla encoder-decoder structures fail to sufficiently explore all of them. Based on this concern, we propose a novel method called Prior knowledge and memory Enriched Transformer (PET) for SLT, which incorporates the auxiliary information into vanilla transformer. Concretely, we develop gated interactive multi-head attention which associates the multimodal representation and global signing style with adaptive gated functions. One Part-of-Speech (POS) sequence generator relies on the associated information to predict the global syntactic structure, which is thereafter leveraged to guide the sentence generation. Besides, considering that the visual-textual context information, and additional auxiliary knowledge of a word may appear in more than one video, we design a multi-stream memory structure to obtain higher-quality translations, which stores the detailed correspondence between a word and its various relevant information, leading to a more comprehensive understanding for each word. We conduct extensive empirical studies on RWTH-PHOENIX-Weather-2014 dataset with both signer-dependent and signer-independent conditions. The quantitative and qualitative experimental results comprehensively reveal the effectiveness of PET.
null
null
10.18653/v1/2022.findings-acl.297
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,222
inproceedings
kogkalidis-wijnholds-2022-discontinuous
Discontinuous Constituency and {BERT}: A Case Study of {D}utch
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.298/
Kogkalidis, Konstantinos and Wijnholds, Gijs
Findings of the Association for Computational Linguistics: ACL 2022
3776--3785
In this paper, we set out to quantify the syntactic capacity of BERT in the evaluation regime of non-context free patterns, as occurring in Dutch. We devise a test suite based on a mildly context-sensitive formalism, from which we derive grammars that capture the linguistic phenomena of control verb nesting and verb raising. The grammars, paired with a small lexicon, provide us with a large collection of naturalistic utterances, annotated with verb-subject pairings, that serve as the evaluation test bed for an attention-based span selection probe. Our results, backed by extensive analysis, suggest that the models investigated fail in the implicit acquisition of the dependencies examined.
null
null
10.18653/v1/2022.findings-acl.298
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,223
inproceedings
fourrier-sagot-2022-probing
Probing Multilingual Cognate Prediction Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.299/
Fourrier, Cl{\'e}mentine and Sagot, Beno{\^i}t
Findings of the Association for Computational Linguistics: ACL 2022
3786--3801
Character-based neural machine translation models have become the reference models for cognate prediction, a historical linguistics task. So far, all linguistic interpretations about latent information captured by such models have been based on external analysis (accuracy, raw results, errors). In this paper, we investigate what probing can tell us about both models and previous interpretations, and learn that though our models store linguistic and diachronic information, they do not achieve it in previously assumed ways.
null
null
10.18653/v1/2022.findings-acl.299
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,224
inproceedings
lee-vajjala-2022-neural
A Neural Pairwise Ranking Model for Readability Assessment
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.300/
Lee, Justin and Vajjala, Sowmya
Findings of the Association for Computational Linguistics: ACL 2022
3802--3813
Automatic Readability Assessment (ARA), the task of assigning a reading level to a text, is traditionally treated as a classification problem in NLP research. In this paper, we propose the first neural, pairwise ranking approach to ARA and compare it with existing classification, regression, and (non-neural) ranking methods. We establish the performance of our approach by conducting experiments with three English, one French and one Spanish datasets. We demonstrate that our approach performs well in monolingual single/cross corpus testing scenarios and achieves a zero-shot cross-lingual ranking accuracy of over 80{\%} for both French and Spanish when trained on English data. Additionally, we also release a new parallel bilingual readability dataset, that could be useful for future research. To our knowledge, this paper proposes the first neural pairwise ranking model for ARA, and shows the first results of cross-lingual, zero-shot evaluation of ARA with neural models.
null
null
10.18653/v1/2022.findings-acl.300
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,225
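The abstract above proposes a pairwise ranking formulation of readability assessment. The sketch below shows one common pairwise objective and a pairwise accuracy measure in Python; in the paper the scores would come from a neural text encoder, and the logistic loss form here is an assumption, not necessarily the authors' exact objective.

```python
import numpy as np

def pairwise_ranking_loss(score_harder, score_easier):
    """Pairwise logistic loss: the model score of the harder text should
    exceed the score of the easier text."""
    margin = score_harder - score_easier
    return float(np.log1p(np.exp(-margin)))

def ranking_accuracy(pair_scores):
    """Fraction of (harder, easier) score pairs ranked in the correct order."""
    return sum(h > e for h, e in pair_scores) / max(len(pair_scores), 1)
```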
inproceedings
saunders-etal-2022-first
First the Worst: Finding Better Gender Translations During Beam Search
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.301/
Saunders, Danielle and Sallis, Rosie and Byrne, Bill
Findings of the Association for Computational Linguistics: ACL 2022
3814--3823
Generating machine translations via beam search seeks the most likely output under a model. However, beam search has been shown to amplify demographic biases exhibited by a model. We aim to address this, focusing on gender bias resulting from systematic errors in grammatical gender translation. Almost all prior work on this problem adjusts the training data or the model itself. By contrast, our approach changes only the inference procedure. We constrain beam search to improve gender diversity in n-best lists, and rerank n-best lists using gender features obtained from the source sentence. Combining these strongly improves WinoMT gender translation accuracy for three language pairs without additional bilingual data or retraining. We also demonstrate our approach's utility for consistently gendering named entities, and its flexibility to handle new gendered language beyond the binary.
null
null
10.18653/v1/2022.findings-acl.301
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,226
inproceedings
shin-etal-2022-dialogue
Dialogue Summaries as Dialogue States ({DS}2), Template-Guided Summarization for Few-shot Dialogue State Tracking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.302/
Shin, Jamin and Yu, Hangyeol and Moon, Hyeongdon and Madotto, Andrea and Park, Juneyoung
Findings of the Association for Computational Linguistics: ACL 2022
3824--3846
Annotating task-oriented dialogues is notorious for the expensive and difficult data collection process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this paper, we hypothesize that dialogue summaries are essentially unstructured dialogue states; hence, we propose to reformulate dialogue state tracking as a dialogue summarization problem. To elaborate, we train a text-to-text language model with synthetic template-based dialogue summaries, generated by a set of rules from the dialogue states. Then, the dialogue states can be recovered by inversely applying the summary generation rules. We empirically show that our method DS2 outperforms previous works on few-shot DST in MultiWoZ 2.0 and 2.1, in both cross-domain and multi-domain settings. Our method also exhibits vast speedup during both training and inference as it can generate all states at once. Finally, based on our analysis, we discover that the naturalness of the summary templates plays a key role for successful training.
null
null
10.18653/v1/2022.findings-acl.302
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,227
inproceedings
ren-etal-2022-unsupervised
Unsupervised Preference-Aware Language Identification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.303/
Ren, Xingzhang and Yang, Baosong and Liu, Dayiheng and Zhang, Haibo and Lv, Xiaoyu and Yao, Liang and Xie, Jun
Findings of the Association for Computational Linguistics: ACL 2022
3847--3852
Recognizing the language of ambiguous texts has become a main challenge in language identification (LID). When using multilingual applications, users have their own language preferences, which can be regarded as external knowledge for LID. Nevertheless, current studies do not consider the inter-personal variations due to the lack of user annotated training data. To fill this gap, we introduce preference-aware LID and propose a novel unsupervised learning strategy. Concretely, we construct a pseudo training set for each user by extracting training samples from a standard LID corpus according to his/her historical language distribution. Besides, we contribute the first user labeled LID test set called {\textquotedblleft}U-LID{\textquotedblright}. Experimental results reveal that our model can incarnate user traits and significantly outperforms existing LID systems on handling ambiguous texts. Our code and benchmark have been released.
null
null
10.18653/v1/2022.findings-acl.303
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,228
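The abstract above builds a per-user pseudo training set by sampling from a standard LID corpus according to the user's historical language distribution. Here is a minimal Python sketch of that sampling step; the function name, proportional rounding, and sampling with replacement are assumptions for illustration.

```python
import random

def build_pseudo_training_set(corpus_by_language, user_language_counts,
                              size=1000, seed=0):
    """Sample a per-user pseudo training set so that language proportions
    follow the user's historical language distribution."""
    rng = random.Random(seed)
    total = sum(user_language_counts.values())
    pseudo = []
    for lang, count in user_language_counts.items():
        n = round(size * count / total)
        pool = corpus_by_language.get(lang, [])
        if pool:
            pseudo.extend((rng.choice(pool), lang) for _ in range(n))
    rng.shuffle(pseudo)
    return pseudo
```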
inproceedings
przybyla-shardlow-2022-using
Using {NLP} to quantify the environmental cost and diversity benefits of in-person {NLP} conferences
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.304/
Przyby{\l}a, Piotr and Shardlow, Matthew
Findings of the Association for Computational Linguistics: ACL 2022
3853--3863
The environmental costs of research are progressively important to the NLP community and their associated challenges are increasingly debated. In this work, we analyse the carbon cost (measured as CO2-equivalent) associated with journeys made by researchers attending in-person NLP conferences. We obtain the necessary data by text-mining all publications from the ACL anthology available at the time of the study (n=60,572) and extracting information about an author's affiliation, including their address. This allows us to estimate the corresponding carbon cost and compare it to previously known values for training large models. Further, we look at the benefits of in-person conferences by demonstrating that they can increase participation diversity by encouraging attendance from the region surrounding the host country. We show how the trade-off between carbon cost and diversity of an event depends on its location and type. Our aim is to foster further discussion on the best way to address the joint issue of emissions and diversity in the future.
null
null
10.18653/v1/2022.findings-acl.304
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,229
inproceedings
luo-etal-2022-interpretable
Interpretable Research Replication Prediction via Variational Contextual Consistency Sentence Masking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.305/
Luo, Tianyi and Meng, Rui and Wang, Xin and Liu, Yang
Findings of the Association for Computational Linguistics: ACL 2022
3864--3876
Research Replication Prediction (RRP) is the task of predicting whether a published research result can be replicated or not. Building an interpretable neural text classifier for RRP promotes the understanding of why a research paper is predicted as replicable or non-replicable and therefore makes its real-world application more reliable and trustworthy. However, the prior works on model interpretation mainly focused on improving the model interpretability at the word/phrase level, which are insufficient especially for long research papers in RRP. Furthermore, the existing methods cannot utilize a large unlabeled dataset to further improve the model interpretability. To address these limitations, we aim to build an interpretable neural model which can provide sentence-level explanations and apply a weakly supervised approach to further leverage the large corpus of unlabeled datasets to boost the interpretability in addition to improving prediction performance as existing works have done. In this work, we propose the Variational Contextual Consistency Sentence Masking (VCCSM) method to automatically extract key sentences based on the context in the classifier, using both labeled and unlabeled datasets. Results of our experiments on RRP along with European Convention of Human Rights (ECHR) datasets demonstrate that VCCSM is able to improve the model interpretability for the long document classification tasks using the area over the perturbation curve and post-hoc accuracy as evaluation metrics.
null
null
10.18653/v1/2022.findings-acl.305
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,230
inproceedings
jiang-etal-2022-chinese
{C}hinese Synesthesia Detection: New Dataset and Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.306/
Jiang, Xiaotong and Zhao, Qingqing and Long, Yunfei and Wang, Zhongqing
Findings of the Association for Computational Linguistics: ACL 2022
3877--3887
In this paper, we introduce a new task called synesthesia detection, which aims to extract the sensory word of a sentence, and to predict the original and synesthetic sensory modalities of the corresponding sensory word. Synesthesia refers to the description of perceptions in one sensory modality through concepts from other modalities. It involves not only a linguistic phenomenon, but also a cognitive phenomenon structuring human thought and action, which makes it a bridge between figurative linguistic phenomena and abstract cognition, and thus helpful for understanding deep semantics. To address this, we construct a large-scale human-annotated Chinese synesthesia dataset, which contains 7,217 annotated sentences accompanied by 187 sensory words. Based on this dataset, we propose a family of strong and representative baseline models. Upon these baselines, we further propose a radical-based neural network model to identify the boundary of the sensory word, and to jointly detect the original and synesthetic sensory modalities for the word. Through extensive experiments, we observe that the importance of the proposed task and dataset can be verified by the statistics and progressive performances. In addition, our proposed model achieves state-of-the-art results on the synesthesia dataset.
null
null
10.18653/v1/2022.findings-acl.306
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,231
inproceedings
zhang-etal-2022-rethinking
Rethinking Offensive Text Detection as a Multi-Hop Reasoning Problem
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.307/
Zhang, Qiang and Naradowsky, Jason and Miyao, Yusuke
Findings of the Association for Computational Linguistics: ACL 2022
3888--3905
We introduce the task of implicit offensive text detection in dialogues, where a statement may have either an offensive or non-offensive interpretation, depending on the listener and context. We argue that reasoning is crucial for understanding this broader class of offensive utterances, and release SLIGHT, a dataset to support research on this task. Experiments using the data show that state-of-the-art methods of offense detection perform poorly when asked to detect implicitly offensive statements, achieving only ${\sim} 11\%$ accuracy. In contrast to existing offensive text detection datasets, SLIGHT features human-annotated chains of reasoning which describe the mental process by which an offensive interpretation can be reached from each ambiguous statement. We explore the potential for a multi-hop reasoning approach by utilizing existing entailment models to score the probability of these chains, and show that even naive reasoning models can yield improved performance in most situations. Analysis of the chains provides insight into the human interpretation process and emphasizes the importance of incorporating additional commonsense knowledge.
null
null
10.18653/v1/2022.findings-acl.307
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,232
inproceedings
sun-etal-2022-safety
On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.308/
Sun, Hao and Xu, Guangxuan and Deng, Jiawen and Cheng, Jiale and Zheng, Chujie and Zhou, Hao and Peng, Nanyun and Zhu, Xiaoyan and Huang, Minlie
Findings of the Association for Computational Linguistics: ACL 2022
3906--3923
Dialogue safety problems severely limit the real-world deployment of neural conversational models and have attracted great research interest recently. However, dialogue safety problems remain under-defined and the corresponding dataset is scarce. We propose a taxonomy for dialogue safety specifically designed to capture unsafe behaviors in human-bot dialogue settings, with a focus on context-sensitive unsafety, which is under-explored in prior works. To spur research in this direction, we compile DiaSafety, a dataset with rich context-sensitive unsafe examples. Experiments show that existing safety guarding tools fail severely on our dataset. As a remedy, we train a dialogue safety classifier to provide a strong baseline for context-sensitive dialogue unsafety detection. With our classifier, we perform safety evaluations on popular conversational models and show that existing dialogue systems still exhibit concerning context-sensitive safety problems.
null
null
10.18653/v1/2022.findings-acl.308
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,233
inproceedings
tong-etal-2022-word
Word Segmentation by Separation Inference for {E}ast {A}sian Languages
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.309/
Tong, Yu and Guo, Jingzhi and Zhou, Jizhe and Chen, Ge and Zheng, Guokai
Findings of the Association for Computational Linguistics: ACL 2022
3924--3934
Chinese Word Segmentation (CWS) aims to divide a raw sentence into words through sequence labeling. Viewed in reverse, CWS can also be seen as a process of grouping a sequence of characters into a sequence of words. In this way, CWS is reformulated as a separation inference task over every adjacent character pair. Since each character is either connected or not connected to its neighbor, the tagging schema is simplified to two tags, {\textquotedblleft}Connection{\textquotedblright} (C) and {\textquotedblleft}NoConnection{\textquotedblright} (NC), and a bigram model tailored to {\textquotedblleft}C-NC{\textquotedblright} captures the separation state of every two consecutive characters. Our Separation Inference (SpIn) framework is evaluated on five public datasets, is demonstrated to work with both machine learning and deep learning models, and surpasses state-of-the-art performance for CWS in all experiments. Performance gains on Japanese Word Segmentation (JWS) and Korean Word Segmentation (KWS) further show that the framework is universal and effective for East Asian languages.
null
null
10.18653/v1/2022.findings-acl.309
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,234
inproceedings
li-etal-2022-unsupervised
Unsupervised {C}hinese Word Segmentation with {BERT} Oriented Probing and Transformation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.310/
Li, Wei and Song, Yuhan and Su, Qi and Shao, Yanqiu
Findings of the Association for Computational Linguistics: ACL 2022
3935--3940
Word segmentation is a fundamental step for understanding the Chinese language. Previous neural approaches for unsupervised Chinese Word Segmentation (CWS) exploit only shallow semantic information, which can miss important context. Large-scale pre-trained language models (PLMs) have achieved great success in many areas because of their ability to capture deep contextual semantic relations. In this paper, we propose to take advantage of the deep semantic information embedded in a PLM (e.g., BERT) in a self-training manner, which iteratively probes and transforms the semantic information in the PLM into explicit word segmentation ability. Extensive experimental results show that our proposed approach achieves state-of-the-art F1 scores on two CWS benchmark datasets.
null
null
10.18653/v1/2022.findings-acl.310
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,235
inproceedings
chen-etal-2022-e
{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.311/
Chen, Jiangjie and Xu, Rui and Fu, Ziquan and Shi, Wei and Li, Zhongqiao and Zhang, Xinbo and Sun, Changzhi and Li, Lei and Xiao, Yanghua and Zhou, Hao
Findings of the Association for Computational Linguistics: ACL 2022
3941--3955
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks for testing word analogy do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 problems in Chinese and 1,251 in English sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme that explains whether an analogy should be drawn, and manually annotate such explanations for every question and candidate answer. Empirical results suggest that this benchmark is very challenging for state-of-the-art models on both the explanation generation and analogical question answering tasks, which invites further research in this area.
null
null
10.18653/v1/2022.findings-acl.311
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,236
inproceedings
zhao-etal-2022-implicit
Implicit Relation Linking for Question Answering over Knowledge Graph
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.312/
Zhao, Yao and Huang, Jiacheng and Hu, Wei and Chen, Qijin and Qiu, XiaoXia and Huo, Chengfu and Ren, Weijun
Findings of the Association for Computational Linguistics: ACL 2022
3956--3968
Relation linking (RL) is a vital module in knowledge-based question answering (KBQA) systems. It aims to link the relations expressed in natural language (NL) to the corresponding ones in a knowledge graph (KG). Existing methods mainly rely on the textual similarities between NL and KG to build relation links. Due to the ambiguity of NL and the incompleteness of KG, many relations in NL are implicitly expressed, and may not link to a single relation in KG, which challenges current methods. In this paper, we propose an implicit RL method called ImRL, which links relation phrases in NL to relation paths in KG. To find proper relation paths, we propose a novel path ranking model that aligns not only textual information in the word embedding space but also structural information in the KG embedding space between relation phrases in NL and relation paths in KG. In addition, we leverage a gated mechanism with attention to inject prior knowledge from external paraphrase dictionaries to address relation phrases with vague meanings. Our experiments on two benchmark datasets and a newly-created dataset show that ImRL significantly outperforms several state-of-the-art methods, especially for implicit RL.
null
null
10.18653/v1/2022.findings-acl.312
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,237
inproceedings
wan-etal-2022-attention
Attention Mechanism with Energy-Friendly Operations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.313/
Wan, Yu and Yang, Baosong and Liu, Dayiheng and Xiao, Rong and Wong, Derek and Zhang, Haibo and Chen, Boxing and Chao, Lidia
Findings of the Association for Computational Linguistics: ACL 2022
3969--3976
The attention mechanism has become the dominant module in natural language processing models. It is computationally intensive and depends on massive power-hungry multiplications. In this paper, we rethink variants of the attention mechanism from the perspective of energy consumption. After concluding that the energy costs of several energy-friendly operations are far lower than those of their multiplication counterparts, we build a novel attention model by replacing multiplications with either selective operations or additions. Empirical results on three machine translation tasks demonstrate that, compared with the vanilla model, the proposed model achieves competitive accuracy while saving 99{\%} of the energy during alignment calculation and 66{\%} over the whole attention procedure. Our code will be released upon acceptance.
null
null
10.18653/v1/2022.findings-acl.313
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,238
inproceedings
yamakoshi-etal-2022-probing
Probing {BERT}'s priors with serial reproduction chains
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.314/
Yamakoshi, Takateru and Griffiths, Thomas and Hawkins, Robert
Findings of the Association for Computational Linguistics: ACL 2022
3977--3992
Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT. The MLM objective yields a dependency network with no guarantee of consistent conditional distributions, posing a problem for naive approaches. Drawing from theories of iterated learning in cognitive science, we explore the use of \textit{serial reproduction chains} to sample from BERT's priors. In particular, we observe that a unique and consistent estimator of the ground-truth joint distribution is given by a Generative Stochastic Network (GSN) sampler, which randomly selects which token to mask and reconstruct on each step. We show that the lexical and syntactic statistics of sentences from GSN chains closely match the ground-truth corpus distribution and perform better than other methods in a large corpus of naturalness judgments. Our findings establish a firmer theoretical foundation for bottom-up probing and highlight richer deviations from human priors.
null
null
10.18653/v1/2022.findings-acl.314
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,239
inproceedings
zhang-etal-2022-interpreting
Interpreting the Robustness of Neural {NLP} Models to Textual Perturbations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.315/
Zhang, Yunxiang and Pan, Liangming and Tan, Samson and Kan, Min-Yen
Findings of the Association for Computational Linguistics: ACL 2022
3993--4007
Modern Natural Language Processing (NLP) models are known to be sensitive to input perturbations and their performance can decrease when applied to real-world, noisy data. However, it is still unclear why models are less robust to some perturbations than others. In this work, we test the hypothesis that the extent to which a model is affected by an unseen textual perturbation (robustness) can be explained by the learnability of the perturbation (defined as how well the model learns to identify the perturbation with a small amount of evidence). We further give a causal justification for the learnability metric. We conduct extensive experiments with four prominent NLP models {---} TextRNN, BERT, RoBERTa and XLNet {---} over eight types of textual perturbations on three datasets. We show that a model which is better at identifying a perturbation (higher learnability) becomes worse at ignoring such a perturbation at test time (lower robustness), providing empirical support for our hypothesis.
null
null
10.18653/v1/2022.findings-acl.315
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,240
inproceedings
xin-etal-2022-zero
Zero-Shot Dense Retrieval with Momentum Adversarial Domain Invariant Representations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.316/
Xin, Ji and Xiong, Chenyan and Srinivasan, Ashwin and Sharma, Ankita and Jose, Damien and Bennett, Paul
Findings of the Association for Computational Linguistics: ACL 2022
4008--4020
Dense retrieval (DR) methods conduct text retrieval by first encoding texts in the embedding space and then matching them by nearest neighbor search. This requires strong locality properties from the representation space, e.g., close allocations of each small group of relevant texts, which are hard to generalize to domains without sufficient training data. In this paper, we aim to improve the generalization ability of DR models from source training domains with rich supervision signals to target domains without any relevance label, in the zero-shot setting. To achieve that, we propose Momentum adversarial Domain Invariant Representation learning (MoDIR), which introduces a momentum method to train a domain classifier that distinguishes source versus target domains, and then adversarially updates the DR encoder to learn domain invariant representations. Our experiments show that MoDIR robustly outperforms its baselines on 10+ ranking datasets collected in the BEIR benchmark in the zero-shot setup, with more than 10{\%} relative gains on datasets with enough sensitivity for DR models' evaluation. Source code is available at \url{https://github.com/ji-xin/modir}.
null
null
10.18653/v1/2022.findings-acl.316
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,241
inproceedings
campagna-etal-2022-shot
A Few-Shot Semantic Parser for {W}izard-of-{O}z Dialogues with the Precise {T}hing{T}alk Representation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.317/
Campagna, Giovanni and Semnani, Sina and Kearns, Ryan and Koba Sato, Lucas Jun and Xu, Silei and Lam, Monica
Findings of the Association for Computational Linguistics: ACL 2022
4021--4034
Previous attempts to build effective semantic parsers for Wizard-of-Oz (WOZ) conversations suffer from the difficulty in acquiring a high-quality, manually annotated training set. Approaches based only on dialogue synthesis are insufficient, as dialogues generated from state-machine based models are poor approximations of real-life conversations. Furthermore, previously proposed dialogue state representations are ambiguous and lack the precision necessary for building an effective agent. This paper proposes a new dialogue representation and a sample-efficient methodology that can predict precise dialogue states in WOZ conversations. We extended the ThingTalk representation to capture all information an agent needs to respond properly. Our training strategy is sample-efficient: we combine (1) few-shot data sparsely sampling the full dialogue space and (2) synthesized data covering a subset space of dialogues generated by a succinct state-based dialogue model. The completeness of the extended ThingTalk language is demonstrated with a fully operational agent, which is also used in training data synthesis. We demonstrate the effectiveness of our methodology on MultiWOZ 3.0, a reannotation of the MultiWOZ 2.1 dataset in ThingTalk. ThingTalk can represent 98{\%} of the test turns, while the simulator can emulate 85{\%} of the validation set. We train a contextual semantic parser using our strategy, and obtain 79{\%} turn-by-turn exact match accuracy on the reannotated test set.
null
null
10.18653/v1/2022.findings-acl.317
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,242
inproceedings
yang-etal-2022-gcpg
{GCPG}: A General Framework for Controllable Paraphrase Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.318/
Yang, Kexin and Liu, Dayiheng and Lei, Wenqiang and Yang, Baosong and Zhang, Haibo and Zhao, Xue and Yao, Wenqing and Chen, Boxing
Findings of the Association for Computational Linguistics: ACL 2022
4035--4047
Controllable paraphrase generation (CPG) incorporates various external conditions to obtain desirable paraphrases. However, existing works each highlight only a single condition under the two indispensable aspects of CPG (i.e., lexical and syntactical CPG), lacking a unified setting in which to explore and analyze their effectiveness. In this paper, we propose a general controllable paraphrase generation framework (GCPG), which represents both lexical and syntactical conditions as text sequences and uniformly processes them in an encoder-decoder paradigm. Under GCPG, we reconstruct the commonly adopted lexical condition (i.e., Keywords) and syntactical conditions (i.e., Part-Of-Speech sequence, Constituent Tree, Masked Template and Sentential Exemplar) and study combinations of the two types. In particular, for the Sentential Exemplar condition, we propose a novel exemplar construction method {---} Syntax-Similarity based Exemplar (SSE). SSE retrieves a syntactically similar but lexically different sentence as the exemplar for each target sentence, avoiding the problem of copying words from the exemplar. Extensive experiments demonstrate that GCPG with SSE achieves state-of-the-art performance on two popular benchmarks. In addition, combining lexical and syntactical conditions shows significant controllability in paraphrase generation, and these empirical results could provide novel insights into user-oriented paraphrasing.
null
null
10.18653/v1/2022.findings-acl.318
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,243
inproceedings
gritta-etal-2022-crossaligner
{C}ross{A}ligner {\&} Co: Zero-Shot Transfer Methods for Task-Oriented Cross-lingual Natural Language Understanding
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.319/
Gritta, Milan and Hu, Ruoyu and Iacobacci, Ignacio
Findings of the Association for Computational Linguistics: ACL 2022
4048--4061
Task-oriented personal assistants enable people to interact with a host of devices and services using natural language. One of the challenges of making neural dialogue systems available to more users is the lack of training data for all but a few languages. Zero-shot methods try to solve this issue by acquiring task knowledge in a high-resource language such as English with the aim of transferring it to the low-resource language(s). To this end, we introduce CrossAligner, the principal method of a variety of effective approaches for zero-shot cross-lingual transfer based on learning alignment from unlabelled parallel data. We present a quantitative analysis of individual methods as well as their weighted combinations, several of which exceed state-of-the-art (SOTA) scores as evaluated across nine languages, fifteen test sets and three benchmark multilingual datasets. A detailed qualitative error analysis of the best methods shows that our fine-tuned language models can zero-shot transfer the task knowledge better than anticipated.
null
null
10.18653/v1/2022.findings-acl.319
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,244
inproceedings
ilinykh-dobnik-2022-attention
Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.320/
Ilinykh, Nikolai and Dobnik, Simon
Findings of the Association for Computational Linguistics: ACL 2022
4062--4073
We explore how a multi-modal transformer trained to generate longer image descriptions learns syntactic and semantic representations of entities and relations grounded in objects, at the level of masked self-attention (text generation) and cross-modal attention (information fusion). We observe that cross-attention learns the visual grounding of noun phrases into objects and high-level semantic information about spatial relations, while text-to-text attention captures low-level syntactic knowledge between words. From this we conclude that language models in a multi-modal task learn different semantic information about objects and relations cross-modally and uni-modally (text-only). Our code is available here: \url{https://github.com/GU-CLASP/attention-as-grounding}.
null
null
10.18653/v1/2022.findings-acl.320
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,245
inproceedings
aepli-sennrich-2022-improving
Improving Zero-Shot Cross-lingual Transfer Between Closely Related Languages by Injecting Character-Level Noise
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.321/
Aepli, No{\"emi and Sennrich, Rico
Findings of the Association for Computational Linguistics: ACL 2022
4074--4083
Cross-lingual transfer between a high-resource language and its dialects or closely related language varieties should be facilitated by their similarity. However, current approaches that operate in the embedding space do not take surface similarity into account. This work presents a simple yet effective strategy to improve cross-lingual transfer between closely related varieties. We propose to augment the data of the high-resource source language with character-level noise to make the model more robust towards spelling variations. Our strategy shows consistent improvements over several languages and tasks: Zero-shot transfer of POS tagging and topic identification between language varieties from the Finnic, West and North Germanic, and Western Romance language branches. Our work provides evidence for the usefulness of simple surface-level noise in improving transfer between language varieties.
null
null
10.18653/v1/2022.findings-acl.321
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,246
inproceedings
li-etal-2022-structural
Structural Supervision for Word Alignment and Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.322/
Li, Lei and Fan, Kai and Li, Hongjia and Yuan, Chun
Findings of the Association for Computational Linguistics: ACL 2022
4084--4094
Syntactic structure has long been argued to be potentially useful for enforcing accurate word alignment and improving generalization performance of machine translation. Unfortunately, existing wisdom demonstrates its significance by considering only the syntactic structure of source tokens, neglecting the rich structural information from target tokens and the structural similarity between the source and target sentences. In this work, we propose to incorporate the syntactic structure of both source and target tokens into the encoder-decoder framework, tightly correlating the internal logic of word alignment and machine translation for multi-task learning. Particularly, we won't leverage any annotated syntactic graph of the target side during training, so we introduce Dynamic Graph Convolution Networks (DGCN) on observed target tokens to sequentially and simultaneously generate the target tokens and the corresponding syntactic graphs, and further guide the word alignment. On this basis, Hierarchical Graph Random Walks (HGRW) are performed on the syntactic graphs of both source and target sides, for incorporating structured constraints on machine translation outputs. Experiments on four publicly available language pairs verify that our method is highly effective in capturing syntactic structure in different languages, consistently outperforming baselines in alignment accuracy and demonstrating promising results in translation quality.
null
null
10.18653/v1/2022.findings-acl.322
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,247
inproceedings
zhang-etal-2022-focus
Focus on the Action: Learning to Highlight and Summarize Jointly for Email To-Do Items Summarization
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.323/
Zhang, Kexun and Chen, Jiaao and Yang, Diyi
Findings of the Association for Computational Linguistics: ACL 2022
4095--4106
Automatic email to-do item generation is the task of generating to-do items from a given email to help people get an overview of their emails and schedule daily work. Different from prior research on email summarization, to-do item generation focuses on generating action mentions to provide more structured summaries of email text. Prior work either requires a large amount of annotation for key sentences with potential actions or fails to pay attention to nuanced actions in these unstructured emails, and thus often leads to unfaithful summaries. To fill these gaps, we propose a simple and effective learning-to-highlight-and-summarize framework (LHS) that learns to identify the most salient text and actions, and incorporates these structured representations to generate more faithful to-do items. Experiments show that our LHS model outperforms the baselines and achieves state-of-the-art performance in terms of both quantitative evaluation and human judgement. We also discuss specific challenges that current models face in email to-do summarization.
null
null
10.18653/v1/2022.findings-acl.323
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,248
inproceedings
nagata-etal-2022-exploring
Exploring the Capacity of a Large-scale Masked Language Model to Recognize Grammatical Errors
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.324/
Nagata, Ryo and Kimura, Manabu and Hanawa, Kazuaki
Findings of the Association for Computational Linguistics: ACL 2022
4107--4118
In this paper, we explore the capacity of a language model-based method for grammatical error detection in detail. We first show that 5 to 10{\%} of training data are enough for a BERT-based error detection method to achieve performance equivalent to what a non-language model-based method can achieve with the full training data; recall improves much faster with respect to training data size in the BERT-based method than in the non-language model method. This suggests that (i) the BERT-based method should have a good knowledge of the grammar required to recognize certain types of error and that (ii) it can transform the knowledge into error detection rules by fine-tuning with few training samples, which explains its high generalization ability in grammatical error detection. We further show with pseudo error data that it actually exhibits such nice properties in learning rules for recognizing various types of error. Finally, based on these findings, we discuss a cost-effective method for detecting grammatical errors with feedback comments explaining relevant grammatical rules to learners.
null
null
10.18653/v1/2022.findings-acl.324
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,249
inproceedings
gidiotis-tsoumakas-2022-trust
Should We Trust This Summary? {B}ayesian Abstractive Summarization to The Rescue
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.325/
Gidiotis, Alexios and Tsoumakas, Grigorios
Findings of the Association for Computational Linguistics: ACL 2022
4119--4131
We explore the notion of uncertainty in the context of modern abstractive summarization models, using the tools of Bayesian Deep Learning. Our approach approximates Bayesian inference by first extending state-of-the-art summarization models with Monte Carlo dropout and then using them to perform multiple stochastic forward passes. Based on Bayesian inference we are able to effectively quantify uncertainty at prediction time. Having a reliable uncertainty measure, we can improve the experience of the end user by filtering out generated summaries of high uncertainty. Furthermore, uncertainty estimation could be used as a criterion for selecting samples for annotation, and can be paired nicely with active learning and human-in-the-loop approaches. Finally, Bayesian inference enables us to find a Bayesian summary which performs better than a deterministic one and is more robust to uncertainty. In practice, we show that our Variational Bayesian equivalents of BART and PEGASUS can outperform their deterministic counterparts on multiple benchmark datasets.
null
null
10.18653/v1/2022.findings-acl.325
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,250
inproceedings
zhu-etal-2022-data
On the data requirements of probing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.326/
Zhu, Zining and Wang, Jixuan and Li, Bai and Rudzicz, Frank
Findings of the Association for Computational Linguistics: ACL 2022
4132--4147
As large and powerful neural language models are developed, researchers have been increasingly interested in developing diagnostic tools to probe them. There are many papers with conclusions of the form {\textquotedblleft}observation $X$ is found in model $Y${\textquotedblright}, using their own datasets with varying sizes. Larger probing datasets bring more reliability, but are also expensive to collect. There is yet to be a quantitative method for estimating reasonable probing dataset sizes. We tackle this omission in the context of comparing two probing configurations: after we have collected a small dataset from a pilot study, how many additional data samples are sufficient to distinguish two different configurations? We present a novel method to estimate the required number of data samples in such experiments and, across several case studies, we verify that our estimations have sufficient statistical power. Our framework helps to systematically construct probing datasets to diagnose neural NLP models.
null
null
10.18653/v1/2022.findings-acl.326
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,251
inproceedings
fomicheva-etal-2022-translation
Translation Error Detection as Rationale Extraction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.327/
Fomicheva, Marina and Specia, Lucia and Aletras, Nikolaos
Findings of the Association for Computational Linguistics: ACL 2022
4148--4159
Recent Quality Estimation (QE) models based on multilingual pre-trained representations have achieved very competitive results in predicting the overall quality of translated sentences. However, detecting specifically which translated words are incorrect is a more challenging task, especially when dealing with limited amounts of training data. We hypothesize that, not unlike humans, successful QE models rely on translation errors to predict overall sentence quality. By exploring a set of feature attribution methods that assign relevance scores to the inputs to explain model predictions, we study the behaviour of state-of-the-art sentence-level QE models and show that explanations (i.e. rationales) extracted from these models can indeed be used to detect translation errors. We therefore (i) introduce a novel semi-supervised method for word-level QE; and (ii) propose to use the QE task as a new benchmark for evaluating the plausibility of feature attribution, i.e. how interpretable model explanations are to humans.
null
null
10.18653/v1/2022.findings-acl.327
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,252
inproceedings
lin-etal-2022-towards
Towards Collaborative Neural-Symbolic Graph Semantic Parsing via Uncertainty
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.328/
Lin, Zi and Liu, Jeremiah Zhe and Shang, Jingbo
Findings of the Association for Computational Linguistics: ACL 2022
4160--4173
Recent work in task-independent graph semantic parsing has shifted from grammar-based symbolic approaches to neural models, showing strong performance on different types of meaning representations. However, it is still unclear what the limitations of these neural parsers are, and whether these limitations can be compensated for by incorporating symbolic knowledge into model inference. In this paper, we address these questions by taking English Resource Grammar (ERG) parsing as a case study. Specifically, we first develop a state-of-the-art, T5-based neural ERG parser, and conduct detailed analyses of parser performance within fine-grained linguistic categories. The neural parser attains superior performance on the in-distribution test set, but degrades significantly in long-tail situations, while the symbolic parser performs more robustly. To address this, we further propose a simple yet principled collaborative framework for neural-symbolic semantic parsing, by designing a decision criterion for beam search that incorporates prior knowledge from a symbolic parser and accounts for model uncertainty. Experimental results show that the proposed framework yields comprehensive improvements over the neural baseline across long-tail categories, yielding the best known Smatch score (97.01) on the well-studied DeepBank benchmark.
null
null
10.18653/v1/2022.findings-acl.328
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,253
inproceedings
wang-shang-2022-towards
Towards Few-shot Entity Recognition in Document Images: A Label-aware Sequence-to-Sequence Framework
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.329/
Wang, Zilong and Shang, Jingbo
Findings of the Association for Computational Linguistics: ACL 2022
4174--4186
Entity recognition is a fundamental task in understanding document images. Traditional sequence labeling frameworks treat the entity types as class IDs and rely on extensive data and high-quality annotations to learn semantics which are typically expensive in practice. In this paper, we aim to build an entity recognition model requiring only a few shots of annotated document images. To overcome the data limitation, we propose to leverage the label surface names to better inform the model of the target entity type semantics and also embed the labels into the spatial embedding space to capture the spatial correspondence between regions and labels. Specifically, we go beyond sequence labeling and develop a novel label-aware seq2seq framework, LASER. The proposed model follows a new labeling scheme that generates the label surface names word-by-word explicitly after generating the entities. During training, LASER refines the label semantics by updating the label surface name representations and also strengthens the label-region correlation. In this way, LASER recognizes the entities from document images through both semantic and layout correspondence. Extensive experiments on two benchmark datasets demonstrate the superiority of LASER under the few-shot setting.
null
null
10.18653/v1/2022.findings-acl.329
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,254
inproceedings
jiang-etal-2022-length
On Length Divergence Bias in Textual Matching Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.330/
Jiang, Lan and Lyu, Tianshu and Lin, Yankai and Chong, Meng and Lyu, Xiaoyong and Yin, Dawei
Findings of the Association for Computational Linguistics: ACL 2022
4187--4193
Despite the remarkable success deep models have achieved in Textual Matching (TM) tasks, it remains unclear whether they truly understand language or measure the semantic similarity of texts by exploiting statistical bias in datasets. In this work, we provide a new perspective on this issue {---} via the length divergence bias. We find that the length divergence heuristic widely exists in prevalent TM datasets, providing direct cues for prediction. To determine whether TM models have adopted such a heuristic, we introduce an adversarial evaluation scheme that invalidates the heuristic. In this adversarial setting, all TM models perform worse, indicating they have indeed adopted this heuristic. Through a well-designed probing experiment, we empirically validate that the bias of TM models can be attributed in part to extracting the text length information during training. To alleviate the length divergence bias, we propose an adversarial training method. The results demonstrate that we successfully improve the robustness and generalization ability of models at the same time.
null
null
10.18653/v1/2022.findings-acl.330
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,255
inproceedings
ghazarian-etal-2022-wrong
What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.331/
Ghazarian, Sarik and Hedayatnia, Behnam and Papangelis, Alexandros and Liu, Yang and Hakkani-Tur, Dilek
Findings of the Association for Computational Linguistics: ACL 2022
4194--4204
Accurate automatic evaluation metrics for open-domain dialogs are in high demand. Existing model-based metrics for system response evaluation are trained on human annotated data, which is cumbersome to collect. In this work, we propose to use information that can be automatically extracted from the next user utterance, such as its sentiment or whether the user explicitly ends the conversation, as a proxy to measure the quality of the previous system response. This allows us to train on a massive set of dialogs with weak supervision, without requiring manual system turn quality annotations. Experiments show that our model is comparable to models trained on human annotated data. Furthermore, our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users.
null
null
10.18653/v1/2022.findings-acl.331
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,256
inproceedings
akhtar-etal-2022-pubhealthtab
{P}ub{H}ealth{T}ab: {A} Public Health Table-based Dataset for Evidence-based Fact Checking
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.1/
Akhtar, Mubashara and Cocarascu, Oana and Simperl, Elena
Findings of the Association for Computational Linguistics: NAACL 2022
1--16
Inspired by human fact checkers, who use different types of evidence (e.g. tables, images, audio) in addition to text, several datasets with tabular evidence data have been released in recent years. Whilst the datasets encourage research on table fact-checking, they rely on information from restricted data sources, such as Wikipedia for creating claims and extracting evidence data, making the fact-checking process different from the real-world process used by fact checkers. In this paper, we introduce PubHealthTab, a table fact-checking dataset based on real world public health claims and noisy evidence tables from sources similar to those used by real fact checkers. We outline our approach for collecting evidence data from various websites and present an in-depth analysis of our dataset. Finally, we evaluate state-of-the-art table representation and pre-trained models fine-tuned on our dataset, achieving an overall $F_1$ score of 0.73.
null
null
10.18653/v1/2022.findings-naacl.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,258
inproceedings
spokoyny-etal-2022-masked
Masked Measurement Prediction: Learning to Jointly Predict Quantities and Units from Textual Context
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.2/
Spokoyny, Daniel and Lee, Ivan and Jin, Zhao and Berg-Kirkpatrick, Taylor
Findings of the Association for Computational Linguistics: NAACL 2022
17--29
Physical measurements constitute a large portion of numbers in academic papers, engineering reports, and web tables. Current benchmarks fall short of properly evaluating numeracy of pretrained language models on measurements, hindering research on developing new methods and applying them to numerical tasks. To that end, we introduce a novel task, Masked Measurement Prediction (MMP), where a model learns to reconstruct a number together with its associated unit given masked text. MMP is useful for both training new numerically informed models as well as evaluating numeracy of existing systems. To address this task, we introduce a new Generative Masked Measurement (GeMM) model that jointly learns to predict numbers along with their units. We perform fine-grained analyses comparing our model with various ablations and baselines. We use linear probing of traditional pretrained transformer models (RoBERTa) to show that they significantly underperform jointly trained number-unit models, highlighting the difficulty of this new task and the benefits of our proposed pretraining approach. We hope this framework accelerates the progress towards building more robust numerical reasoning systems in the future.
null
null
10.18653/v1/2022.findings-naacl.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,259
inproceedings
zhang-etal-2022-promptgen
{P}rompt{G}en: Automatically Generate Prompts using Generative Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.3/
Zhang, Yue and Fei, Hongliang and Li, Dingcheng and Li, Ping
Findings of the Association for Computational Linguistics: NAACL 2022
30--37
Recently, prompt learning has received significant attention, where downstream tasks are reformulated as a mask-filling task with the help of a textual prompt. The key point of prompt learning is finding the most appropriate prompt. This paper proposes a novel model, PromptGen, which can automatically generate prompts conditioned on the input sentence. PromptGen is the first work to consider dynamic prompt generation for knowledge probing based on a pre-trained generative model. To mitigate any label information leaking from the pre-trained generative model, given a generated prompt, we replace the query input with {\textquotedblleft}None{\textquotedblright} and require that this perturbed, context-free prompt cannot trigger the correct label. We evaluate our model on the LAMA knowledge probing benchmark, and show that PromptGen significantly outperforms other baselines.
null
null
10.18653/v1/2022.findings-naacl.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,260
inproceedings
yang-etal-2022-improving
Improving Conversational Recommendation Systems' Quality with Context-Aware Item Meta-Information
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.4/
Yang, Bowen and Han, Cong and Li, Yu and Zuo, Lei and Yu, Zhou
Findings of the Association for Computational Linguistics: NAACL 2022
38--48
A key challenge of Conversational Recommendation Systems (CRS) is to integrate the recommendation function and the dialog generation function smoothly. Previous works employ graph neural networks with external knowledge graphs (KG) to model individual recommendation items and integrate KGs with language models through attention mechanism for response generation. Although previous approaches prove effective, there is still room for improvement. For example, KG-based approaches only rely on entity relations and bag-of-words to recommend items and neglect the information in the conversational context. We propose to improve the usage of dialog context for both recommendation and response generation using an encoding architecture along with the self-attention mechanism of transformers. In this paper, we propose a simple yet effective architecture comprising a pre-trained language model (PLM) and an item metadata encoder to integrate the recommendation and the dialog generation better. The proposed item encoder learns to map item metadata to embeddings reflecting the rich information of the item, which can be matched with dialog context. The PLM then consumes the context-aware item embeddings and dialog context to generate high-quality recommendations and responses. Experimental results on the benchmark dataset ReDial show that our model obtains state-of-the-art results on both recommendation and response generation tasks.
null
null
10.18653/v1/2022.findings-naacl.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,261
inproceedings
yang-etal-2022-seqzero
{SEQZERO}: Few-shot Compositional Semantic Parsing with Sequential Prompts and Zero-shot Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.5/
Yang, Jingfeng and Jiang, Haoming and Yin, Qingyu and Zhang, Danqing and Yin, Bing and Yang, Diyi
Findings of the Association for Computational Linguistics: NAACL 2022
49--60
Recent research has shown promising results from combining pretrained language models (LMs) with canonical utterances for few-shot semantic parsing. Canonical utterances are often lengthy and complex due to the compositional structure of formal languages, and learning to generate them requires a significant amount of data to reach high performance. Fine-tuned with only few-shot samples, the LMs can easily forget pretrained knowledge, overfit spurious biases, and suffer from compositionally out-of-distribution generalization errors. To tackle these issues, we propose a novel few-shot semantic parsing method {--} SEQZERO. SEQZERO decomposes the problem into a sequence of sub-problems, which correspond to the sub-clauses of the formal language. Based on the decomposition, the LMs only need to generate short answers using prompts for predicting sub-clauses. Thus, SEQZERO avoids generating a long canonical utterance at once. Moreover, SEQZERO employs not only a few-shot model but also a zero-shot model to alleviate overfitting. In particular, SEQZERO brings out the merits of both models via an ensemble equipped with our proposed constrained rescaling. SEQZERO achieves SOTA performance among BART-based models on GeoQuery and EcommerceQuery, two few-shot datasets with compositional data splits.
null
null
10.18653/v1/2022.findings-naacl.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,262
inproceedings
wadden-etal-2022-multivers
{M}ulti{V}er{S}: Improving scientific claim verification with weak supervision and full-document context
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.6/
Wadden, David and Lo, Kyle and Wang, Lucy Lu and Cohan, Arman and Beltagy, Iz and Hajishirzi, Hannaneh
Findings of the Association for Computational Linguistics: NAACL 2022
61--76
The scientific claim verification task requires an NLP system to label scientific documents which Support or Refute an input claim, and to select evidentiary sentences (or rationales) justifying each predicted label. In this work, we present MultiVerS, which predicts a fact-checking label and identifies rationales in a multitask fashion based on a shared encoding of the claim and full document context. This approach accomplishes two key modeling goals. First, it ensures that all relevant contextual information is incorporated into each labeling decision. Second, it enables the model to learn from instances annotated with a document-level fact-checking label, but lacking sentence-level rationales. This allows MultiVerS to perform weakly-supervised domain adaptation by training on scientific documents labeled using high-precision heuristics. Our approach outperforms two competitive baselines on three scientific claim verification datasets, with particularly strong performance in zero / few-shot domain adaptation experiments. Our code and data are available at \url{https://github.com/dwadden/multivers}.
null
null
10.18653/v1/2022.findings-naacl.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,263
inproceedings
kornilova-etal-2022-item
An Item Response Theory Framework for Persuasion
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.7/
Kornilova, Anastassia and Eidelman, Vladimir and Douglass, Daniel
Findings of the Association for Computational Linguistics: NAACL 2022
77--86
In this paper, we apply Item Response Theory, popular in education and political science research, to the analysis of argument persuasiveness in language. We empirically evaluate the model's performance on three datasets, including a novel dataset in the area of political advocacy. We show the advantages of separating these components under several style and content representations, including evaluating the ability of the speaker embeddings generated by the model to parallel real-world observations about persuadability.
null
null
10.18653/v1/2022.findings-naacl.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,264
inproceedings
meng-etal-2022-self
Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.8/
Meng, Zhao and Dong, Yihan and Sachan, Mrinmaya and Wattenhofer, Roger
Findings of the Association for Computational Linguistics: NAACL 2022
87--101
In this paper, we present an approach to improve the robustness of BERT language models against word substitution-based adversarial attacks by leveraging adversarial perturbations for self-supervised contrastive learning. We create a word-level adversarial attack generating hard positives on-the-fly as adversarial examples during contrastive learning. In contrast to previous works, our method improves model robustness without using any labeled data. Experimental results show that our method improves robustness of BERT against four different word substitution-based adversarial attacks, and combining our method with adversarial training gives higher robustness than adversarial training alone. As our method improves the robustness of BERT purely with unlabeled data, it opens up the possibility of using large text datasets to train robust language models against word substitution-based adversarial attacks.
null
null
10.18653/v1/2022.findings-naacl.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,265