Dataset columns (name, dtype, value summary):

  entry_type           stringclasses   4 values
  citation_key         stringlengths   10-110 chars
  title                stringlengths   6-276 chars
  editor               stringclasses   723 values
  month                stringclasses   69 values
  year                 stringdate      1963-01-01 to 2022-01-01
  address              stringclasses   202 values
  publisher            stringclasses   41 values
  url                  stringlengths   34-62 chars
  author               stringlengths   6-2.07k chars
  booktitle            stringclasses   861 values
  pages                stringlengths   1-12 chars
  abstract             stringlengths   302-2.4k chars
  journal              stringclasses   5 values
  volume               stringclasses   24 values
  doi                  stringlengths   20-39 chars
  n                    stringclasses   3 values
  wer                  stringclasses   1 value
  uas                  null
  language             stringclasses   3 values
  isbn                 stringclasses   34 values
  recall               null
  number               stringclasses   8 values
  a                    null
  b                    null
  c                    null
  k                    null
  f1                   stringclasses   4 values
  r                    stringclasses   2 values
  mci                  stringclasses   1 value
  p                    stringclasses   2 values
  sd                   stringclasses   1 value
  female               stringclasses   0 values
  m                    stringclasses   0 values
  food                 stringclasses   1 value
  f                    stringclasses   1 value
  note                 stringclasses   20 values
  __index_level_0__    int64           22k-106k
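The column summary above follows the layout of a Hugging Face `datasets` preview, so the same schema can be inspected programmatically. A minimal sketch, assuming the table is hosted on the Hub (the repository id `user/acl-anthology-bib` is a placeholder, not the real name) and that empty cells come back as None:

```python
# Sketch only: load the dataset and check that it matches the schema listed above.
# The repository id below is a placeholder; substitute the actual dataset name.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")  # hypothetical repo id

print(ds.column_names)                          # should list entry_type, citation_key, ...
print(Counter(ds["entry_type"]).most_common())  # entry_type has 4 classes per the summary

row = ds[0]  # one record, as a plain dict
non_empty = {k: v for k, v in row.items() if v is not None}
print(non_empty["citation_key"], non_empty.get("doi"))
```

The sample rows below follow the schema order above; fields without a value are omitted.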
inproceedings
harel-canada-etal-2022-sibylvariant
Sibylvariant Transformations for Robust Text Classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.140/
Harel-Canada, Fabrice and Gulzar, Muhammad Ali and Peng, Nanyun and Kim, Miryung
Findings of the Association for Computational Linguistics: ACL 2022
1771--1788
The vast majority of text transformation techniques in NLP are inherently limited in their ability to expand input space coverage due to an implicit constraint to preserve the original class label. In this work, we propose the notion of sibylvariance (SIB) to describe the broader set of transforms that relax the label-preserving constraint, knowably vary the expected class, and lead to significantly more diverse input distributions. We offer a unified framework to organize all data transformations, including two types of SIB: (1) Transmutations convert one discrete kind into another, (2) Mixture Mutations blend two or more classes together. To explore the role of sibylvariance within NLP, we implemented 41 text transformations, including several novel techniques like Concept2Sentence and SentMix. Sibylvariance also enables a unique form of adaptive training that generates new input mixtures for the most confused class pairs, challenging the learner to differentiate with greater nuance. Our experiments on six benchmark datasets strongly support the efficacy of sibylvariance for generalization performance, defect detection, and adversarial robustness.
doi: 10.18653/v1/2022.findings-acl.140
__index_level_0__: 26,065
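Each row mirrors a BibTeX record: an entry type, a citation key, and the standard bibliographic fields, while the metric-like columns (f1, p, r, and so on) are empty for most entries. A row can therefore be serialized back into a .bib entry. The sketch below is illustrative only; it assumes the row is a plain dict with None for empty cells, and `row_to_bibtex` is a made-up helper name, not part of any library.

```python
# Sketch: turn one row (a dict shaped like the record above) back into a BibTeX entry,
# skipping empty (None) fields. Field order follows common .bib conventions.
BIB_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume", "number",
    "pages", "month", "year", "address", "publisher", "url", "doi", "isbn", "note",
]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value:  # skip None and empty strings
            lines.append(f"  {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)
```

Applied to the record above, this yields an @inproceedings{harel-canada-etal-2022-sibylvariant, ...} entry containing only the populated fields.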
inproceedings
park-etal-2022-dalc
{D}a{LC}: Domain Adaptation Learning Curve Prediction for Neural Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.141/
Park, Cheonbok and Kim, Hantae and Calapodescu, Ioan and Cho, Hyun Chang and Nikoulina, Vassilina
Findings of the Association for Computational Linguistics: ACL 2022
1789--1807
Domain Adaptation (DA) of a Neural Machine Translation (NMT) model often relies on a pre-trained general NMT model which is adapted to the new domain on a sample of in-domain parallel data. Without parallel data, there is no way to estimate the potential benefit of DA, nor the amount of parallel samples it would require. It is however a desirable functionality that could help MT practitioners to make an informed decision before investing resources in dataset creation. We propose a Domain adaptation Learning Curve prediction (DaLC) model that predicts prospective DA performance based on in-domain monolingual samples in the source language. Our model relies on the NMT encoder representations combined with various instance and corpus-level features. We demonstrate that instance-level is better able to distinguish between different domains compared to corpus-level frameworks proposed in previous studies. Finally, we perform in-depth analyses of the results highlighting the limitations of our approach, and provide directions for future research.
doi: 10.18653/v1/2022.findings-acl.141
__index_level_0__: 26,066
inproceedings
khot-etal-2022-hey
Hey {AI}, Can You Solve Complex Tasks by Talking to Agents?
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.142/
Khot, Tushar and Richardson, Kyle and Khashabi, Daniel and Sabharwal, Ashish
Findings of the Association for Computational Linguistics: ACL 2022
1808--1823
Training giant models from scratch for each complex task is resource- and data-inefficient. To help develop models that can leverage existing systems, we propose a new challenge: Learning to solve complex tasks by communicating with existing agents (or models) in natural language. We design a synthetic benchmark, CommaQA, with three complex reasoning tasks (explicit, implicit, numeric) designed to be solved by communicating with existing QA agents. For instance, using text and table QA agents to answer questions such as {\textquotedblleft}Who had the longest javelin throw from USA?{\textquotedblright}. We show that black-box models struggle to learn this task from scratch (accuracy under 50{\%}) even with access to each agent`s knowledge and gold facts supervision. In contrast, models that learn to communicate with agents outperform black-box models, reaching scores of 100{\%} when given gold decomposition supervision. However, we show that the challenge of learning to solve complex tasks by communicating with existing agents \textit{without relying on any auxiliary supervision or data} still remains highly elusive. We will release CommaQA, along with a compositional generalization test split, to advance research in this direction.
doi: 10.18653/v1/2022.findings-acl.142
__index_level_0__: 26,067
inproceedings
yao-mihalcea-2022-modality
Modality-specific Learning Rates for Effective Multimodal Additive Late-fusion
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.143/
Yao, Yiqun and Mihalcea, Rada
Findings of the Association for Computational Linguistics: ACL 2022
1824--1834
In multimodal machine learning, additive late-fusion is a straightforward approach to combine the feature representations from different modalities, in which the final prediction can be formulated as the sum of unimodal predictions. While it has been found that certain late-fusion models can achieve competitive performance with lower computational costs compared to complex multimodal interactive models, how to effectively search for a good late-fusion model is still an open question. Moreover, for different modalities, the best unimodal models may work under significantly different learning rates due to the nature of the modality and the computational flow of the model; thus, selecting a global learning rate for late-fusion models can result in a vanishing gradient for some modalities. To help address these issues, we propose a Modality-Specific Learning Rate (MSLR) method to effectively build late-fusion multimodal models from fine-tuned unimodal models. We investigate three different strategies to assign learning rates to different modalities. Our experiments show that MSLR outperforms global learning rates on multiple tasks and settings, and enables the models to effectively learn each modality.
doi: 10.18653/v1/2022.findings-acl.143
__index_level_0__: 26,068
inproceedings
liang-etal-2022-bisyn
{B}i{S}yn-{GAT}+: Bi-Syntax Aware Graph Attention Network for Aspect-based Sentiment Analysis
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.144/
Liang, Shuo and Wei, Wei and Mao, Xian-Ling and Wang, Fei and He, Zhiyong
Findings of the Association for Computational Linguistics: ACL 2022
1835--1848
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that aims to align aspects and corresponding sentiments for aspect-specific sentiment polarity inference. It is challenging because a sentence may contain multiple aspects or complicated (e.g., conditional, coordinating, or adversative) relations. Recently, exploiting dependency syntax information with graph neural networks has been the most popular trend. Despite its success, methods that heavily rely on the dependency tree pose challenges in accurately modeling the alignment of the aspects and their words indicative of sentiment, since the dependency tree may provide noisy signals of unrelated associations (e.g., the {\textquotedblleft}conj{\textquotedblright} relation between {\textquotedblleft}great{\textquotedblright} and {\textquotedblleft}dreadful{\textquotedblright} in Figure 2). In this paper, to alleviate this problem, we propose a Bi-Syntax aware Graph Attention Network (BiSyn-GAT+). Specifically, BiSyn-GAT+ fully exploits the syntax information (e.g., phrase segmentation and hierarchical structure) of the constituent tree of a sentence to model the sentiment-aware context of every single aspect (called intra-context) and the sentiment relations across aspects (called inter-context) for learning. Experiments on four benchmark datasets demonstrate that BiSyn-GAT+ outperforms the state-of-the-art methods consistently.
doi: 10.18653/v1/2022.findings-acl.144
__index_level_0__: 26,069
inproceedings
dabre-etal-2022-indicbart
{I}ndic{BART}: A Pre-trained Model for Indic Natural Language Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.145/
Dabre, Raj and Shrotriya, Himani and Kunchukuttan, Anoop and Puduppully, Ratish and Khapra, Mitesh and Kumar, Pratyush
Findings of the Association for Computational Linguistics: ACL 2022
1849--1863
In this paper, we study pre-trained sequence-to-sequence models for a group of related languages, with a focus on Indic languages. We present IndicBART, a multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic languages and English. IndicBART utilizes the orthographic similarity between Indic scripts to improve transfer learning between similar Indic languages. We evaluate IndicBART on two NLG tasks: Neural Machine Translation (NMT) and extreme summarization. Our experiments on NMT and extreme summarization show that a model specific to related languages like IndicBART is competitive with large pre-trained models like mBART50 despite being significantly smaller. It also performs well on very low-resource translation scenarios where languages are not included in pre-training or fine-tuning. Script sharing, multilingual training, and better utilization of limited model capacity contribute to the good performance of the compact IndicBART model.
doi: 10.18653/v1/2022.findings-acl.145
__index_level_0__: 26,070
inproceedings
ni-etal-2022-sentence
Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.146/
Ni, Jianmo and Hernandez Abrego, Gustavo and Constant, Noah and Ma, Ji and Hall, Keith and Cer, Daniel and Yang, Yinfei
Findings of the Association for Computational Linguistics: ACL 2022
1864--1874
We provide the first exploration of sentence embeddings from text-to-text transformers (T5) including the effects of scaling up sentence encoders to 11B parameters. Sentence embeddings are broadly useful for language processing tasks. While T5 achieves impressive performance on language tasks, it is unclear how to produce sentence embeddings from encoder-decoder models. We investigate three methods to construct Sentence-T5 (ST5) models: two utilize only the T5 encoder and one uses the full T5 encoder-decoder. We establish a new sentence representation transfer benchmark, SentGLUE, which extends the SentEval toolkit to nine tasks from the GLUE benchmark. Our encoder-only models outperform the previous best models on both SentEval and SentGLUE transfer tasks, including semantic textual similarity (STS). Scaling up ST5 from millions to billions of parameters is shown to consistently improve performance. Finally, our encoder-decoder method achieves a new state-of-the-art on STS when using sentence embeddings.
doi: 10.18653/v1/2022.findings-acl.146
__index_level_0__: 26,071
inproceedings
tian-etal-2022-improving
Improving Relation Extraction through Syntax-induced Pre-training with Dependency Masking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.147/
Tian, Yuanhe and Song, Yan and Xia, Fei
Findings of the Association for Computational Linguistics: ACL 2022
1875--1886
Relation extraction (RE) is an important natural language processing task that predicts the relation between two given entities, where a good understanding of the contextual information is essential to achieve an outstanding model performance. Among different types of contextual information, the auto-generated syntactic information (namely, word dependencies) has shown its effectiveness for the task. However, most existing studies require modifications to the existing baseline architectures (e.g., adding new components, such as GCN, on the top of an encoder) to leverage the syntactic information. To offer an alternative solution, we propose to leverage syntactic information to improve RE by training a syntax-induced encoder on auto-parsed data through dependency masking. Specifically, the syntax-induced encoder is trained by recovering the masked dependency connections and types in first, second, and third orders, which significantly differs from existing studies that train language models or word embeddings by predicting the context words along the dependency paths. Experimental results on two English benchmark datasets, namely, ACE2005EN and SemEval 2010 Task 8 datasets, demonstrate the effectiveness of our approach for RE, where our approach outperforms strong baselines and achieve state-of-the-art results on both datasets.
doi: 10.18653/v1/2022.findings-acl.147
__index_level_0__: 26,072
inproceedings
kumar-joshi-2022-striking
Striking a Balance: Alleviating Inconsistency in Pre-trained Models for Symmetric Classification Tasks
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.148/
Kumar, Ashutosh and Joshi, Aditya
Findings of the Association for Computational Linguistics: ACL 2022
1887--1895
While fine-tuning pre-trained models for downstream classification is the conventional paradigm in NLP, often task-specific nuances may not get captured in the resultant models. Specifically, for tasks that take two inputs and require the output to be invariant of the order of the inputs, inconsistency is often observed in the predicted labels or confidence scores. We highlight this model shortcoming and apply a consistency loss function to alleviate inconsistency in symmetric classification. Our results show an improved consistency in predictions for three paraphrase detection datasets without a significant drop in the accuracy scores. We examine the classification performance of six datasets (both symmetric and non-symmetric) to showcase the strengths and limitations of our approach.
doi: 10.18653/v1/2022.findings-acl.148
__index_level_0__: 26,073
inproceedings
yu-etal-2022-diversifying
Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.149/
Yu, Wenhao and Zhu, Chenguang and Qin, Lianhui and Zhang, Zhihan and Zhao, Tong and Jiang, Meng
Findings of the Association for Computational Linguistics: ACL 2022
1896--1906
Generative commonsense reasoning (GCR) in natural language is to reason about the commonsense while generating coherent text. Recent years have seen a surge of interest in improving the generation quality of commonsense reasoning tasks. Nevertheless, these approaches have seldom investigated diversity in the GCR tasks, which aims to generate alternative explanations for a real-world situation or predict all possible outcomes. Diversifying GCR is challenging as it expects to generate multiple outputs that are not only semantically different but also grounded in commonsense knowledge. In this paper, we propose MoKGE, a novel method that diversifies the generative reasoning by a mixture of expert (MoE) strategy on commonsense knowledge graphs (KG). A set of knowledge experts seek diverse reasoning on KG to encourage various generation outputs. Empirical experiments demonstrated that MoKGE can significantly improve the diversity while achieving on par performance on accuracy on two GCR benchmarks, based on both automatic and human evaluations.
doi: 10.18653/v1/2022.findings-acl.149
__index_level_0__: 26,074
inproceedings
yu-etal-2022-dict
Dict-{BERT}: Enhancing Language Model Pre-training with Dictionary
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.150/
Yu, Wenhao and Zhu, Chenguang and Fang, Yuwei and Yu, Donghan and Wang, Shuohang and Xu, Yichong and Zeng, Michael and Jiang, Meng
Findings of the Association for Computational Linguistics: ACL 2022
1907--1918
Pre-trained language models (PLMs) aim to learn universal language representations by conducting self-supervised training tasks on large-scale corpora. Since PLMs capture word semantics in different contexts, the quality of word representations highly depends on word frequency, which usually follows a heavy-tailed distribution in the pre-training corpus. Therefore, the embeddings of rare words on the tail are usually poorly optimized. In this work, we focus on enhancing language model pre-training by leveraging definitions of the rare words in dictionaries (e.g., Wiktionary). To incorporate a rare word definition as a part of input, we fetch its definition from the dictionary and append it to the end of the input text sequence. In addition to training with the masked language modeling objective, we propose two novel self-supervised pre-training tasks on word and sentence-level alignment between input text sequence and rare word definitions to enhance language modeling representation with dictionary. We evaluate the proposed Dict-BERT model on the language understanding benchmark GLUE and eight specialized domain benchmark datasets. Extensive experiments demonstrate that Dict-BERT can significantly improve the understanding of rare words and boost model performance on various NLP downstream tasks.
doi: 10.18653/v1/2022.findings-acl.150
__index_level_0__: 26,075
inproceedings
dugan-etal-2022-feasibility
A Feasibility Study of Answer-Agnostic Question Generation for Education
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.151/
Dugan, Liam and Miltsakaki, Eleni and Upadhyay, Shriyash and Ginsberg, Etan and Gonzalez, Hannah and Choi, DaHyeon and Yuan, Chuning and Callison-Burch, Chris
Findings of the Association for Computational Linguistics: ACL 2022
1919--1926
We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or un-interpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33{\%} $\rightarrow$ 83{\%}) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground.
doi: 10.18653/v1/2022.findings-acl.151
__index_level_0__: 26,076
inproceedings
zheng-kordjamshidi-2022-relevant
Relevant {C}ommon{S}ense Subgraphs for {\textquotedblleft}What if...{\textquotedblright} Procedural Reasoning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.152/
Zheng, Chen and Kordjamshidi, Parisa
Findings of the Association for Computational Linguistics: ACL 2022
1927--1933
We study the challenge of learning causal reasoning over procedural text to answer {\textquotedblleft}What if...{\textquotedblright} questions when external commonsense knowledge is required. We propose a novel multi-hop graph reasoning model to 1) efficiently extract a commonsense subgraph with the most relevant information from a large knowledge graph; 2) predict the causal answer by reasoning over the representations obtained from the commonsense subgraph and the contextual interactions between the questions and context. We evaluate our model on WIQA benchmark and achieve state-of-the-art performance compared to the recent models.
doi: 10.18653/v1/2022.findings-acl.152
__index_level_0__: 26,077
inproceedings
pezeshkpour-etal-2022-combining
Combining Feature and Instance Attribution to Detect Artifacts
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.153/
Pezeshkpour, Pouya and Jain, Sarthak and Singh, Sameer and Wallace, Byron
Findings of the Association for Computational Linguistics: ACL 2022
1934--1946
Training the deep neural networks that dominate NLP requires large datasets. These are often collected automatically or via crowdsourcing, and may exhibit systematic biases or annotation artifacts. By the latter we mean spurious correlations between inputs and outputs that do not represent a generally held causal relationship between features and classes; models that exploit such correlations may appear to perform a given task well, but fail on out of sample data. In this paper, we evaluate use of different attribution methods for aiding identification of training data artifacts. We propose new hybrid approaches that combine saliency maps (which highlight important input features) with instance attribution methods (which retrieve training samples influential to a given prediction). We show that this proposed training-feature attribution can be used to efficiently uncover artifacts in training data when a challenging validation set is available. We also carry out a small user study to evaluate whether these methods are useful to NLP researchers in practice, with promising results. We make code for all methods and experiments in this paper available.
doi: 10.18653/v1/2022.findings-acl.153
__index_level_0__: 26,078
inproceedings
reich-etal-2022-leveraging
Leveraging Expert Guided Adversarial Augmentation For Improving Generalization in Named Entity Recognition
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.154/
Reich, Aaron and Chen, Jiaao and Agrawal, Aastha and Zhang, Yanzhe and Yang, Diyi
Findings of the Association for Computational Linguistics: ACL 2022
1947--1955
Named Entity Recognition (NER) systems often demonstrate great performance on in-distribution data, but perform poorly on examples drawn from a shifted distribution. One way to evaluate the generalization ability of NER models is to use adversarial examples, on which the specific variations associated with named entities are rarely considered. To this end, we propose leveraging expert-guided heuristics to change the entity tokens and their surrounding contexts thereby altering their entity types as adversarial attacks. Using expert-guided heuristics, we augmented the CoNLL 2003 test set and manually annotated it to construct a high-quality challenging set. We found that state-of-the-art NER systems trained on CoNLL 2003 training data drop performance dramatically on our challenging set. By training on adversarial augmented training examples and using mixup for regularization, we were able to significantly improve the performance on the challenging set as well as improve out-of-domain generalization which we evaluated by using OntoNotes data. We have publicly released our dataset and code at \url{https://github.com/GT-SALT/Guided-Adversarial-Augmentation}.
doi: 10.18653/v1/2022.findings-acl.154
__index_level_0__: 26,079
inproceedings
ma-etal-2022-label
Label Semantics for Few Shot Named Entity Recognition
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.155/
Ma, Jie and Ballesteros, Miguel and Doss, Srikanth and Anubhai, Rishita and Mallya, Sunil and Al-Onaizan, Yaser and Roth, Dan
Findings of the Association for Computational Linguistics: ACL 2022
1956--1971
We study the problem of few shot learning for named entity recognition. Specifically, we leverage the semantic information in the names of the labels as a way of giving the model additional signal and enriched priors. We propose a neural architecture that consists of two BERT encoders, one to encode the document and its tokens and another one to encode each of the labels in natural language format. Our model learns to match the representations of named entities computed by the first encoder with label representations computed by the second encoder. The label semantics signal is shown to support improved state-of-the-art results in multiple few shot NER benchmarks and on-par performance in standard benchmarks. Our model is especially effective in low resource settings.
doi: 10.18653/v1/2022.findings-acl.155
__index_level_0__: 26,080
inproceedings
mrini-etal-2022-detection
Detection, Disambiguation, Re-ranking: Autoregressive Entity Linking as a Multi-Task Problem
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.156/
Mrini, Khalil and Nie, Shaoliang and Gu, Jiatao and Wang, Sinong and Sanjabi, Maziar and Firooz, Hamed
Findings of the Association for Computational Linguistics: ACL 2022
1972--1983
We propose an autoregressive entity linking model, that is trained with two auxiliary tasks, and learns to re-rank generated samples at inference time. Our proposed novelties address two weaknesses in the literature. First, a recent method proposes to learn mention detection and then entity candidate selection, but relies on predefined sets of candidates. We use encoder-decoder autoregressive entity linking in order to bypass this need, and propose to train mention detection as an auxiliary task instead. Second, previous work suggests that re-ranking could help correct prediction errors. We add a new, auxiliary task, match prediction, to learn re-ranking. Without the use of a knowledge base or candidate sets, our model sets a new state of the art in two benchmark datasets of entity linking: COMETA in the biomedical domain, and AIDA-CoNLL in the news domain. We show through ablation studies that each of the two auxiliary tasks increases performance, and that re-ranking is an important factor to the increase. Finally, our low-resource experimental results suggest that performance on the main task benefits from the knowledge learned by the auxiliary tasks, and not just from the additional training data.
doi: 10.18653/v1/2022.findings-acl.156
__index_level_0__: 26,081
inproceedings
shrivastava-etal-2022-visitron
{VISITRON}: Visual Semantics-Aligned Interactively Trained Object-Navigator
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.157/
Shrivastava, Ayush and Gopalakrishnan, Karthik and Liu, Yang and Piramuthu, Robinson and Tur, Gokhan and Parikh, Devi and Hakkani-Tur, Dilek
Findings of the Association for Computational Linguistics: ACL 2022
1984--1994
Interactive robots navigating photo-realistic environments need to be trained to effectively leverage and handle the dynamic nature of dialogue in addition to the challenges underlying vision-and-language navigation (VLN). In this paper, we present VISITRON, a multi-modal Transformer-based navigator better suited to the interactive regime inherent to Cooperative Vision-and-Dialog Navigation (CVDN). VISITRON is trained to: i) identify and associate object-level concepts and semantics between the environment and dialogue history, ii) identify when to interact vs. navigate via imitation learning of a binary classification head. We perform extensive pre-training and fine-tuning ablations with VISITRON to gain empirical insights and improve performance on CVDN. VISITRON`s ability to identify when to interact leads to a natural generalization of the game-play mode introduced by Roman et al. (2020) for enabling the use of such models in different environments. VISITRON is competitive with models on the static CVDN leaderboard and attains state-of-the-art performance on the Success weighted by Path Length (SPL) metric.
doi: 10.18653/v1/2022.findings-acl.157
__index_level_0__: 26,082
inproceedings
varshney-etal-2022-investigating
Investigating Selective Prediction Approaches Across Several Tasks in {IID}, {OOD}, and Adversarial Settings
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.158/
Varshney, Neeraj and Mishra, Swaroop and Baral, Chitta
Findings of the Association for Computational Linguistics: ACL 2022
1995--2002
In order to equip NLP systems with {\textquoteleft}selective prediction' capability, several task-specific approaches have been proposed. However, which approaches work best across tasks or even if they consistently outperform the simplest baseline MaxProb remains to be explored. To this end, we systematically study selective prediction in a large-scale setup of 17 datasets across several NLP tasks. Through comprehensive experiments under in-domain (IID), out-of-domain (OOD), and adversarial (ADV) settings, we show that despite leveraging additional resources (held-out data/computation), none of the existing approaches consistently and considerably outperforms MaxProb in all three settings. Furthermore, their performance does not translate well across tasks. For instance, Monte-Carlo Dropout outperforms all other approaches on Duplicate Detection datasets but does not fare well on NLI datasets, especially in the OOD setting. Thus, we recommend that future selective prediction approaches should be evaluated across tasks and settings for reliable estimation of their capabilities.
doi: 10.18653/v1/2022.findings-acl.158
__index_level_0__: 26,083
inproceedings
varshney-etal-2022-unsupervised
Unsupervised Natural Language Inference Using {PHL} Triplet Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.159/
Varshney, Neeraj and Banerjee, Pratyay and Gokhale, Tejas and Baral, Chitta
Findings of the Association for Computational Linguistics: ACL 2022
2003--2016
Transformer-based models achieve impressive performance on numerous Natural Language Inference (NLI) benchmarks when trained on respective training datasets. However, in certain cases, training samples may not be available or collecting them could be time-consuming and resource-intensive. In this work, we address the above challenge and present an explorative study on unsupervised NLI, a paradigm in which no human-annotated training samples are available. We investigate it under three settings: PH, P, and NPH that differ in the extent of unlabeled data available for learning. As a solution, we propose a procedural data generation approach that leverages a set of sentence transformations to collect PHL (Premise, Hypothesis, Label) triplets for training NLI models, bypassing the need for human-annotated training data. Comprehensive experiments with several NLI datasets show that the proposed approach results in accuracies of up to 66.75{\%}, 65.9{\%}, 65.39{\%} in PH, P, and NPH settings respectively, outperforming all existing unsupervised baselines. Furthermore, fine-tuning our model with as little as {\textasciitilde}0.1{\%} of the human-annotated training dataset (500 instances) leads to 12.2{\%} higher accuracy than the model trained from scratch on the same 500 instances. Supported by this superior performance, we conclude with a recommendation for collecting high-quality task-specific data.
doi: 10.18653/v1/2022.findings-acl.159
__index_level_0__: 26,084
inproceedings
razumovskaia-etal-2022-data
Data Augmentation and Learned Layer Aggregation for Improved Multilingual Language Understanding in Dialogue
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.160/
Razumovskaia, Evgeniia and Vuli{\'c}, Ivan and Korhonen, Anna
Findings of the Association for Computational Linguistics: ACL 2022
2017--2033
Scaling dialogue systems to a multitude of domains, tasks and languages relies on costly and time-consuming data annotation for different domain-task-language configurations. The annotation efforts might be substantially reduced by the methods that generalise well in zero- and few-shot scenarios, and also effectively leverage external unannotated data sources (e.g., Web-scale corpora). We propose two methods to this aim, offering improved dialogue natural language understanding (NLU) across multiple languages: 1) Multi-SentAugment, and 2) LayerAgg. Multi-SentAugment is a self-training method which augments available (typically few-shot) training data with similar (automatically labelled) in-domain sentences from large monolingual Web-scale corpora. LayerAgg learns to select and combine useful semantic information scattered across different layers of a Transformer model (e.g., mBERT); it is especially suited for zero-shot scenarios as semantically richer representations should strengthen the model`s cross-lingual capabilities. Applying the two methods with state-of-the-art NLU models obtains consistent improvements across two standard multilingual NLU datasets covering 16 diverse languages. The gains are observed in zero-shot, few-shot, and even in full-data scenarios. The results also suggest that the two methods achieve a synergistic effect: the best overall performance in few-shot setups is attained when the methods are used together.
doi: 10.18653/v1/2022.findings-acl.160
__index_level_0__: 26,085
inproceedings
wang-etal-2022-ranking
Ranking-Constrained Learning with Rationales for Text Classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.161/
Wang, Juanyan and Sharma, Manali and Bilgic, Mustafa
Findings of the Association for Computational Linguistics: ACL 2022
2034--2046
We propose a novel approach that jointly utilizes the labels and elicited rationales for text classification to speed up the training of deep learning models with limited training data. We define and optimize a ranking-constrained loss function that combines cross-entropy loss with ranking losses as rationale constraints. We evaluate our proposed rationale-augmented learning approach on three human-annotated datasets, and show that our approach provides significant improvements over classification approaches that do not utilize rationales as well as other state-of-the-art rationale-augmented baselines.
doi: 10.18653/v1/2022.findings-acl.161
__index_level_0__: 26,086
inproceedings
goyal-etal-2022-cam
{C}a{M}-{G}en: {C}ausally Aware Metric-Guided Text Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.162/
Goyal, Navita and Paneri, Roodram and Agarwal, Ayush and Kalani, Udit and Sancheti, Abhilasha and Chhaya, Niyati
Findings of the Association for Computational Linguistics: ACL 2022
2047--2060
Content is created for a well-defined purpose, often described by a metric or signal represented in the form of structured information. The relationship between the goal (metrics) of target content and the content itself is non-trivial. While large-scale language models show promising text generation capabilities, guiding the generated text with external metrics is challenging. These metrics and content tend to have inherent relationships and not all of them may be of consequence. We introduce CaM-Gen: Causally aware Generative Networks guided by user-defined target metrics incorporating the causal relationships between the metric and content features. We leverage causal inference techniques to identify causally significant aspects of a text that lead to the target metric and then explicitly guide generative models towards these by a feedback mechanism. We propose this mechanism for variational autoencoder and Transformer-based generative models. The proposed models beat baselines in terms of the target metric control while maintaining fluency and language quality of the generated text. To the best of our knowledge, this is one of the early attempts at controlled generation incorporating a metric guide using causal inference.
doi: 10.18653/v1/2022.findings-acl.162
__index_level_0__: 26,087
inproceedings
goyal-etal-2022-training
Training Dynamics for Text Summarization Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.163/
Goyal, Tanya and Xu, Jiacheng and Li, Junyi Jessy and Durrett, Greg
Findings of the Association for Computational Linguistics: ACL 2022
2061--2073
Pre-trained language models (e.g. BART) have shown impressive results when fine-tuned on large summarization datasets. However, little is understood about this fine-tuning process, including what knowledge is retained from pre-training time or how content selection and generation strategies are learnt across iterations. In this work, we analyze the training dynamics for generation models, focusing on summarization. Across different datasets (CNN/DM, XSum, MediaSum) and summary properties, such as abstractiveness and hallucination, we study what the model learns at different stages of its fine-tuning process. We find that a propensity to copy the input is learned early in the training process consistently across all datasets studied. On the other hand, factual errors, such as hallucination of unsupported facts, are learnt in the later stages, though this behavior is more varied across domains. Based on these observations, we explore complementary approaches for modifying training: first, disregarding high-loss tokens that are challenging to learn and second, disregarding low-loss tokens that are learnt very quickly in the latter stages of the training process. We show that these simple training modifications allow us to configure our model to achieve different goals, such as improving factuality or improving abstractiveness.
doi: 10.18653/v1/2022.findings-acl.163
__index_level_0__: 26,088
inproceedings
zhou-etal-2022-richer
Richer Countries and Richer Representations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.164/
Zhou, Kaitlyn and Ethayarajh, Kawin and Jurafsky, Dan
Findings of the Association for Computational Linguistics: ACL 2022
2074--2085
We examine whether some countries are more richly represented in embedding space than others. We find that countries whose names occur with low frequency in training corpora are more likely to be tokenized into subwords, are less semantically distinct in embedding space, and are less likely to be correctly predicted: e.g., Ghana (the correct answer and in-vocabulary) is not predicted for, {\textquotedblleft}The country producing the most cocoa is [MASK].{\textquotedblright}. Although these performance discrepancies and representational harms are due to frequency, we find that frequency is highly correlated with a country`s GDP; thus perpetuating historic power and wealth inequalities. We analyze the effectiveness of mitigation strategies; recommend that researchers report training word frequencies; and recommend future work for the community to define and design representational guarantees.
doi: 10.18653/v1/2022.findings-acl.164
__index_level_0__: 26,089
inproceedings
parrish-etal-2022-bbq
{BBQ}: A hand-built bias benchmark for question answering
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.165/
Parrish, Alicia and Chen, Angelica and Nangia, Nikita and Padmakumar, Vishakh and Phang, Jason and Thompson, Jana and Htut, Phu Mon and Bowman, Samuel
Findings of the Association for Computational Linguistics: ACL 2022
2086--2105
It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question-sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses reflect social biases, and (ii) given an adequately informative context, we test whether the model`s biases override a correct answer choice. We find that models often rely on stereotypes when the context is under-informative, meaning the model`s outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conflicts, with this difference widening to over 5 points on examples targeting gender for most models tested.
doi: 10.18653/v1/2022.findings-acl.165
__index_level_0__: 26,090
inproceedings
li-etal-2022-zero
Zero-shot Learning for Grapheme to Phoneme Conversion with Language Ensemble
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.166/
Li, Xinjian and Metze, Florian and Mortensen, David and Watanabe, Shinji and Black, Alan
Findings of the Association for Computational Linguistics: ACL 2022
2106--2115
Grapheme-to-Phoneme (G2P) has many applications in NLP and speech fields. Most existing work focuses heavily on languages with abundant training datasets, which limits the scope of target languages to less than 100 languages. This work attempts to apply zero-shot learning to approximate G2P models for all low-resource and endangered languages in Glottolog (about 8k languages). For any unseen target language, we first build the phylogenetic tree (i.e. language family tree) to identify top-$k$ nearest languages for which we have training sets. Then we run models of those languages to obtain a hypothesis set, which we combine into a confusion network to propose a most likely hypothesis as an approximation to the target language. We test our approach on over 600 unseen languages and demonstrate it significantly outperforms baselines.
doi: 10.18653/v1/2022.findings-acl.166
__index_level_0__: 26,091
inproceedings
forbes-etal-2022-dim
Dim Wihl Gat Tun: {T}he Case for Linguistic Expertise in {NLP} for Under-Documented Languages
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.167/
Forbes, Clarissa and Samir, Farhan and Oliver, Bruce and Yang, Changbing and Coates, Edith and Nicolai, Garrett and Silfverberg, Miikka
Findings of the Association for Computational Linguistics: ACL 2022
2116--2130
Recent progress in NLP is driven by pretrained models leveraging massive datasets and has predominantly benefited the world`s political and economic superpowers. Technologically underserved languages are left behind because they lack such resources. Hundreds of underserved languages, nevertheless, have available data sources in the form of interlinear glossed text (IGT) from language documentation efforts. IGT remains underutilized in NLP work, perhaps because its annotations are only semi-structured and often language-specific. With this paper, we make the case that IGT data can be leveraged successfully provided that target language expertise is available. We specifically advocate for collaboration with documentary linguists. Our paper provides a roadmap for successful projects utilizing IGT data: (1) It is essential to define which NLP tasks can be accomplished with the given IGT data and how these will benefit the speech community. (2) Great care and target language expertise is required when converting the data into structured formats commonly employed in NLP. (3) Task-specific and user-specific evaluation can help to ascertain that the tools which are created benefit the target language speech community. We illustrate each step through a case study on developing a morphological reinflection system for the Tsimchianic language Gitksan.
doi: 10.18653/v1/2022.findings-acl.167
__index_level_0__: 26,092
inproceedings
ghanem-etal-2022-question
Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.168/
Ghanem, Bilal and Lutz Coleman, Lauren and Rivard Dexter, Julia and von der Ohe, Spencer and Fyshe, Alona
Findings of the Association for Computational Linguistics: ACL 2022
2131--2146
Reading is integral to everyday life, and yet learning to read is a struggle for many young learners. During lessons, teachers can use comprehension questions to increase engagement, test reading skills, and improve retention. Historically such questions were written by skilled teachers, but recently language models have been used to generate comprehension questions. However, many existing Question Generation (QG) systems focus on generating extractive questions from the text, and have no way to control the type of the generated question. In this paper, we study QG for reading comprehension where inferential questions are critical and extractive techniques cannot be used. We propose a two-step model (HTA-WTA) that takes advantage of previous datasets, and can generate questions for a specific targeted comprehension skill. We propose a new reading comprehension dataset that contains questions annotated with story-based reading comprehension skills (SBRCS), allowing for a more complete reader assessment. Across several experiments, our results show that HTA-WTA outperforms multiple strong baselines on this new dataset. We show that the HTA-WTA model tests for strong SCRS by asking deep inferential questions.
doi: 10.18653/v1/2022.findings-acl.168
__index_level_0__: 26,093
inproceedings
leszczynski-etal-2022-tabi
{TAB}i: {T}ype-Aware Bi-Encoders for Open-Domain Entity Retrieval
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.169/
Leszczynski, Megan and Fu, Daniel and Chen, Mayee and Re, Christopher
Findings of the Association for Computational Linguistics: ACL 2022
2147--2166
Entity retrieval{---}retrieving information about entity mentions in a query{---}is a key step in open-domain tasks, such as question answering or fact checking. However, state-of-the-art entity retrievers struggle to retrieve rare entities for ambiguous mentions due to biases towards popular entities. Incorporating knowledge graph types during training could help overcome popularity biases, but there are several challenges: (1) existing type-based retrieval methods require mention boundaries as input, but open-domain tasks run on unstructured text, (2) type-based methods should not compromise overall performance, and (3) type-based methods should be robust to noisy and missing types. In this work, we introduce TABi, a method to jointly train bi-encoders on knowledge graph types and unstructured text for entity retrieval for open-domain tasks. TABi leverages a type-enforced contrastive loss to encourage entities and queries of similar types to be close in the embedding space. TABi improves retrieval of rare entities on the Ambiguous Entity Retrieval (AmbER) sets, while maintaining strong overall retrieval performance on open-domain tasks in the KILT benchmark compared to state-of-the-art retrievers. TABi is also robust to incomplete type systems, improving rare entity retrieval over baselines with only 5{\%} type coverage of the training dataset. We make our code publicly available.
doi: 10.18653/v1/2022.findings-acl.169
__index_level_0__: 26,094
inproceedings
zhou-etal-2022-hierarchical
Hierarchical Recurrent Aggregative Generation for Few-Shot {NLG}
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.170/
Zhou, Giulio and Lampouras, Gerasimos and Iacobacci, Ignacio
Findings of the Association for Computational Linguistics: ACL 2022
2167--2181
Large pretrained models enable transfer learning to low-resource domains for language generation tasks. However, previous end-to-end approaches do not account for the fact that some generation sub-tasks, specifically aggregation and lexicalisation, can benefit from transfer learning in different extents. To exploit these varying potentials for transfer learning, we propose a new hierarchical approach for few-shot and zero-shot generation. Our approach consists of a three-moduled jointly trained architecture: the first module independently lexicalises the distinct units of information in the input as sentence sub-units (e.g. phrases), the second module recurrently aggregates these sub-units to generate a unified intermediate output, while the third module subsequently post-edits it to generate a coherent and fluent final text. We perform extensive empirical analysis and ablation studies on few-shot and zero-shot settings across 4 datasets. Automatic and human evaluation shows that the proposed hierarchical approach is consistently capable of achieving state-of-the-art results when compared to previous work.
doi: 10.18653/v1/2022.findings-acl.170
__index_level_0__: 26,095
inproceedings
ponomareva-etal-2022-training
Training Text-to-Text Transformers with Privacy Guarantees
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.171/
Ponomareva, Natalia and Bastings, Jasmijn and Vassilvitskii, Sergei
Findings of the Association for Computational Linguistics: ACL 2022
2182--2193
Recent advances in NLP often stem from large transformer-based pre-trained models, which rapidly grow in size and use more and more training data. Such models are often released to the public so that end users can fine-tune them on a task dataset. While it is common to treat pre-training data as public, it may still contain personally identifiable information (PII), such as names, phone numbers, and copyrighted material. Recent findings show that the capacity of these models allows them to memorize parts of the training data, and suggest differentially private (DP) training as a potential mitigation. While there is recent work on DP fine-tuning of NLP models, the effects of DP pre-training are less well understood: it is not clear how downstream performance is affected by DP pre-training, and whether DP pre-training mitigates some of the memorization concerns. We focus on T5 and show that by using recent advances in JAX and XLA we can train models with DP that do not suffer a large drop in pre-training utility, nor in training speed, and can still be fine-tuned to high accuracies on downstream tasks (e.g. GLUE). Moreover, we show that T5`s span corruption is a good defense against data memorization.
doi: 10.18653/v1/2022.findings-acl.171
__index_level_0__: 26,096
inproceedings
schroder-etal-2022-revisiting
Revisiting Uncertainty-based Query Strategies for Active Learning with Transformers
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.172/
Schr{\"oder, Christopher and Niekler, Andreas and Potthast, Martin
Findings of the Association for Computational Linguistics: ACL 2022
2194--2203
Active learning is the iterative construction of a classification model through targeted labeling, enabling significant labeling cost savings. As most research on active learning has been carried out before transformer-based language models ({\textquotedblleft}transformers{\textquotedblright}) became popular, despite its practical importance, comparably few papers have investigated how transformers can be combined with active learning to date. This can be attributed to the fact that using state-of-the-art query strategies for transformers induces a prohibitive runtime overhead, which effectively nullifies, or even outweighs, the desired cost savings. For this reason, we revisit uncertainty-based query strategies, which had been largely outperformed before, but are particularly suited in the context of fine-tuning transformers. In an extensive evaluation, we connect transformers to experiments from previous research, assessing their performance on five widely used text classification benchmarks. For active learning with transformers, several other uncertainty-based approaches outperform the well-known prediction entropy query strategy, thereby challenging its status as the most popular uncertainty baseline in active learning for text classification.
null
null
10.18653/v1/2022.findings-acl.172
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,097
inproceedings
beau-crabbe-2022-impact
The impact of lexical and grammatical processing on generating code from natural language
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.173/
Beau, Nathana{\"e}l and Crabb{\'e}, Benoit
Findings of the Association for Computational Linguistics: ACL 2022
2204--2214
Considering the seq2seq architecture of Yin and Neubig (2018) for natural language to code translation, we identify four key components of importance: grammatical constraints, lexical preprocessing, input representations, and copy mechanisms. To study the impact of these components, we use a state-of-the-art architecture that relies on BERT encoder and a grammar-based decoder for which a formalization is provided. The paper highlights the importance of the lexical substitution component in the current natural language to code systems.
null
null
10.18653/v1/2022.findings-acl.173
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,098
inproceedings
mao-etal-2022-seq2path
{S}eq2{P}ath: Generating Sentiment Tuples as Paths of a Tree
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.174/
Mao, Yue and Shen, Yi and Yang, Jingchao and Zhu, Xiaoying and Cai, Longjun
Findings of the Association for Computational Linguistics: ACL 2022
2215--2225
Aspect-based sentiment analysis (ABSA) tasks aim to extract sentiment tuples from a sentence. Recent generative methods such as Seq2Seq models have achieved good performance by formulating the output as a sequence of sentiment tuples. However, the orders between the sentiment tuples do not naturally exist and the generation of the current tuple should not condition on the previous ones. In this paper, we propose Seq2Path to generate sentiment tuples as paths of a tree. A tree can represent {\textquotedblleft}1-to-n{\textquotedblright} relations (e.g., an aspect term may correspond to multiple opinion terms) and the paths of a tree are independent and do not have orders. For training, we treat each path as an independent target, and we calculate the average loss of the ordinary Seq2Seq model over paths. For inference, we apply beam search with constrained decoding. By introducing an additional discriminative token and applying a data augmentation technique, valid paths can be automatically selected. We conduct experiments on five tasks including AOPE, ASTE, TASD, UABSA, ACOS. We evaluate our method on four common benchmark datasets including Laptop14, Rest14, Rest15, Rest16. Our proposed method achieves state-of-the-art results in almost all cases.
null
null
10.18653/v1/2022.findings-acl.174
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,099
inproceedings
zhan-etal-2022-mitigating
Mitigating the Inconsistency Between Word Saliency and Model Confidence with Pathological Contrastive Training
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.175/
Zhan, Pengwei and Wu, Yang and Zhou, Shaolei and Zhang, Yunjian and Wang, Liming
Findings of the Association for Computational Linguistics: ACL 2022
2226--2244
Neural networks are widely used in various NLP tasks for their remarkable performance. However, the complexity makes them difficult to interpret, i.e., they are not guaranteed right for the right reason. Besides the complexity, we reveal that the model pathology, the inconsistency between word saliency and model confidence, further hurts the interpretability. We show that the pathological inconsistency is caused by the representation collapse issue, which means that the representation of the sentences with tokens in different saliency reduced is somehow collapsed, and thus the important words cannot be distinguished from unimportant words in terms of model confidence changing. In this paper, to mitigate the pathology and obtain more interpretable models, we propose the Pathological Contrastive Training (PCT) framework, which adopts contrastive learning and saliency-based samples augmentation to calibrate the sentences representation. Combined with qualitative analysis, we also conduct extensive quantitative experiments and measure the interpretability with eight reasonable metrics. Experiments show that our method can mitigate the model pathology and generate more interpretable models while keeping the model performance. An ablation study also shows its effectiveness.
null
null
10.18653/v1/2022.findings-acl.175
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,100
inproceedings
baldini-etal-2022-fairness
Your fairness may vary: Pretrained language model fairness in toxic text classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.176/
Baldini, Ioana and Wei, Dennis and Natesan Ramamurthy, Karthikeyan and Singh, Moninder and Yurochkin, Mikhail
Findings of the Association for Computational Linguistics: ACL 2022
2245--2262
The popularity of pretrained language models in natural language processing systems calls for a careful evaluation of such models in down-stream tasks, which have a higher potential for societal impact. The evaluation of such systems usually focuses on accuracy measures. Our findings in this paper call for attention to be paid to fairness measures as well. Through the analysis of more than a dozen pretrained language models of varying sizes on two toxic text classification tasks (English), we demonstrate that focusing on accuracy measures alone can lead to models with wide variation in fairness characteristics. Specifically, we observe that fairness can vary even more than accuracy with increasing training data size and different random initializations. At the same time, we find that little of the fairness variation is explained by model size, despite claims in the literature. To improve model fairness without retraining, we show that two post-processing methods developed for structured, tabular data can be successfully applied to a range of pretrained language models. Warning: This paper contains samples of offensive text.
null
null
10.18653/v1/2022.findings-acl.176
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,101
inproceedings
masry-etal-2022-chartqa
{C}hart{QA}: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.177/
Masry, Ahmed and Do, Xuan Long and Tan, Jia Qing and Joty, Shafiq and Hoque, Enamul
Findings of the Association for Computational Linguistics: ACL 2022
2263--2279
Charts are very popular for analyzing data. When exploring charts, people often ask a variety of complex reasoning questions that involve several logical and arithmetic operations. They also commonly refer to visual features of a chart in their questions. However, most existing datasets do not focus on such complex reasoning questions as their questions are template-based and answers come from a fixed vocabulary. In this work, we present a large-scale benchmark covering 9.6K human-written questions as well as 23.1K questions generated from human-written chart summaries. To address the unique challenges in our benchmark involving visual and logical reasoning over charts, we present two transformer-based models that combine visual features and the data table of the chart in a unified way to answer questions. While our models achieve state-of-the-art results on the previous datasets as well as on our benchmark, the evaluation also reveals several challenges in answering complex reasoning questions.
null
null
10.18653/v1/2022.findings-acl.177
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,102
inproceedings
liu-etal-2022-novel
A Novel Perspective to Look At Attention: Bi-level Attention-based Explainable Topic Modeling for News Classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.178/
Liu, Dairui and Greene, Derek and Dong, Ruihai
Findings of the Association for Computational Linguistics: ACL 2022
2280--2290
Many recent deep learning-based solutions have adopted the attention mechanism in various tasks in the field of NLP. However, the inherent characteristics of deep learning models and the flexibility of the attention mechanism increase the models' complexity, thus leading to challenges in model explainability. To address this challenge, we propose a novel practical framework by utilizing a two-tier attention architecture to decouple the complexity of explanation and the decision-making process. We apply it in the context of a news article classification task. The experiments on two large-scale news corpora demonstrate that the proposed model can achieve competitive performance with many state-of-the-art alternatives and illustrate its appropriateness from an explainability perspective. We release the source code here.
null
null
10.18653/v1/2022.findings-acl.178
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,103
inproceedings
xia-etal-2022-learn
Learn and Review: Enhancing Continual Named Entity Recognition via Reviewing Synthetic Samples
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.179/
Xia, Yu and Wang, Quan and Lyu, Yajuan and Zhu, Yong and Wu, Wenhao and Li, Sujian and Dai, Dai
Findings of the Association for Computational Linguistics: ACL 2022
2291--2300
Traditional methods for named entity recognition (NER) classify mentions into a fixed set of pre-defined entity types. However, in many real-world scenarios, new entity types are incrementally involved. To investigate this problem, continual learning is introduced for NER. However, the existing method depends on the relevance between tasks and is prone to inter-type confusion. In this paper, we propose a novel two-stage framework Learn-and-Review (L{\&}R) for continual NER under the type-incremental setting to alleviate the above issues. Specifically, for the learning stage, we distill the old knowledge from teacher to a student on the current dataset. For the reviewing stage, we first generate synthetic samples of old types to augment the dataset. Then, we further distill new knowledge from the above student and old knowledge from the teacher to get an enhanced student on the augmented dataset. This stage has the following advantages: (1) The synthetic samples mitigate the gap between the old and new task and thus enhance the further distillation; (2) Different types of entities are jointly seen during training which alleviates the inter-type confusion. Experimental results show that L{\&}R outperforms the state-of-the-art method on CoNLL-03 and OntoNotes-5.0.
null
null
10.18653/v1/2022.findings-acl.179
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,104
inproceedings
boulianne-2022-phoneme
Phoneme transcription of endangered languages: an evaluation of recent {ASR} architectures in the single speaker scenario
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.180/
Boulianne, Gilles
Findings of the Association for Computational Linguistics: ACL 2022
2301--2308
Transcription is often reported as the bottleneck in endangered language documentation, requiring large efforts from scarce speakers and transcribers. In general, automatic speech recognition (ASR) can be accurate enough to accelerate transcription only if trained on large amounts of transcribed data. However, when a single speaker is involved, several studies have reported encouraging results for phonetic transcription even with small amounts of training. Here we expand this body of work on speaker-dependent transcription by comparing four ASR approaches, notably recent transformer and pretrained multilingual models, on a common dataset of 11 languages. To automate data preparation, training and evaluation steps, we also developed a phoneme recognition setup which handles morphologically complex languages and writing systems for which no pronunciation dictionary exists. We find that fine-tuning a multilingual pretrained model yields an average phoneme error rate (PER) of 15{\%} for 6 languages with 99 minutes or less of transcribed data for training. For the 5 languages with between 100 and 192 minutes of training, we achieved a PER of 8.4{\%} or less. These results on a number of varied languages suggest that ASR can now significantly reduce transcription efforts in the speaker-dependent situation common in endangered language work.
null
null
10.18653/v1/2022.findings-acl.180
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,105
inproceedings
lasri-etal-2022-bert
Does {BERT} really agree ? Fine-grained Analysis of Lexical Dependence on a Syntactic Task
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.181/
Lasri, Karim and Lenci, Alessandro and Poibeau, Thierry
Findings of the Association for Computational Linguistics: ACL 2022
2309--2315
Although transformer-based Neural Language Models demonstrate impressive performance on a variety of tasks, their generalization abilities are not well understood. They have been shown to perform strongly on subject-verb number agreement in a wide array of settings, suggesting that they learned to track syntactic dependencies during their training even without explicit supervision. In this paper, we examine the extent to which BERT is able to perform lexically-independent subject-verb number agreement (NA) on targeted syntactic templates. To do so, we disrupt the lexical patterns found in naturally occurring stimuli for each targeted structure in a novel fine-grained analysis of BERT's behavior. Our results on nonce sentences suggest that the model generalizes well for simple templates, but fails to perform lexically-independent syntactic generalization when as little as one attractor is present.
null
null
10.18653/v1/2022.findings-acl.181
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,106
inproceedings
hammerl-etal-2022-combining
Combining Static and Contextualised Multilingual Embeddings
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.182/
H{\"ammerl, Katharina and Libovick{\'y, Jind{\v{rich and Fraser, Alexander
Findings of the Association for Computational Linguistics: ACL 2022
2316--2329
Static and contextual multilingual embeddings have complementary strengths. Static embeddings, while less expressive than contextual language models, can be more straightforwardly aligned across multiple languages. We combine the strengths of static and contextual models to improve multilingual representations. We extract static embeddings for 40 languages from XLM-R, validate those embeddings with cross-lingual word retrieval, and then align them using VecMap. This results in high-quality, highly multilingual static embeddings. Then we apply a novel continued pre-training approach to XLM-R, leveraging the high quality alignment of our static embeddings to better align the representation space of XLM-R. We show positive results for multiple complex semantic tasks. We release the static embeddings and the continued pre-training code. Unlike most previous work, our continued pre-training approach does not require parallel text.
null
null
10.18653/v1/2022.findings-acl.182
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,107
inproceedings
luo-yu-2022-accurate
An Accurate Unsupervised Method for Joint Entity Alignment and Dangling Entity Detection
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.183/
Luo, Shengxuan and Yu, Sheng
Findings of the Association for Computational Linguistics: ACL 2022
2330--2339
Knowledge graph integration typically suffers from the widely existing dangling entities that cannot find alignment across knowledge graphs (KGs). The dangling entity set is unavailable in most real-world scenarios, and manually mining the entity pairs that consist of entities with the same meaning is labor-consuming. In this paper, we propose a novel accurate Unsupervised method for joint Entity alignment (EA) and Dangling entity detection (DED), called UED. The UED mines the literal semantic information to generate pseudo entity pairs and globally guided alignment information for EA and then utilizes the EA results to assist the DED. We construct a medical cross-lingual knowledge graph dataset, MedED, providing data for both the EA and DED tasks. Extensive experiments demonstrate that in the EA task, UED achieves EA results comparable to those of state-of-the-art supervised EA baselines and outperforms the current state-of-the-art EA methods by combining supervised EA data. For the DED task, UED obtains high-quality results without supervision.
null
null
10.18653/v1/2022.findings-acl.183
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,108
inproceedings
ruder-etal-2022-square
Square One Bias in {NLP}: Towards a Multi-Dimensional Exploration of the Research Manifold
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.184/
Ruder, Sebastian and Vuli{\'c}, Ivan and S{\o}gaard, Anders
Findings of the Association for Computational Linguistics: ACL 2022
2340--2354
The prototypical NLP experiment trains a standard architecture on labeled English data and optimizes for accuracy, without accounting for other dimensions such as fairness, interpretability, or computational efficiency. We show through a manual classification of recent NLP research papers that this is indeed the case and refer to it as the square one experimental setup. We observe that NLP research often goes beyond the square one setup, e.g., focusing not only on accuracy, but also on fairness or interpretability, but typically only along a single dimension. Most work targeting multilinguality, for example, considers only accuracy; most work on fairness or interpretability considers only English; and so on. Such one-dimensionality of most research means we are only exploring a fraction of the NLP research search space. We provide historical and recent examples of how the square one bias has led researchers to draw false conclusions or make unwise choices, point to promising yet unexplored directions on the research manifold, and make practical recommendations to enable more multi-dimensional research. We open-source the results of our annotations to enable further analysis.
null
null
10.18653/v1/2022.findings-acl.184
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,109
inproceedings
manino-etal-2022-systematicity
Systematicity, Compositionality and Transitivity of Deep {NLP} Models: a Metamorphic Testing Perspective
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.185/
Manino, Edoardo and Rozanova, Julia and Carvalho, Danilo and Freitas, Andre and Cordeiro, Lucas
Findings of the Association for Computational Linguistics: ACL 2022
2355--2366
Metamorphic testing has recently been used to check the safety of neural NLP models. Its main advantage is that it does not rely on a ground truth to generate test cases. However, existing studies are mostly concerned with robustness-like metamorphic relations, limiting the scope of linguistic properties they can test. We propose three new classes of metamorphic relations, which address the properties of systematicity, compositionality and transitivity. Unlike robustness, our relations are defined over multiple source inputs, thus increasing the number of test cases that we can produce by a polynomial factor. With them, we test the internal consistency of state-of-the-art NLP models, and show that they do not always behave according to their expected linguistic properties. Lastly, we introduce a novel graphical notation that efficiently summarises the inner structure of metamorphic relations.
null
null
10.18653/v1/2022.findings-acl.185
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,110
inproceedings
dayanik-etal-2022-improving
Improving Neural Political Statement Classification with Class Hierarchical Information
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.186/
Dayanik, Erenay and Blessing, Andre and Blokker, Nico and Haunss, Sebastian and Kuhn, Jonas and Lapesa, Gabriella and Pado, Sebastian
Findings of the Association for Computational Linguistics: ACL 2022
2367--2382
Many tasks in text-based computational social science (CSS) involve the classification of political statements into categories based on a domain-specific codebook. In order to be useful for CSS analysis, these categories must be fine-grained. The typically skewed distribution of fine-grained categories, however, results in a challenging classification problem on the NLP side. This paper proposes to make use of the hierarchical relations among categories typically present in such codebooks: e.g., markets and taxation are both subcategories of economy, while borders is a subcategory of security. We use these ontological relations as prior knowledge to establish additional constraints on the learned model, thus improving performance overall and in particular for infrequent categories. We evaluate several lightweight variants of this intuition by extending state-of-the-art transformer-based text classifiers on two datasets and multiple languages. We find the most consistent improvement for an approach based on regularization.
null
null
10.18653/v1/2022.findings-acl.186
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,111
inproceedings
dai-etal-2022-enabling
Enabling Multimodal Generation on {CLIP} via Vision-Language Knowledge Distillation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.187/
Dai, Wenliang and Hou, Lu and Shang, Lifeng and Jiang, Xin and Liu, Qun and Fung, Pascale
Findings of the Association for Computational Linguistics: ACL 2022
2383--2395
The recent large-scale vision-language pre-training (VLP) of dual-stream architectures (e.g., CLIP) with a tremendous amount of image-text pair data has shown its superiority on various multimodal alignment tasks. Despite its success, the resulting models are not capable of multimodal generative tasks due to the weak text encoder. To tackle this problem, we propose to augment the dual-stream VLP model with a textual pre-trained language model (PLM) via vision-language knowledge distillation (VLKD), enabling the capability for multimodal generation. VLKD is pretty data- and computation-efficient compared to the pre-training from scratch. Experimental results show that the resulting model has strong zero-shot performance on multimodal generation tasks, such as open-ended visual question answering and image captioning. For example, it achieves 44.5{\%} zero-shot accuracy on the VQAv2 dataset, surpassing the previous state-of-the-art zero-shot model with $7\times$ fewer parameters. Furthermore, the original textual language understanding and generation ability of the PLM is maintained after VLKD, which makes our model versatile for both multimodal and unimodal tasks.
null
null
10.18653/v1/2022.findings-acl.187
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,112
inproceedings
wang-etal-2022-co
Co-{VQA} : Answering by Interactive Sub Question Sequence
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.188/
Wang, Ruonan and Qian, Yuxi and Feng, Fangxiang and Wang, Xiaojie and Jiang, Huixing
Findings of the Association for Computational Linguistics: ACL 2022
2396--2408
Most existing approaches to Visual Question Answering (VQA) answer questions directly, however, people usually decompose a complex question into a sequence of simple sub questions and finally obtain the answer to the original question after answering the sub question sequence (SQS). By simulating the process, this paper proposes a conversation-based VQA (Co-VQA) framework, which consists of three components: Questioner, Oracle, and Answerer. Questioner raises the sub questions using an extending HRED model, and Oracle answers them one-by-one. An Adaptive Chain Visual Reasoning Model (ACVRM) for Answerer is also proposed, where the question-answer pair is used to update the visual representation sequentially. To perform supervised learning for each model, we introduce a well-designed method to build a SQS for each question on VQA 2.0 and VQA-CP v2 datasets. Experimental results show that our method achieves state-of-the-art on VQA-CP v2. Further analyses show that SQSs help build direct semantic connections between questions and images, provide question-adaptive variable-length reasoning chains, and with explicit interpretability as well as error traceability.
null
null
10.18653/v1/2022.findings-acl.188
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,113
inproceedings
sun-etal-2022-simple
A Simple Hash-Based Early Exiting Approach For Language Understanding and Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.189/
Sun, Tianxiang and Liu, Xiangyang and Zhu, Wei and Geng, Zhichao and Wu, Lingling and He, Yilong and Ni, Yuan and Xie, Guotong and Huang, Xuanjing and Qiu, Xipeng
Findings of the Association for Computational Linguistics: ACL 2022
2409--2421
Early exiting allows instances to exit at different layers according to the estimation of difficulty. Previous works usually adopt heuristic metrics such as the entropy of internal outputs to measure instance difficulty, which suffers from generalization and threshold-tuning. In contrast, learning to exit, or learning to predict instance difficulty is a more appealing way. Though some effort has been devoted to employing such {\textquotedblleft}learn-to-exit{\textquotedblright} modules, it is still unknown whether and how well the instance difficulty can be learned. As a response, we first conduct experiments on the learnability of instance difficulty, which demonstrates that modern neural models perform poorly on predicting instance difficulty. Based on this observation, we propose a simple-yet-effective Hash-based Early Exiting approach (HashEE) that replaces the learn-to-exit modules with hash functions to assign each token to a fixed exiting layer. Different from previous methods, HashEE requires no internal classifiers nor extra parameters, and therefore is more efficient. HashEE can be used in various tasks (including language understanding and generation) and model architectures such as seq2seq models. Experimental results on classification, regression, and generation tasks demonstrate that HashEE can achieve higher performance with fewer FLOPs and inference time compared with previous state-of-the-art early exiting methods.
null
null
10.18653/v1/2022.findings-acl.189
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,114
inproceedings
candito-2022-auxiliary
Auxiliary tasks to boost Biaffine Semantic Dependency Parsing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.190/
Candito, Marie
Findings of the Association for Computational Linguistics: ACL 2022
2422--2429
The biaffine parser of (CITATION) was successfully extended to semantic dependency parsing (SDP) (CITATION). Its performance on graphs is surprisingly high given that, without the constraint of producing a tree, all arcs for a given sentence are predicted independently from each other (modulo a shared representation of tokens). To circumvent such an independence of decision, while retaining the $O(n^2)$ complexity and highly parallelizable architecture, we propose to use simple auxiliary tasks that introduce some form of interdependence between arcs. Experiments on the three English acyclic datasets of SemEval-2015 task 18 (CITATION), and on French deep syntactic cyclic graphs (CITATION) show modest but systematic performance gains on a near-state-of-the-art baseline using transformer-based contextualized representations. This provides a simple and robust method to boost SDP performance.
null
null
10.18653/v1/2022.findings-acl.190
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,115
inproceedings
zhang-etal-2022-syntax
Syntax-guided Contrastive Learning for Pre-trained Language Model
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.191/
Zhang, Shuai and Lijie, Wang and Xiao, Xinyan and Wu, Hua
Findings of the Association for Computational Linguistics: ACL 2022
2430--2440
Syntactic information has been proved to be useful for transformer-based pre-trained language models. Previous studies often rely on additional syntax-guided attention components to enhance the transformer, which require more parameters and additional syntactic parsing in downstream tasks. This increase in complexity severely limits the application of syntax-enhanced language model in a wide range of scenarios. In order to inject syntactic knowledge effectively and efficiently into pre-trained language models, we propose a novel syntax-guided contrastive learning method which does not change the transformer architecture. Based on constituency and dependency structures of syntax trees, we design phrase-guided and tree-guided contrastive objectives, and optimize them in the pre-training stage, so as to help the pre-trained language model to capture rich syntactic knowledge in its representations. Experimental results show that our contrastive method achieves consistent improvements in a variety of tasks, including grammatical error detection, entity tasks, structural probing and GLUE. Detailed analysis further verifies that the improvements come from the utilization of syntactic information, and the learned attention weights are more explainable in terms of linguistics.
null
null
10.18653/v1/2022.findings-acl.191
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,116
inproceedings
chalkidis-sogaard-2022-improved
Improved Multi-label Classification under Temporal Concept Drift: Rethinking Group-Robust Algorithms in a Label-Wise Setting
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.192/
Chalkidis, Ilias and S{\o}gaard, Anders
Findings of the Association for Computational Linguistics: ACL 2022
2441--2454
In document classification for, e.g., legal and biomedical text, we often deal with hundreds of classes, including very infrequent ones, as well as temporal concept drift caused by the influence of real world events, e.g., policy changes, conflicts, or pandemics. Class imbalance and drift can sometimes be mitigated by resampling the training data to simulate (or compensate for) a known target distribution, but what if the target distribution is determined by unknown future events? Instead of simply resampling uniformly to hedge our bets, we focus on the underlying optimization algorithms used to train such document classifiers and evaluate several group-robust optimization algorithms, initially proposed to mitigate group-level disparities. Reframing group-robust algorithms as adaptation algorithms under concept drift, we find that Invariant Risk Minimization and Spectral Decoupling outperform sampling-based approaches to class imbalance and concept drift, and lead to much better performance on minority classes. The effect is more pronounced the larger the label set.
null
null
10.18653/v1/2022.findings-acl.192
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,117
inproceedings
wang-etal-2022-ascm
{ASCM}: An Answer Space Clustered Prompting Method without Answer Engineering
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.193/
Wang, Zhen and Yang, Yating and Xi, Zhou and Ma, Bo and Wang, Lei and Dong, Rui and Anwar, Azmat
Findings of the Association for Computational Linguistics: ACL 2022
2455--2469
Prompt-based learning, which exploits knowledge from pre-trained language models by providing textual prompts and designing appropriate answer-category mapping methods, has achieved impressive successes on few-shot text classification and natural language inference (NLI). Because of the diverse linguistic expression, there exist many answer tokens for the same category. However, both manual answer design and automatic answer search constrain answer space and therefore hardly achieve ideal performance. To address this issue, we propose an answer space clustered prompting model (ASCM) together with a synonym initialization method (SI) which automatically categorizes all answer tokens in a semantic-clustered embedding space. We also propose a stable semi-supervised method named stair learning (SL) that orderly distills knowledge from better models to weaker models. Extensive experiments demonstrate that our ASCM+SL significantly outperforms existing state-of-the-art techniques in few-shot settings.
null
null
10.18653/v1/2022.findings-acl.193
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,118
inproceedings
libovicky-etal-2022-dont
Why don't people use character-level machine translation?
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.194/
Libovick{\'y}, Jind{\v{r}}ich and Schmid, Helmut and Fraser, Alexander
Findings of the Association for Computational Linguistics: ACL 2022
2470--2485
We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT). Despite evidence in the literature that character-level systems are comparable with subword systems, they are virtually never used in competitive setups in WMT competitions. We empirically show that even with recent modeling innovations in character-level natural language processing, character-level MT systems still struggle to match their subword-based counterparts. Character-level MT systems show neither better domain robustness, nor better morphological generalization, despite being often so motivated. However, we are able to show robustness towards source side noise and that translation quality does not degrade with increasing beam size at decoding time.
null
null
10.18653/v1/2022.findings-acl.194
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,119
inproceedings
li-etal-2022-seeking
Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.195/
Li, Zhongli and Zhang, Wenxuan and Yan, Chao and Zhou, Qingyu and Li, Chao and Liu, Hongzhi and Cao, Yunbo
Findings of the Association for Computational Linguistics: ACL 2022
2486--2496
Math Word Problem (MWP) solving needs to discover the quantitative relationships over natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we look at this issue and argue that the cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns only from semantics, and observe that, if the prototype equations are the same, most problems get closer representations and those representations apart from them or close to other prototypes tend to produce wrong solutions. Inspired by it, we propose a contrastive learning approach, where the neural network perceives the divergence of patterns. We collect contrastive examples by converting the prototype equation into a tree and seeking similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, resulting in the representations of problems with similar prototypes being pulled closer. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method greatly improves the performance in monolingual and multilingual settings.
null
null
10.18653/v1/2022.findings-acl.195
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,120
inproceedings
pfeiffer-etal-2022-xgqa
x{GQA}: Cross-Lingual Visual Question Answering
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.196/
Pfeiffer, Jonas and Geigle, Gregor and Kamath, Aishwarya and Steitz, Jan-Martin O. and Roth, Stefan and Vuli{\'c}, Ivan and Gurevych, Iryna
Findings of the Association for Computational Linguistics: ACL 2022
2497--2511
Recent advances in multimodal vision and language modeling have predominantly focused on the English language, mostly due to the lack of multilingual multimodal datasets to steer modeling efforts. In this work, we address this gap and provide xGQA, a new multilingual evaluation benchmark for the visual question answering task. We extend the established English GQA dataset to 7 typologically diverse languages, enabling us to detect and explore crucial challenges in cross-lingual visual question answering. We further propose new adapter-based approaches to adapt multimodal transformer-based models to become multilingual, and{---}vice versa{---}multilingual models to become multimodal. Our proposed methods outperform current state-of-the-art multilingual multimodal models (e.g., M3P) in zero-shot cross-lingual settings, but the accuracy remains low across the board; a performance drop of around 38 accuracy points in target languages showcases the difficulty of zero-shot cross-lingual transfer for this task. Our results suggest that simple cross-lingual transfer of multimodal models yields latent multilingual multimodal misalignment, calling for more sophisticated methods for vision and multilingual language modeling.
null
null
10.18653/v1/2022.findings-acl.196
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,121
inproceedings
macaire-etal-2022-automatic
Automatic Speech Recognition and Query By Example for Creole Languages Documentation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.197/
Macaire, C{\'e}cile and Schwab, Didier and Lecouteux, Benjamin and Schang, Emmanuel
Findings of the Association for Computational Linguistics: ACL 2022
2512--2520
We investigate the exploitation of self-supervised models for two Creole languages with few resources: Gwadloup{\'e}yen and Morisien. Automatic language processing tools are almost non-existent for these two languages. We propose to use about one hour of annotated data to design an automatic speech recognition system for each language. We evaluate how much data is needed to obtain a query-by-example system that is usable by linguists. Moreover, our experiments show that multilingual self-supervised models are not necessarily the most efficient for Creole languages.
null
null
10.18653/v1/2022.findings-acl.197
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,122
inproceedings
shen-etal-2022-mred
{MR}e{D}: A Meta-Review Dataset for Structure-Controllable Text Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.198/
Shen, Chenhui and Cheng, Liying and Zhou, Ran and Bing, Lidong and You, Yang and Si, Luo
Findings of the Association for Computational Linguistics: ACL 2022
2521--2535
When directly using existing text generation datasets for controllable generation, we are facing the problem of not having the domain knowledge and thus the aspects that could be controlled are limited. A typical example is when using the CNN/Daily Mail dataset for controllable text summarization, there is no guided information on the emphasis of summary sentences. A more useful text generator should leverage both the input text and the control signal to guide the generation, which can only be built with deep understanding of the domain knowledge. Motivated by this vision, our paper introduces a new text generation dataset, named MReD. Our new dataset consists of 7,089 meta-reviews and all its 45k meta-review sentences are manually annotated with one of the 9 carefully defined categories, including abstract, strength, decision, etc. We present experimental results on state-of-the-art summarization models, and propose methods for structure-controlled generation with both extractive and abstractive models using our annotated data. By exploring various settings and analyzing the model behavior with respect to the control signal, we demonstrate the challenges of our proposed task and the values of our dataset MReD. Meanwhile, MReD also allows us to have a better understanding of the meta-review domain.
null
null
10.18653/v1/2022.findings-acl.198
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,123
inproceedings
takase-etal-2022-single
Single Model Ensemble for Subword Regularized Models in Low-Resource Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.199/
Takase, Sho and Hiraoka, Tatsuya and Okazaki, Naoaki
Findings of the Association for Computational Linguistics: ACL 2022
2536--2541
Subword regularizations use multiple subword segmentations during training to improve the robustness of neural machine translation models. In previous subword regularizations, we use multiple segmentations in the training process but use only one segmentation in the inference. In this study, we propose an inference strategy to address this discrepancy. The proposed strategy approximates the marginalized likelihood by using multiple segmentations including the most plausible segmentation and several sampled segmentations. Because the proposed strategy aggregates predictions from several segmentations, we can regard it as a single model ensemble that does not require any additional cost for training. Experimental results show that the proposed strategy improves the performance of models trained with subword regularization in low-resource machine translation tasks.
null
null
10.18653/v1/2022.findings-acl.199
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,124
inproceedings
herold-etal-2022-detecting
Detecting Various Types of Noise for Neural Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.200/
Herold, Christian and Rosendahl, Jan and Vanvinckenroye, Joris and Ney, Hermann
Findings of the Association for Computational Linguistics: ACL 2022
2542--2551
The filtering and/or selection of training data is one of the core aspects to be considered when building a strong machine translation system. In their influential work, Khayrallah and Koehn (2018) investigated the impact of different types of noise on the performance of machine translation systems. In the same year the WMT introduced a shared task on parallel corpus filtering, which went on to be repeated in the following years, and resulted in many different filtering approaches being proposed. In this work we aim to combine the recent achievements in data filtering with the original analysis of Khayrallah and Koehn (2018) and investigate whether state-of-the-art filtering systems are capable of removing all the suggested noise types. We observe that most of these types of noise can be detected with an accuracy of over 90{\%} by modern filtering systems when operating in a well studied high resource setting. However, we also find that when confronted with more refined noise categories or when working with a less common language pair, the performance of the filtering systems is far from optimal, showing that there is still room for improvement in this area of research.
null
null
10.18653/v1/2022.findings-acl.200
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,125
inproceedings
huang-etal-2022-du
{DU}-{VLG}: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.201/
Huang, Luyang and Niu, Guocheng and Liu, Jiachen and Xiao, Xinyan and Wu, Hua
Findings of the Association for Computational Linguistics: ACL 2022
2552--2566
Due to the limitations of the model structure and pre-training objectives, existing vision-and-language generation models cannot utilize pair-wise images and text through bi-directional generation. In this paper, we propose DU-VLG, a framework which unifies vision-and-language generation as sequence generation problems. DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks. To bridge the gap between image understanding and generation, we further design a novel commitment loss. We compare pre-training objectives on image captioning and text-to-image generation datasets. Results show that DU-VLG yields better performance than variants trained with uni-directional generation objectives or the variant without the commitment loss. We also obtain higher scores compared to previous state-of-the-art systems on three vision-and-language generation tasks. In addition, human judges further confirm that our model generates real and relevant images as well as faithful and informative captions.
null
null
10.18653/v1/2022.findings-acl.201
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,126
inproceedings
li-etal-2022-hiclre
{H}i{CLRE}: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.202/
Li, Dongyang and Zhang, Taolin and Hu, Nan and Wang, Chengyu and He, Xiaofeng
Findings of the Association for Computational Linguistics: ACL 2022
2567--2578
Distant supervision assumes that any sentence containing the same entity pairs reflects identical relationships. Previous works on the distantly supervised relation extraction (DSRE) task generally focus on sentence-level or bag-level de-noising techniques independently, neglecting the explicit interaction across levels. In this paper, we propose a hierarchical contrastive learning Framework for Distantly Supervised relation extraction (HiCLRE) to reduce noisy sentences, which integrates the global structural information and local fine-grained interaction. Specifically, we propose a three-level hierarchical learning framework to interact across levels, generating the de-noising context-aware representations via adapting the existing multi-head self-attention, named Multi-Granularity Recontextualization. Meanwhile, pseudo positive samples are also provided in the specific level for contrastive learning via a dynamic gradient-based data augmentation strategy, named Dynamic Gradient Adversarial Perturbation. Experiments demonstrate that HiCLRE significantly outperforms strong baselines in various mainstream DSRE datasets.
null
null
10.18653/v1/2022.findings-acl.202
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,127
inproceedings
li-etal-2022-prompt
Prompt-Driven Neural Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.203/
Li, Yafu and Yin, Yongjing and Li, Jing and Zhang, Yue
Findings of the Association for Computational Linguistics: ACL 2022
2579--2590
Neural machine translation (NMT) has obtained significant performance improvement over the recent years. However, NMT models still face various challenges including fragility and lack of style flexibility. Moreover, current methods for instance-level constraints are limited in that they are either constraint-specific or model-specific. To this end, we propose prompt-driven neural machine translation to incorporate prompts for enhancing translation control and enriching flexibility. Empirical results demonstrate the effectiveness of our method in both prompt responding and translation quality. Through human evaluation, we further show the flexibility of prompt control and the efficiency in human-in-the-loop translation.
null
null
10.18653/v1/2022.findings-acl.203
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,128
inproceedings
lu-etal-2022-controlling
On Controlling Fallback Responses for Grounded Dialogue Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.204/
Lu, Hongyuan and Lam, Wai and Cheng, Hong and Meng, Helen
Findings of the Association for Computational Linguistics: ACL 2022
2591--2601
Dialogue agents can leverage external textual knowledge to generate responses of a higher quality. To the best of our knowledge, most existing works on knowledge grounded dialogue settings assume that the user intention is always answerable. Unfortunately, this is impractical as there is no guarantee that the knowledge retrievers could always retrieve the desired knowledge. Therefore, it is crucial to incorporate fallback responses to respond to unanswerable contexts appropriately while responding to the answerable contexts in an informative manner. We propose a novel framework that automatically generates a control token with the generator to bias the succeeding response towards informativeness for answerable contexts and fallback for unanswerable contexts in an end-to-end manner. Since no existing knowledge grounded dialogue dataset considers this aim, we augment the existing dataset with unanswerable contexts to conduct our experiments. Automatic and human evaluation results indicate that naively incorporating fallback responses with controlled text generation still hurts informativeness for answerable context. In contrast, our proposed framework effectively mitigates this problem while still appropriately presenting fallback responses to unanswerable contexts. Such a framework also reduces the extra burden of the additional classifier and the overheads introduced in the previous works, which operate in a pipeline manner.
null
null
10.18653/v1/2022.findings-acl.204
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,129
inproceedings
ates-etal-2022-craft
{CRAFT}: A Benchmark for Causal Reasoning About Forces and in{T}eractions
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.205/
Ates, Tayfun and Ate{\c{s}}o{\u{g}}lu, M. and Yi{\u{g}}it, {\c{C}}a{\u{g}}atay and Kesen, Ilker and Kobas, Mert and Erdem, Erkut and Erdem, Aykut and Goksun, Tilbe and Yuret, Deniz
Findings of the Association for Computational Linguistics: ACL 2022
2602--2627
Humans are able to perceive, understand and reason about causal events. Developing models with similar physical and causal understanding capabilities is a long-standing goal of artificial intelligence. As a step towards this direction, we introduce CRAFT, a new video question answering dataset that requires causal reasoning about physical forces and object interactions. It contains 58K video and question pairs that are generated from 10K videos from 20 different virtual environments, containing various objects in motion that interact with each other and the scene. Two question categories in CRAFT include previously studied descriptive and counterfactual questions. Additionally, inspired by the Force Dynamics Theory in cognitive linguistics, we introduce a new causal question category that involves understanding the causal interactions between objects through notions like cause, enable, and prevent. Our results show that even though the questions in CRAFT are easy for humans, the tested baseline models, including existing state-of-the-art methods, do not yet deal with the challenges posed in our benchmark.
null
null
10.18653/v1/2022.findings-acl.205
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,130
inproceedings
du-etal-2022-graph
A Graph Enhanced {BERT} Model for Event Prediction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.206/
Du, Li and Ding, Xiao and Zhang, Yue and Liu, Ting and Qin, Bing
Findings of the Association for Computational Linguistics: ACL 2022
2628--2638
Predicting the subsequent event for an existing event context is an important but challenging task, as it requires understanding the underlying relationship between events. Previous methods propose to retrieve relational features from an event graph to enhance the modeling of event correlation. However, the sparsity of the event graph may restrict the acquisition of relevant graph information, and hence influence the model performance. To address this issue, we consider automatically building an event graph using a BERT model. To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process. Hence, in the test process, the connection relationship for unseen events can be predicted by the structured variable. Results on two event prediction tasks, script event prediction and story ending prediction, show that our approach can outperform state-of-the-art baseline methods.
null
null
10.18653/v1/2022.findings-acl.206
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,131
inproceedings
xu-etal-2022-long
Long Time No See! Open-Domain Conversation with Long-Term Persona Memory
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.207/
Xu, Xinchao and Gou, Zhibin and Wu, Wenquan and Niu, Zheng-Yu and Wu, Hua and Wang, Haifeng and Wang, Shihang
Findings of the Association for Computational Linguistics: ACL 2022
2639--2650
Most open-domain dialogue models tend to perform poorly in the setting of long-term human-bot conversations. A likely reason is that they lack the capability of understanding and memorizing long-term dialogue history information. To address this issue, we present a novel task of Long-term Memory Conversation (LeMon) and then build a new dialogue dataset DuLeMon and a dialogue generation framework with a Long-Term Memory (LTM) mechanism (called PLATO-LTM). This LTM mechanism enables our system to accurately extract and continuously update long-term persona memory without requiring multiple-session dialogue datasets for model training. To our knowledge, this is the first attempt to conduct real-time dynamic management of persona information of both parties, including the user and the bot. Results on DuLeMon indicate that PLATO-LTM can significantly outperform baselines in terms of long-term dialogue consistency, leading to better dialogue engagingness.
null
null
10.18653/v1/2022.findings-acl.207
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,132
inproceedings
ruzzetti-etal-2022-lacking
Lacking the Embedding of a Word? Look it up into a Traditional Dictionary
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.208/
Ruzzetti, Elena Sofia and Ranaldi, Leonardo and Mastromattei, Michele and Fallucchi, Francesca and Scarpato, Noemi and Zanzotto, Fabio Massimo
Findings of the Association for Computational Linguistics: ACL 2022
2651--2662
Word embeddings are powerful dictionaries, which may easily capture language variations. However, these dictionaries fail to give sense to rare words, which are surprisingly often covered by traditional dictionaries. In this paper, we propose to use definitions retrieved from traditional dictionaries to produce word embeddings for rare words. For this purpose, we introduce two methods: Definition Neural Network (DefiNNet) and Define BERT (DefBERT). In our experiments, DefiNNet and DefBERT significantly outperform state-of-the-art as well as baseline methods devised for producing embeddings of unknown words. In fact, DefiNNet significantly outperforms FastText, which implements a method for the same task based on n-grams, and DefBERT significantly outperforms the BERT method for OOV words. Hence, definitions in traditional dictionaries are useful for building word embeddings for rare words.
null
null
10.18653/v1/2022.findings-acl.208
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,133
inproceedings
bi-etal-2022-mtrec
{MTR}ec: Multi-Task Learning over {BERT} for News Recommendation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.209/
Bi, Qiwei and Li, Jian and Shang, Lifeng and Jiang, Xin and Liu, Qun and Yang, Hanfang
Findings of the Association for Computational Linguistics: ACL 2022
2663--2669
Existing news recommendation methods usually learn news representations solely based on news titles. To sufficiently utilize other fields of news information such as category and entities, some methods treat each field as an additional feature and combine different feature vectors with attentive pooling. With the adoption of large pre-trained models like BERT in news recommendation, the above way to incorporate multi-field information may encounter challenges: the shallow feature encoding to compress the category and entity information is not compatible with the deep BERT encoding. In this paper, we propose a multi-task method to incorporate the multi-field information into BERT, which improves its news encoding capability. Besides, we modify the gradients of auxiliary tasks based on their gradient conflicts with the main task, which further boosts the model performance. Extensive experiments on the MIND news recommendation benchmark show the effectiveness of our approach.
null
null
10.18653/v1/2022.findings-acl.209
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,134
inproceedings
zheng-etal-2022-cross
Cross-domain Named Entity Recognition via Graph Matching
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.210/
Zheng, Junhao and Chen, Haibin and Ma, Qianli
Findings of the Association for Computational Linguistics: ACL 2022
2670--2680
Cross-domain NER is a practical yet challenging problem due to data scarcity in real-world scenarios. A common practice is to first learn an NER model in a rich-resource general domain and then adapt the model to specific domains. Due to the mismatch between entity types across domains, the wide knowledge in the general domain cannot be effectively transferred to the target domain NER model. To this end, we model the label relationship as a probability distribution and construct label graphs in both the source and target label spaces. To enhance the contextual representation with label structures, we fuse the label graph into the word embedding output by BERT. By representing label relationships as graphs, we formulate cross-domain NER as a graph matching problem. Furthermore, the proposed method has good applicability with pre-training methods and is potentially capable of other cross-domain prediction tasks. Empirical results on four datasets show that our method outperforms a series of transfer learning, multi-task learning, and few-shot learning methods.
null
null
10.18653/v1/2022.findings-acl.210
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,135
inproceedings
wang-etal-2022-assessing
Assessing Multilingual Fairness in Pre-trained Multimodal Representations
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.211/
Wang, Jialu and Liu, Yang and Wang, Xin
Findings of the Association for Computational Linguistics: ACL 2022
2681--2695
Recently pre-trained multimodal models, such as CLIP, have shown exceptional capabilities towards connecting images and natural language. The textual representations in English can be desirably transferred to multilingualism and support downstream multimodal tasks for different languages. Nevertheless, the principle of multilingual fairness is rarely scrutinized: do multilingual multimodal models treat languages equally? Are their performances biased towards particular languages? To answer these questions, we view language as the fairness recipient and introduce two new fairness notions, multilingual individual fairness and multilingual group fairness, for pre-trained multimodal models. Multilingual individual fairness requires that text snippets expressing similar semantics in different languages connect similarly to images, while multilingual group fairness requires equalized predictive performance across languages. We characterize the extent to which pre-trained multilingual vision-and-language representations are individually fair across languages. However, extensive experiments demonstrate that multilingual representations do not satisfy group fairness: (1) there is a severe multilingual accuracy disparity issue; (2) the errors exhibit biases across languages conditioning the group of people in the images, including race, gender and age.
null
null
10.18653/v1/2022.findings-acl.211
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,136
inproceedings
cheevaprawatdomrong-etal-2022-words
More Than Words: Collocation Retokenization for {L}atent {D}irichlet {A}llocation Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.212/
Cheevaprawatdomrong, Jin and Schofield, Alexandra and Rutherford, Attapol
Findings of the Association for Computational Linguistics: ACL 2022
2696--2704
Traditionally, Latent Dirichlet Allocation (LDA) ingests words in a collection of documents to discover their latent topics using word-document co-occurrences. Previous studies show that representing bigram collocations in the input can improve topic coherence in English. However, it is unclear how to achieve the best results for languages without marked word boundaries such as Chinese and Thai. Here, we explore the use of retokenization based on chi-squared measures, $t$-statistics, and raw frequency to merge frequent token n-grams into collocations when preparing input to the LDA model. Based on the goodness of fit and the coherence metric, we show that topics trained with merged tokens result in topic keys that are clearer, more coherent, and more effective at distinguishing topics than those of unmerged models.
null
null
10.18653/v1/2022.findings-acl.212
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,137
inproceedings
gokhale-etal-2022-generalized
\textit{Generalized but not Robust?} Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.213/
Gokhale, Tejas and Mishra, Swaroop and Luo, Man and Sachdeva, Bhavdeep and Baral, Chitta
Findings of the Association for Computational Linguistics: ACL 2022
2705--2718
Data modification, whether via additional training datasets, data augmentation, debiasing, or dataset filtering, has been proposed as an effective solution for generalizing to out-of-domain (OOD) inputs, in both the natural language processing and computer vision literature. However, the effect of data modification on adversarial robustness remains unclear. In this work, we conduct a comprehensive study of common data modification strategies and evaluate not only their in-domain and OOD performance, but also their adversarial robustness (AR). We also present results on a two-dimensional synthetic dataset to visualize the effect of each method on the training distribution. This work serves as an empirical study towards understanding the relationship between generalizing to unseen domains and defending against adversarial perturbations. Our findings suggest that more data (either via additional datasets or data augmentation) benefits both OOD accuracy and AR. However, data filtering (previously shown to improve OOD accuracy on natural language inference) hurts OOD accuracy on other tasks such as question answering and image classification. We provide insights from our experiments to inform future work in this direction.
null
null
10.18653/v1/2022.findings-acl.213
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,138
inproceedings
ye-etal-2022-assist
{ASSIST}: Towards Label Noise-Robust Dialogue State Tracking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.214/
Ye, Fanghua and Feng, Yue and Yilmaz, Emine
Findings of the Association for Computational Linguistics: ACL 2022
2719--2731
The MultiWOZ 2.0 dataset has greatly boosted the research on dialogue state tracking (DST). However, substantial noise has been discovered in its state annotations. Such noise brings about huge challenges for training DST models robustly. Although several refined versions, including MultiWOZ 2.1-2.4, have been published recently, there are still lots of noisy labels, especially in the training set. Besides, it is costly to rectify all the problematic annotations. In this paper, instead of improving the annotation quality further, we propose a general framework, named ASSIST (lAbel noiSe-robuSt dIalogue State Tracking), to train DST models robustly from noisy labels. ASSIST first generates pseudo labels for each sample in the training set by using an auxiliary model trained on a small clean dataset, then puts the generated pseudo labels and vanilla noisy labels together to train the primary model. We show the validity of ASSIST theoretically. Experimental results also demonstrate that ASSIST improves the joint goal accuracy of DST by up to 28.16{\%} on MultiWOZ 2.0 and 8.41{\%} on MultiWOZ 2.4, compared to using only the vanilla noisy labels.
null
null
10.18653/v1/2022.findings-acl.214
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,139
inproceedings
miculicich-henderson-2022-graph
Graph Refinement for Coreference Resolution
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.215/
Miculicich, Lesly and Henderson, James
Findings of the Association for Computational Linguistics: ACL 2022
2732--2742
The state-of-the-art models for coreference resolution are based on independent mention pair-wise decisions. We propose a modelling approach that learns coreference at the document level and takes global decisions. For this purpose, we model coreference links in a graph structure where the nodes are tokens in the text, and the edges represent the relationship between them. Our model predicts the graph in a non-autoregressive manner, then iteratively refines it based on previous predictions, allowing global dependencies between decisions. The experimental results show improvements over various baselines, reinforcing the hypothesis that document-level information improves coreference resolution.
null
null
10.18653/v1/2022.findings-acl.215
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,140
inproceedings
xu-etal-2022-eco
{ECO} v1: Towards Event-Centric Opinion Mining
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.216/
Xu, Ruoxi and Lin, Hongyu and Liao, Meng and Han, Xianpei and Xu, Jin and Tan, Wei and Sun, Yingfei and Sun, Le
Findings of the Association for Computational Linguistics: ACL 2022
2743--2753
Events are considered the fundamental building blocks of the world. Mining event-centric opinions can benefit decision making, people communication, and social good. Unfortunately, there is little literature addressing event-centric opinion mining, which significantly diverges from the well-studied entity-centric opinion mining in connotation, structure, and expression. In this paper, we propose and formulate the task of event-centric opinion mining based on event-argument structure and expression categorizing theory. We also benchmark this task by constructing a pioneer corpus and designing a two-step benchmark framework. Experiment results show that event-centric opinion mining is feasible and challenging, and the proposed task, dataset, and baselines are beneficial for future studies.
null
null
10.18653/v1/2022.findings-acl.216
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,141
inproceedings
guo-etal-2022-deep
Deep Reinforcement Learning for Entity Alignment
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.217/
Guo, Lingbing and Han, Yuqiang and Zhang, Qiang and Chen, Huajun
Findings of the Association for Computational Linguistics: ACL 2022
2754--2765
Embedding-based methods have attracted increasing attention in recent entity alignment (EA) studies. Although they offer great promise, there are still several limitations. The most notable is that they identify the aligned entities based on cosine similarity, ignoring the semantics underlying the embeddings themselves. Furthermore, these methods are shortsighted, heuristically selecting the closest entity as the target and allowing multiple entities to match the same candidate. To address these limitations, we model entity alignment as a sequential decision-making task, in which an agent sequentially decides whether two entities are matched or mismatched based on their representation vectors. The proposed reinforcement learning (RL)-based entity alignment framework can be flexibly adapted to most embedding-based EA methods. The experimental results demonstrate that it consistently advances the performance of several state-of-the-art methods, with a maximum improvement of 31.1{\%} on Hits@1.
null
null
10.18653/v1/2022.findings-acl.217
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,142
inproceedings
chiang-etal-2022-breaking
Breaking Down Multilingual Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.218/
Chiang, Ting-Rui and Chen, Yi-Pei and Yeh, Yi-Ting and Neubig, Graham
Findings of the Association for Computational Linguistics: ACL 2022
2766--2780
While multilingual training is now an essential ingredient in machine translation (MT) systems, recent work has demonstrated that it has different effects in different multilingual settings, such as many-to-one, one-to-many, and many-to-many learning. These training settings expose the encoder and the decoder of a machine translation model to different data distributions. In this paper, we examine how different varieties of multilingual training contribute to learning these two components of the MT model. Specifically, we compare bilingual models with encoders and/or decoders initialized by multilingual training. We show that multilingual training is beneficial to encoders in general, while it only benefits decoders for low-resource languages (LRLs). We further find the important attention heads for each language pair and compare their correlations during inference. Our analysis sheds light on how multilingual translation models work and also enables us to propose methods to improve performance by training with highly related languages. Our many-to-one models for high-resource languages and one-to-many models for LRLs outperform the best results reported by Aharoni et al. (2019).
null
null
10.18653/v1/2022.findings-acl.218
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,143
inproceedings
li-etal-2022-mitigating
Mitigating Contradictions in Dialogue Based on Contrastive Learning
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.219/
Li, Weizhao and Kong, Junsheng and Liao, Ben and Cai, Yi
Findings of the Association for Computational Linguistics: ACL 2022
2781--2788
Chatbot models have achieved remarkable progress in recent years but tend to yield contradictory responses. In this paper, we exploit the advantage of contrastive learning technique to mitigate this issue. To endow the model with the ability of discriminating contradictory patterns, we minimize the similarity between the target response and contradiction related negative example. The negative example is generated with learnable latent noise, which receives contradiction related feedback from the pretrained critic. Experimental results show that our method helps to avoid contradictions in response generation while preserving response fluency, outperforming existing methods on both automatic and human evaluation.
null
null
10.18653/v1/2022.findings-acl.219
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,144
inproceedings
qin-etal-2022-elle
{ELLE}: Efficient Lifelong Pre-training for Emerging Data
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.220/
Qin, Yujia and Zhang, Jiajie and Lin, Yankai and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie
Findings of the Association for Computational Linguistics: ACL 2022
2789--2810
Current pre-trained language models (PLM) are typically trained with static data, ignoring that in real-world scenarios, streaming data of various sources may continuously grow. This requires PLMs to integrate the information from all the sources in a lifelong manner. Although this goal could be achieved by exhaustive pre-training on all the existing data, such a process is known to be computationally expensive. To this end, we propose ELLE, aiming at efficient lifelong pre-training for emerging data. Specifically, ELLE consists of (1) function preserved model expansion, which flexibly expands an existing PLM`s width and depth to improve the efficiency of knowledge acquisition; and (2) pre-trained domain prompts, which disentangle the versatile knowledge learned during pre-training and stimulate the proper knowledge for downstream tasks. We experiment with ELLE using streaming data from 5 domains on BERT and GPT. The results show the superiority of ELLE over various lifelong learning baselines in both pre-training efficiency and downstream performances. The codes are publicly available at \url{https://github.com/thunlp/ELLE}.
null
null
10.18653/v1/2022.findings-acl.220
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,145
inproceedings
ma-etal-2022-encbp
{E}n{CBP}: A New Benchmark Dataset for Finer-Grained Cultural Background Prediction in {E}nglish
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.221/
Ma, Weicheng and Datta, Samiha and Wang, Lili and Vosoughi, Soroush
Findings of the Association for Computational Linguistics: ACL 2022
2811--2823
While cultural backgrounds have been shown to affect linguistic expressions, existing natural language processing (NLP) research on culture modeling is overly coarse-grained and does not examine cultural differences among speakers of the same language. To address this problem and augment NLP models with cultural background features, we collect, annotate, manually validate, and benchmark EnCBP, a finer-grained news-based cultural background prediction dataset in English. Through language modeling (LM) evaluations and manual analyses, we confirm that there are noticeable differences in linguistic expressions among five English-speaking countries and across four states in the US. Additionally, our evaluations on nine syntactic (CoNLL-2003), semantic (PAWS-Wiki, QNLI, STS-B, and RTE), and psycholinguistic tasks (SST-5, SST-2, Emotion, and Go-Emotions) show that, while introducing cultural background information does not benefit the Go-Emotions task due to text domain conflicts, it noticeably improves deep learning (DL) model performance on other tasks. Our findings strongly support the importance of cultural background modeling to a wide variety of NLP tasks and demonstrate the applicability of EnCBP in culture-related research.
null
null
10.18653/v1/2022.findings-acl.221
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,146
inproceedings
logan-iv-etal-2022-cutting
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.222/
Logan IV, Robert and Balazevic, Ivana and Wallace, Eric and Petroni, Fabio and Singh, Sameer and Riedel, Sebastian
Findings of the Association for Computational Linguistics: ACL 2022
2824--2835
Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning. In this work, we show that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering. In fact, one can use null prompts, prompts that contain neither task-specific templates nor training examples, and achieve competitive accuracy to manually-tuned prompts across a wide range of tasks. While finetuning LMs does introduce new parameters for each downstream task, we show that this memory overhead can be substantially reduced: finetuning only the bias terms can achieve comparable or better accuracy than standard finetuning while only updating 0.1{\%} of the parameters. All in all, we recommend finetuning LMs for few-shot learning as it is more accurate, robust to different prompts, and can be made nearly as efficient as using frozen LMs.
null
null
10.18653/v1/2022.findings-acl.222
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,147
inproceedings
anders-etal-2022-ufact
u{FACT}: Unfaithful Alien-Corpora Training for Semantically Consistent Data-to-Text Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.223/
Anders, Tisha and Coca, Alexandru and Byrne, Bill
Findings of the Association for Computational Linguistics: ACL 2022
2836--2841
We propose uFACT (Un-Faithful Alien Corpora Training), a training corpus construction method for data-to-text (d2t) generation models. We show that d2t models trained on uFACT datasets generate utterances which represent the semantic content of the data sources more accurately compared to models trained on the target corpus alone. Our approach is to augment the training set of a given target corpus with alien corpora which have different semantic representations. We show that while it is important to have faithful data from the target corpus, the faithfulness of additional corpora only plays a minor role. Consequently, uFACT datasets can be constructed with large quantities of unfaithful data. We show how uFACT can be leveraged to obtain state-of-the-art results on the WebNLG benchmark using METEOR as our performance metric. Furthermore, we investigate the sensitivity of the generation faithfulness to the training corpus structure using the PARENT metric, and provide a baseline for this metric on the WebNLG (Gardent et al., 2017) benchmark to facilitate comparisons with future work.
null
null
10.18653/v1/2022.findings-acl.223
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,148
inproceedings
shwartz-2022-good
Good Night at 4 pm?! Time Expressions in Different Cultures
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.224/
Shwartz, Vered
Findings of the Association for Computational Linguistics: ACL 2022
2842--2853
We propose the task of culture-specific time expression grounding, i.e. mapping from expressions such as {\textquotedblleft}morning{\textquotedblright} in English or {\textquotedblleft}Manh{\~a}{\textquotedblright} in Portuguese to specific hours in the day. We propose 3 language-agnostic methods, one of which achieves promising results on gold standard annotations that we collected for a small number of languages. We then apply this method to 27 languages and analyze the similarities across languages in the grounding of time expressions.
null
null
10.18653/v1/2022.findings-acl.224
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,149
inproceedings
li-etal-2022-extracting
Extracting Person Names from User Generated Text: Named-Entity Recognition for Combating Human Trafficking
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.225/
Li, Yifei and Nair, Pratheeksha and Pelrine, Kellin and Rabbany, Reihaneh
Findings of the Association for Computational Linguistics: ACL 2022
2854--2868
Online escort advertisement websites are widely used for advertising victims of human trafficking. Domain experts agree that advertising multiple people in the same ad is a strong indicator of trafficking. Thus, extracting person names from the text of these ads can provide valuable clues for further analysis. However, Named-Entity Recognition (NER) on escort ads is challenging because the text can be noisy, colloquial and often lacking proper grammar and punctuation. Most existing state-of-the-art NER models fail to demonstrate satisfactory performance in this task. In this paper, we propose NEAT (Name Extraction Against Trafficking) for extracting person names. It effectively combines classic rule-based and dictionary extractors with a contextualized language model to capture ambiguous names (e.g., penny, hazel) and adapts to adversarial changes in the text by expanding its dictionary. NEAT shows a 19{\%} improvement on average in the F1 classification score for name extraction compared to the previous state-of-the-art on two domain-specific datasets.
null
null
10.18653/v1/2022.findings-acl.225
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,150
inproceedings
niu-etal-2022-onealigner
{O}ne{A}ligner: Zero-shot Cross-lingual Transfer with One Rich-Resource Language Pair for Low-Resource Sentence Retrieval
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.226/
Niu, Tong and Hashimoto, Kazuma and Zhou, Yingbo and Xiong, Caiming
Findings of the Association for Computational Linguistics: ACL 2022
2869--2882
Aligning parallel sentences in multilingual corpora is essential to curating data for downstream applications such as Machine Translation. In this work, we present OneAligner, an alignment model specially designed for sentence retrieval tasks. This model is able to train on only one language pair and transfers, in a cross-lingual fashion, to low-resource language pairs with negligible degradation in performance. When trained with all language pairs of a large-scale parallel multilingual corpus (OPUS-100), this model achieves the state-of-the-art result on the Tatoeba dataset, outperforming an equally-sized previous model by 8.0 points in accuracy while using less than 0.6{\%} of their parallel data. When finetuned on a single rich-resource language pair, be it English-centered or not, our model is able to match the performance of the ones finetuned on all language pairs under the same data budget with less than 2.0 points decrease in accuracy. Furthermore, with the same setup, scaling up the number of rich-resource language pairs monotonically improves the performance, reaching a minimum of 0.4 points discrepancy in accuracy, making it less mandatory to collect any low-resource parallel data. Finally, we conclude through empirical results and analyses that the performance of the sentence alignment task depends mostly on the monolingual and parallel data size, up to a certain size threshold, rather than on what language pairs are used for training or evaluation.
null
null
10.18653/v1/2022.findings-acl.226
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,151
inproceedings
khalid-etal-2022-suum
Suum Cuique: Studying Bias in Taboo Detection with a Community Perspective
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.227/
Khalid, Osama and Rusert, Jonathan and Srinivasan, Padmini
Findings of the Association for Computational Linguistics: ACL 2022
2883--2896
Prior research has discussed and illustrated the need to consider linguistic norms at the community level when studying taboo (hateful/offensive/toxic etc.) language. However, a methodology for doing so, one that is firmly founded on community language norms, is still largely absent. This can lead both to biases in taboo text classification and to limitations in our understanding of the causes of bias. We propose a method to study bias in taboo classification and annotation where a community perspective is front and center. This is accomplished by using special classifiers tuned for each community`s language. In essence, these classifiers represent community level language norms. We use these to study bias and find, for example, biases are largest against African Americans (7/10 datasets and all 3 classifiers examined). In contrast to previous papers we also study other communities and find, for example, strong biases against South Asians. In a small scale user study we illustrate our key idea, which is that common utterances, i.e., those with high alignment scores with a community (community classifier confidence scores), are unlikely to be regarded as taboo. Annotators who are community members contradict taboo classification decisions and annotations in a majority of instances. This paper is a significant step toward reducing false positive taboo decisions that over time harm minority communities.
null
null
10.18653/v1/2022.findings-acl.227
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,152
inproceedings
inan-etal-2022-modeling
Modeling Intensification for Sign Language Generation: A Computational Approach
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.228/
Inan, Mert and Zhong, Yang and Hassan, Sabit and Quandt, Lorna and Alikhani, Malihe
Findings of the Association for Computational Linguistics: ACL 2022
2897--2911
End-to-end sign language generation models do not accurately represent the prosody in sign language. A lack of temporal and spatial variations leads to poor-quality generated presentations that confuse human interpreters. In this paper, we aim to improve the prosody in generated sign languages by modeling intensification in a data-driven manner. We present different strategies grounded in linguistics of sign language that inform how intensity modifiers can be represented in gloss annotations. To employ our strategies, we first annotate a subset of the benchmark PHOENIX-14T, a German Sign Language dataset, with different levels of intensification. We then use a supervised intensity tagger to extend the annotated dataset and obtain labels for the remaining portion of it. This enhanced dataset is then used to train state-of-the-art transformer models for sign language generation. We find that our efforts in intensification modeling yield better results when evaluated with automatic metrics. Human evaluation also indicates a higher preference of the videos generated using our model.
null
null
10.18653/v1/2022.findings-acl.228
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,153
inproceedings
qian-etal-2022-controllable
Controllable Natural Language Generation with Contrastive Prefixes
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.229/
Qian, Jing and Dong, Li and Shen, Yelong and Wei, Furu and Chen, Weizhu
Findings of the Association for Computational Linguistics: ACL 2022
2912--2924
To guide the generation of large pretrained language models (LM), previous work has focused on directly fine-tuning the language model or utilizing an attribute discriminator. In this work, we propose a novel lightweight framework for controllable GPT2 generation, which utilizes a set of small attribute-specific vectors, called prefixes (Li and Liang, 2021), to steer natural language generation. Different from Li and Liang (2021), where each prefix is trained independently, we take the relationship among prefixes into consideration and train multiple prefixes simultaneously. We propose a novel supervised method and also an unsupervised method to train the prefixes for single-aspect control while the combination of these two methods can achieve multi-aspect control. Experimental results on both single-aspect and multi-aspect control show that our methods can guide generation towards the desired attributes while keeping high linguistic quality.
null
null
10.18653/v1/2022.findings-acl.229
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,154
inproceedings
krasner-etal-2022-revisiting
Revisiting the Effects of Leakage on Dependency Parsing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.230/
Krasner, Nathaniel and Wanner, Miriam and Anastasopoulos, Antonios
Findings of the Association for Computational Linguistics: ACL 2022
2925--2934
Recent work by S{\o}gaard (2020) showed that, treebank size aside, overlap between training and test graphs (termed \textit{leakage}) explains more of the observed variation in dependency parsing performance than other explanations. In this work we revisit this claim, testing it on more models and languages. We find that it only holds for zero-shot cross-lingual settings. We then propose a more fine-grained measure of such leakage which, unlike the original measure, not only explains but also correlates with observed performance variation. Code and data are available here: \url{https://github.com/miriamwanner/reu-nlp-project}
null
null
10.18653/v1/2022.findings-acl.230
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,155
inproceedings
panthaplackel-etal-2022-learning
Learning to Describe Solutions for Bug Reports Based on Developer Discussions
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.231/
Panthaplackel, Sheena and Li, Junyi Jessy and Gligoric, Milos and Mooney, Ray
Findings of the Association for Computational Linguistics: ACL 2022
2935--2952
When a software bug is reported, developers engage in a discussion to collaboratively resolve it. While the solution is likely formulated within the discussion, it is often buried in a large amount of text, making it difficult to comprehend and delaying its implementation. To expedite bug resolution, we propose generating a concise natural language description of the solution by synthesizing relevant content within the discussion, which encompasses both natural language and source code. We build a corpus for this task using a novel technique for obtaining noisy supervision from repository changes linked to bug reports, with which we establish benchmarks. We also design two systems for generating a description during an ongoing discussion by classifying when sufficient context for performing the task emerges in real-time. With automated and human evaluation, we find this task to form an ideal testbed for complex reasoning in long, bimodal dialogue context.
null
null
10.18653/v1/2022.findings-acl.231
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,156
inproceedings
le-etal-2022-perturbations
Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.232/
Le, Thai and Lee, Jooyoung and Yen, Kevin and Hu, Yifan and Lee, Dongwon
Findings of the Association for Computational Linguistics: ACL 2022
2953--2965
We propose a novel algorithm, ANTHRO, that inductively extracts over 600K human-written text perturbations in the wild and leverages them for realistic adversarial attack. Unlike existing character-based attacks which often deductively hypothesize a set of manipulation strategies, our work is grounded on actual observations from real-world texts. We find that adversarial texts generated by ANTHRO achieve the best trade-off between (1) attack success rate, (2) semantic preservation of the original text, and (3) stealthiness{--}i.e. indistinguishable from human writings hence harder to be flagged as suspicious. Specifically, our attacks accomplished around 83{\%} and 91{\%} attack success rates on BERT and RoBERTa, respectively. Moreover, it outperformed the TextBugger baseline with an increase of 50{\%} and 40{\%} in terms of semantic preservation and stealthiness when evaluated by both layperson and professional human workers. ANTHRO can further enhance a BERT classifier`s performance in understanding different variations of human-written toxic texts via adversarial training when compared to the Perspective API.
null
null
10.18653/v1/2022.findings-acl.232
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,157
inproceedings
yue-etal-2022-improving
Improving {C}hinese Grammatical Error Detection via Data augmentation by Conditional Error Generation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.233/
Yue, Tianchi and Liu, Shulin and Cai, Huihui and Yang, Tao and Song, Shengkang and Yu, TingHao
Findings of the Association for Computational Linguistics: ACL 2022
2966--2975
Chinese Grammatical Error Detection (CGED) aims at detecting grammatical errors in Chinese texts. One of the main challenges for CGED is the lack of annotated data. To alleviate this problem, previous studies proposed various methods to automatically generate more training samples, which can be roughly categorized into rule-based methods and model-based methods. The rule-based methods construct erroneous sentences by directly introducing noise into original sentences. However, the introduced noises are usually context-independent, which are quite different from those made by humans. The model-based methods utilize generative models to imitate human errors. The generative model may bring too many changes to the original sentences and generate semantically ambiguous sentences, so it is difficult to detect grammatical errors in these generated sentences. In addition, generated sentences may be error-free and thus become noisy data. To handle these problems, we propose CNEG, a novel Conditional Non-Autoregressive Error Generation model for generating Chinese grammatical errors. Specifically, in order to generate a context-dependent error, we first mask a span in a correct text, then predict an erroneous span conditioned on both the masked text and the correct span. Furthermore, we filter out error-free spans by measuring their perplexities in the original sentences. Experimental results show that our proposed method achieves better performance than all compared data augmentation methods on the CGED-2018 and CGED-2020 benchmarks.
null
null
10.18653/v1/2022.findings-acl.233
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,158
inproceedings
liang-etal-2022-modular
Modular and Parameter-Efficient Multimodal Fusion with Prompting
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.234/
Liang, Sheng and Zhao, Mengjie and Schuetze, Hinrich
Findings of the Association for Computational Linguistics: ACL 2022
2976--2985
Recent research has made impressive progress in large-scale multimodal pre-training. In the context of the rapid growth of model size, it is necessary to seek efficient and flexible methods other than finetuning. In this paper, we propose to use prompt vectors to align the modalities. Our method achieves comparable performance to several other multimodal fusion methods in low-resource settings. We further show that our method is modular and parameter-efficient for processing tasks involving two or more data modalities.
null
null
10.18653/v1/2022.findings-acl.234
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,159
inproceedings
chen-etal-2022-synchronous
Synchronous Refinement for Neural Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.235/
Chen, Kehai and Utiyama, Masao and Sumita, Eiichiro and Wang, Rui and Zhang, Min
Findings of the Association for Computational Linguistics: ACL 2022
2986--2996
Machine translation typically adopts an encoder-to-decoder framework, in which the decoder generates the target sentence word-by-word in an auto-regressive manner. However, the auto-regressive decoder faces a deep-rooted $one$-$pass$ issue whereby each generated word is considered as one element of the final output regardless of whether it is correct or not. These generated wrong words further constitute the target historical context to affect the generation of subsequent target words. This paper proposes a novel synchronous refinement method to revise potential errors in the generated words by considering part of the target future context. Particularly, the proposed approach allows the auto-regressive decoder to refine the previously generated target words and generate the next target word synchronously. The experimental results on three widely-used machine translation tasks demonstrated the effectiveness of the proposed approach.
null
null
10.18653/v1/2022.findings-acl.235
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,160
inproceedings
zheng-etal-2022-hie
{HIE}-{SQL}: History Information Enhanced Network for Context-Dependent Text-to-{SQL} Semantic Parsing
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.236/
Zheng, Yanzhao and Wang, Haibin and Dong, Baohua and Wang, Xingjun and Li, Changshan
Findings of the Association for Computational Linguistics: ACL 2022
2997--3007
Recently, context-dependent text-to-SQL semantic parsing, which translates natural language into SQL in an interaction process, has attracted a lot of attention. Previous works leverage context dependence information either from interaction history utterances or previously predicted queries, but fail to take advantage of both of them because of the mismatch between natural language and logic-form SQL. In this work, we propose a History Information Enhanced text-to-SQL model (HIE-SQL) to exploit context dependence information from both history utterances and the last predicted SQL query. In view of the mismatch, we treat natural language and SQL as two modalities and propose a bimodal pre-trained model to bridge the gap between them. Besides, we design a schema-linking graph to enhance connections from utterances and the SQL query to the database schema. We show our history information enhanced methods improve the performance of HIE-SQL by a significant margin, achieving new state-of-the-art results on two context-dependent text-to-SQL benchmarks, the SparC and CoSQL datasets, at the time of writing.
null
null
10.18653/v1/2022.findings-acl.236
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,161
inproceedings
liu-etal-2022-craspell
{CRAS}pell: A Contextual Typo Robust Approach to Improve {C}hinese Spelling Correction
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.237/
Liu, Shulin and Song, Shengkang and Yue, Tianchi and Yang, Tao and Cai, Huihui and Yu, TingHao and Sun, Shengli
Findings of the Association for Computational Linguistics: ACL 2022
3008--3018
Recently, Bert-based models have dominated the research of Chinese spelling correction (CSC). These methods have two limitations: (1) they have poor performance on multi-typo texts. In such texts, the context of each typo contains at least one misspelled character, which brings noise information. Such noisy context leads to the declining performance on multi-typo texts. (2) they tend to overcorrect valid expressions to more frequent expressions due to the masked token recovering task of Bert. We attempt to address these limitations in this paper. To make our model robust to contextual noise brought by typos, our approach first constructs a noisy context for each training sample. Then the correction model is forced to yield similar outputs based on the noisy and original contexts. Moreover, to address the overcorrection problem, copy mechanism is incorporated to encourage our model to prefer to choose the input character when the miscorrected and input character are both valid according to the given context. Experiments are conducted on widely used benchmarks. Our model achieves superior performance against state-of-the-art methods by a remarkable gain.
null
null
10.18653/v1/2022.findings-acl.237
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,162
inproceedings
zhang-feng-2022-gaussian
{G}aussian Multi-head Attention for Simultaneous Machine Translation
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.238/
Zhang, Shaolei and Feng, Yang
Findings of the Association for Computational Linguistics: ACL 2022
3019--3030
Simultaneous machine translation (SiMT) outputs translation while receiving the streaming source inputs, and hence needs a policy to determine where to start translating. The alignment between target and source words often implies the most informative source word for each target word, and hence provides the unified control over translation quality and latency, but unfortunately the existing SiMT methods do not explicitly model the alignment to perform the control. In this paper, we propose Gaussian Multi-head Attention (GMA) to develop a new SiMT policy by modeling alignment and translation in a unified manner. For SiMT policy, GMA models the aligned source position of each target word, and accordingly waits until its aligned position to start translating. To integrate the learning of alignment into the translation model, a Gaussian distribution centered on predicted aligned position is introduced as an alignment-related prior, which cooperates with translation-related soft attention to determine the final attention. Experiments on En-Vi and De-En tasks show that our method outperforms strong baselines on the trade-off between translation and latency.
null
null
10.18653/v1/2022.findings-acl.238
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,163
inproceedings
waldis-etal-2022-composing
Composing Structure-Aware Batches for Pairwise Sentence Classification
Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.findings-acl.239/
Waldis, Andreas and Beck, Tilman and Gurevych, Iryna
Findings of the Association for Computational Linguistics: ACL 2022
3031--3045
Identifying the relation between two sentences requires datasets with pairwise annotations. In many cases, these datasets contain instances that are annotated multiple times as part of different pairs. They constitute a structure that contains additional helpful information about the inter-relatedness of the text instances based on the annotations. This paper investigates how this kind of structural dataset information can be exploited during training. We propose three batch composition strategies to incorporate such information and measure their performance over 14 heterogeneous pairwise sentence classification tasks. Our results show statistically significant improvements (up to 3.9{\%}) - independent of the pre-trained language model - for most tasks compared to baselines that follow a standard training procedure. Further, we see that even this baseline procedure can profit from having such structural information in a low-resource setting.
null
null
10.18653/v1/2022.findings-acl.239
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,164