Dataset schema (one bibliography record per row; column, type, and observed values or range):

column               type      values / range
entry_type           string    4 distinct values
citation_key         string    10-110 chars
title                string    6-276 chars
editor               string    723 distinct values
month                string    69 distinct values
year                 date      1963-01-01 to 2022-01-01
address              string    202 distinct values
publisher            string    41 distinct values
url                  string    34-62 chars
author               string    6-2,070 chars
booktitle            string    861 distinct values
pages                string    1-12 chars
abstract             string    302-2,400 chars
journal              string    5 distinct values
volume               string    24 distinct values
doi                  string    20-39 chars
n                    string    3 distinct values
wer                  string    1 distinct value
uas                  null      always empty
language             string    3 distinct values
isbn                 string    34 distinct values
recall               null      always empty
number               string    8 distinct values
a                    null      always empty
b                    null      always empty
c                    null      always empty
k                    null      always empty
f1                   string    4 distinct values
r                    string    2 distinct values
mci                  string    1 distinct value
p                    string    2 distinct values
sd                   string    1 distinct value
female               string    0 distinct values
m                    string    0 distinct values
food                 string    1 distinct value
f                    string    1 distinct value
note                 string    20 distinct values
__index_level_0__    int64     approx. 22,000 to 106,000
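Each record below is one row against this schema, listed field by field in a fixed order: entry type, citation key, title, volume editors, month, year, address, publisher, URL, authors, booktitle, pages, abstract, DOI, and the original dataframe index; the metric-style columns (n, wer, f1, and so on) are empty for these rows. As a quick orientation aid, here is a minimal sketch of how such a dump could be loaded and one row serialized back into a BibTeX entry. It assumes the rows live in a pandas-readable file; the path anthology.parquet and the field list are illustrative assumptions, not something the dataset itself specifies.

```python
import pandas as pd

# Fields to keep in the serialized entry; names follow the schema above.
BIB_FIELDS = ["title", "author", "editor", "booktitle", "month", "year",
              "address", "publisher", "url", "doi", "pages", "abstract"]

def row_to_bibtex(row: pd.Series) -> str:
    """Serialize one dataset row back into a BibTeX entry, skipping empty fields."""
    body = ",\n".join(
        f"  {name} = {{{row[name]}}}"
        for name in BIB_FIELDS
        if name in row and pd.notna(row[name])
    )
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

df = pd.read_parquet("anthology.parquet")  # hypothetical path
naacl_2022 = df[df["url"].str.startswith("https://aclanthology.org/2022.naacl-main", na=False)]
print(row_to_bibtex(naacl_2022.iloc[0]))
```

Applied to the first record below, this would print an @inproceedings entry keyed anuchitanukul-ive-2022-surf with the title, authors, and abstract shown there.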
inproceedings
anuchitanukul-ive-2022-surf
{SURF}: Semantic-level Unsupervised Reward Function for Machine Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.334/
Anuchitanukul, Atijit and Ive, Julia
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4508--4522
The performance of Reinforcement Learning (RL) for natural language tasks including Machine Translation (MT) is crucially dependent on the reward formulation. This is due to the intrinsic difficulty of the task in the high-dimensional discrete action space as well as the sparseness of the standard reward functions defined for a limited set of ground-truth sequences biased towards singular lexical choices. To address this issue, we formulate SURF, a maximally dense semantic-level unsupervised reward function which mimics human evaluation by considering both sentence fluency and semantic similarity. We demonstrate the strong potential of SURF to leverage a family of Actor-Critic Transformer-based Architectures with synchronous and asynchronous multi-agent variants. To tackle the problem of large action-state spaces, each agent is equipped with unique exploration strategies, promoting diversity during its exploration of the hypothesis space. When BLEU scores are compared, our dense unsupervised reward outperforms the standard sparse reward by 2{\%} on average for in- and out-of-domain settings.
10.18653/v1/2022.naacl-main.334
24,033
inproceedings
garcia-etal-2022-disentangling
Disentangling Categorization in Multi-agent Emergent Communication
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.335/
Garcia, Washington and Clouse, Hamilton and Butler, Kevin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4523--4540
The emergence of language between artificial agents is a recent focus of computational linguistics, as it offers a synthetic substrate for reasoning about human language evolution. From the perspective of cognitive science, sophisticated categorization in humans is thought to enable reasoning about novel observations, and thus compose old information to describe new phenomena. Unfortunately, the literature to date has not managed to isolate the effect of categorization power in artificial agents on their inter-communication ability, particularly on novel, unseen objects. In this work, we propose the use of disentangled representations from representation learning to quantify the categorization power of agents, enabling a differential analysis between combinations of heterogeneous systems, e.g., pairs of agents which learn to communicate despite mismatched concept realization. Through this approach, we observe that agent heterogeneity can cut signaling accuracy by up to 40{\%}, despite encouraging compositionality in the artificial language. We conclude that the reasoning process of agents plays a key role in their communication, with unexpected benefits arising from their mixing, such as better language compositionality.
10.18653/v1/2022.naacl-main.335
24,034
inproceedings
gupta-etal-2022-show
Show, Don't Tell: Demonstrations Outperform Descriptions for Schema-Guided Task-Oriented Dialogue
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.336/
Gupta, Raghav and Lee, Harrison and Zhao, Jeffrey and Cao, Yuan and Rastogi, Abhinav and Wu, Yonghui
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4541--4549
Building universal dialogue systems that operate across multiple domains/APIs and generalize to new ones with minimal overhead is a critical challenge. Recent works have leveraged natural language descriptions of schema elements to enable such systems; however, descriptions only indirectly convey schema semantics. In this work, we propose Show, Don't Tell, which prompts seq2seq models with a labeled example dialogue to show the semantics of schema elements rather than tell the model through descriptions. While requiring similar effort from service developers as generating descriptions, we show that using short examples as schema representations with large language models results in state-of-the-art performance on two popular dialogue state tracking benchmarks designed to measure zero-shot generalization - the Schema-Guided Dialogue dataset and the MultiWOZ leave-one-out benchmark.
10.18653/v1/2022.naacl-main.336
24,035
inproceedings
porada-etal-2022-pre
Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.337/
Porada, Ian and Sordoni, Alessandro and Cheung, Jackie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4550--4557
Transformer models pre-trained with a masked-language-modeling objective (e.g., BERT) encode commonsense knowledge as evidenced by behavioral probes; however, the extent to which this knowledge is acquired by systematic inference over the semantics of the pre-training corpora is an open question. To answer this question, we selectively inject verbalized knowledge into the pre-training minibatches of BERT and evaluate how well the model generalizes to supported inferences after pre-training on the injected knowledge. We find generalization does not improve over the course of pre-training BERT from scratch, suggesting that commonsense knowledge is acquired from surface-level, co-occurrence patterns rather than induced, systematic reasoning.
10.18653/v1/2022.naacl-main.337
24,036
inproceedings
burdick-etal-2022-using
Using Paraphrases to Study Properties of Contextual Embeddings
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.338/
Burdick, Laura and Kummerfeld, Jonathan K. and Mihalcea, Rada
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4558--4568
We use paraphrases as a unique source of data to analyze contextualized embeddings, with a particular focus on BERT. Because paraphrases naturally encode consistent word and phrase semantics, they provide a unique lens for investigating properties of embeddings. Using the Paraphrase Database's alignments, we study words within paraphrases as well as phrase representations. We find that contextual embeddings effectively handle polysemous words, but give synonyms surprisingly different representations in many cases. We confirm previous findings that BERT is sensitive to word order, but find slightly different patterns than prior work in terms of the level of contextualization across BERT's layers.
10.18653/v1/2022.naacl-main.338
24,037
inproceedings
wang-etal-2022-measure
Measure and Improve Robustness in {NLP} Models: A Survey
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.339/
Wang, Xuezhi and Wang, Haohan and Yang, Diyi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4569--4586
As NLP models achieved state-of-the-art performances over benchmarks and gained wide applications, it has been increasingly important to ensure the safe deployment of these models in the real world, e.g., making sure the models are robust against unseen or challenging scenarios. Despite robustness being an increasingly studied topic, it has been separately explored in applications like vision and NLP, with various definitions, evaluation and mitigation strategies in multiple lines of research. In this paper, we aim to provide a unifying survey of how to define, measure and improve robustness in NLP. We first connect multiple definitions of robustness, then unify various lines of work on identifying robustness failures and evaluating models' robustness. Correspondingly, we present mitigation strategies that are data-driven, model-driven, and inductive-prior-based, with a more systematic view of how to effectively improve robustness in NLP models. Finally, we conclude by outlining open challenges and future directions to motivate further research in this area.
10.18653/v1/2022.naacl-main.339
24,038
inproceedings
croce-etal-2022-learning
Learning to Generate Examples for Semantic Processing Tasks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.340/
Croce, Danilo and Filice, Simone and Castellucci, Giuseppe and Basili, Roberto
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4587--4601
Even if recent Transformer-based architectures, such as BERT, achieved impressive results in semantic processing tasks, their fine-tuning stage still requires large scale training resources. Usually, Data Augmentation (DA) techniques can help to deal with low resource settings. In Text Classification tasks, the objective of DA is the generation of well-formed sentences that i) represent the desired task category and ii) are novel with respect to existing sentences. In this paper, we propose a neural approach to automatically learn to generate new examples using a pre-trained sequence-to-sequence model. We first learn a task-oriented similarity function that we use to pair similar examples. Then, we use these example pairs to train a model to generate examples. Experiments in low resource settings show that augmenting the training material with the proposed strategy systematically improves the results on text classification and natural language inference tasks by up to 10{\%} accuracy, outperforming existing DA approaches.
10.18653/v1/2022.naacl-main.340
24,039
inproceedings
west-etal-2022-symbolic
Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.341/
West, Peter and Bhagavatula, Chandra and Hessel, Jack and Hwang, Jena and Jiang, Liwei and Le Bras, Ronan and Lu, Ximing and Welleck, Sean and Choi, Yejin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4602--4625
The common practice for training commonsense models has gone from{--}human{--}to{--}corpus{--}to{--}machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from{--}machine{--}to{--}corpus{--}to{--}machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al. 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically{--}as text{--}in addition to the neural model. We distill only one aspect{--}the commonsense of a general language model teacher, allowing the student to be a different type, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model's commonsense capabilities despite its 100x smaller size. We apply this to the ATOMIC resource, and will share our new symbolic knowledge graph and commonsense models.
10.18653/v1/2022.naacl-main.341
24,040
inproceedings
josifoski-etal-2022-genie
{G}en{IE}: Generative Information Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.342/
Josifoski, Martin and De Cao, Nicola and Peyrard, Maxime and Petroni, Fabio and West, Robert
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4626--4643
Structured and grounded representation of text is typically formalized by closed information extraction, the problem of extracting an exhaustive set of (subject, relation, object) triplets that are consistent with a predefined set of entities and relations from a knowledge base schema. Most existing works are pipelines prone to error accumulation, and all approaches are only applicable to unrealistically small numbers of entities and relations. We introduce GenIE (generative information extraction), the first end-to-end autoregressive formulation of closed information extraction. GenIE naturally exploits the language knowledge from the pre-trained transformer by autoregressively generating relations and entities in textual form. Thanks to a new bi-level constrained generation strategy, only triplets consistent with the predefined knowledge base schema are produced. Our experiments show that GenIE is state-of-the-art on closed information extraction, generalizes from fewer training data points than baselines, and scales to a previously unmanageable number of entities and relations. With this work, closed information extraction becomes practical in realistic scenarios, providing new opportunities for downstream tasks. Finally, this work paves the way towards a unified end-to-end approach to the core tasks of information extraction.
10.18653/v1/2022.naacl-main.342
24,041
inproceedings
agarwal-etal-2022-entity
Entity Linking via Explicit Mention-Mention Coreference Modeling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.343/
Agarwal, Dhruv and Angell, Rico and Monath, Nicholas and McCallum, Andrew
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4644--4658
Learning representations of entity mentions is a core component of modern entity linking systems for both candidate generation and making linking predictions. In this paper, we present and empirically analyze a novel training approach for learning mention and entity representations that is based on building minimum spanning arborescences (i.e., directed spanning trees) over mentions and entities across documents to explicitly model mention coreference relationships. We demonstrate the efficacy of our approach by showing significant improvements in both candidate generation recall and linking accuracy on the Zero-Shot Entity Linking dataset and MedMentions, the largest publicly available biomedical dataset. In addition, we show that our improvements in candidate generation yield higher quality re-ranking models downstream, setting a new SOTA result in linking accuracy on MedMentions. Finally, we demonstrate that our improved mention representations are also effective for the discovery of new entities via cross-document coreference.
10.18653/v1/2022.naacl-main.343
24,042
inproceedings
xu-etal-2022-massive
Massive-scale Decoding for Text Generation using Lattices
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.344/
Xu, Jiacheng and Jonnalagadda, Siddhartha and Durrett, Greg
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4659--4676
Conditional neural text generation models generate high-quality outputs, but often concentrate around a mode when what we really want is a diverse set of options. We present a search algorithm to construct lattices encoding a massive number of generation options. First, we restructure decoding as a best-first search, which explores the space differently than beam search and improves efficiency by avoiding pruning paths. Second, we revisit the idea of hypothesis recombination: we can identify pairs of similar generation candidates during search and merge them as an approximation. On both summarization and machine translation, we show that our algorithm encodes thousands of diverse options that remain grammatical and high-quality into one lattice. This algorithm provides a foundation for building downstream generation applications on top of massive-scale diverse outputs.
10.18653/v1/2022.naacl-main.344
24,043
inproceedings
sanagavarapu-etal-2022-disentangling
Disentangling Indirect Answers to Yes-No Questions in Real Conversations
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.345/
Sanagavarapu, Krishna and Singaraju, Jathin and Kakileti, Anusha and Kaza, Anirudh and Mathews, Aaron and Li, Helen and Brito, Nathan and Blanco, Eduardo
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4677--4695
In this paper, we explore the task of determining indirect answers to yes-no questions in real conversations. We work with transcripts of phone conversations in the Switchboard Dialog Act (SwDA) corpus and create SwDA-IndirectAnswers (SwDA-IA), a subset of SwDA consisting of all conversations containing a yes-no question with an indirect answer. We annotate the underlying direct answers to the yes-no questions (yes, probably yes, middle, probably no, or no). We show that doing so requires taking into account conversation context: the indirect answer alone is insufficient to determine the ground truth. Experimental results also show that taking into account context is beneficial. More importantly, our results demonstrate that existing corpora with synthetic indirect answers to yes-no questions are not beneficial when working with real conversations. Our best models outperform the majority baseline by a substantial margin, but the task remains a challenge (F1: 0.46).
10.18653/v1/2022.naacl-main.345
24,044
inproceedings
li-etal-2022-quantifying
Quantifying Adaptability in Pre-trained Language Models with 500 Tasks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.346/
Li, Belinda and Yu, Jane and Khabsa, Madian and Zettlemoyer, Luke and Halevy, Alon and Andreas, Jacob
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4696--4715
When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict the eventual performance of the model? In NLP, systematic features of LM generalization to individual examples are well characterized, but systematic aspects of LM adaptability to new tasks are not nearly as well understood. We present a large-scale empirical study of the features and limits of LM adaptability using a new benchmark, TaskBench500, built from 500 procedurally generated sequence modeling tasks. These tasks combine core aspects of language processing, including lexical semantics, sequence processing, memorization, logical reasoning, and world knowledge. Using TaskBench500, we evaluate three facets of adaptability, finding that: (1) adaptation procedures differ dramatically in their ability to memorize small datasets; (2) within a subset of task types, adaptation procedures exhibit compositional adaptability to complex tasks; and (3) failure to match training label distributions is explained by mismatches in the intrinsic difficulty of predicting individual labels. Our experiments show that adaptability to new tasks, like generalization to new examples, can be systematically described and understood, and we conclude with a discussion of additional aspects of adaptability that could be studied using the new benchmark.
10.18653/v1/2022.naacl-main.346
24,045
inproceedings
sen-etal-2022-counterfactually
Counterfactually Augmented Data and Unintended Bias: The Case of Sexism and Hate Speech Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.347/
Sen, Indira and Samory, Mattia and Wagner, Claudia and Augenstein, Isabelle
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4716--4726
Counterfactually Augmented Data (CAD) aims to improve out-of-domain generalizability, an indicator of model robustness. The improvement is credited to promoting core features of the construct over spurious artifacts that happen to correlate with it. Yet, over-relying on core features may lead to unintended model bias. Especially, construct-driven CAD{---}perturbations of core features{---}may induce models to ignore the context in which core features are used. Here, we test models for sexism and hate speech detection on challenging data: non-hate and non-sexist usage of identity and gendered terms. On these hard cases, models trained on CAD, especially construct-driven CAD, show higher false positive rates than models trained on the original, unperturbed data. Using a diverse set of CAD{---}construct-driven and construct-agnostic{---}reduces such unintended bias.
10.18653/v1/2022.naacl-main.347
24,046
inproceedings
lyu-etal-2022-study
A Study of the Attention Abnormality in Trojaned {BERT}s
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.348/
Lyu, Weimin and Zheng, Songzhu and Ma, Tengfei and Chen, Chao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4727--4741
Trojan attacks raise serious security concerns. In this paper, we investigate the underlying mechanism of Trojaned BERT models. We observe the attention focus drifting behavior of Trojaned models, i.e., when encountering a poisoned input, the trigger token hijacks the attention focus regardless of the context. We provide a thorough qualitative and quantitative analysis of this phenomenon, revealing insights into the Trojan mechanism. Based on the observation, we propose an attention-based Trojan detector to distinguish Trojaned models from clean ones. To the best of our knowledge, we are the first to analyze the Trojan mechanism and develop a Trojan detector based on the transformer's attention.
10.18653/v1/2022.naacl-main.348
24,047
inproceedings
zhao-etal-2022-epida
{EP}i{DA}: An Easy Plug-in Data Augmentation Framework for High Performance Text Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.349/
Zhao, Minyi and Zhang, Lu and Xu, Yi and Ding, Jiandong and Guan, Jihong and Zhou, Shuigeng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4742--4752
Recent works have empirically shown the effectiveness of data augmentation (DA) in NLP tasks, especially for those suffering from data scarcity. Intuitively, given the size of generated data, their diversity and quality are crucial to the performance of targeted tasks. However, to the best of our knowledge, most existing methods consider only either the diversity or the quality of augmented data, thus cannot fully mine the potential of DA for NLP. In this paper, we present an easy and plug-in data augmentation framework EPiDA to support effective text classification. EPiDA employs two mechanisms: relative entropy maximization (REM) and conditional entropy minimization (CEM) to control data generation, where REM is designed to enhance the diversity of augmented data while CEM is exploited to ensure their semantic consistency. EPiDA can support efficient and continuous data generation for effective classifier training. Extensive experiments show that EPiDA outperforms existing SOTA methods in most cases, though not using any agent networks or pre-trained generation networks, and it works well with various DA algorithms and classification models.
10.18653/v1/2022.naacl-main.349
24,048
inproceedings
srikanth-rudinger-2022-partial
Partial-input baselines show that {NLI} models can ignore context, but they don't.
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.350/
Srikanth, Neha and Rudinger, Rachel
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4753--4763
When strong partial-input baselines reveal artifacts in crowdsourced NLI datasets, the performance of full-input models trained on such datasets is often dismissed as reliance on spurious correlations. We investigate whether state-of-the-art NLI models are capable of overriding default inferences made by a partial-input baseline. We introduce an evaluation set of 600 examples consisting of perturbed premises to examine a RoBERTa model's sensitivity to edited contexts. Our results indicate that NLI models are still capable of learning to condition on context{---}a necessary component of inferential reasoning{---}despite being trained on artifact-ridden datasets.
10.18653/v1/2022.naacl-main.350
24,049
inproceedings
jin-etal-2022-lifelong-pretraining
Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.351/
Jin, Xisen and Zhang, Dejiao and Zhu, Henghui and Xiao, Wei and Li, Shang-Wen and Wei, Xiaokai and Arnold, Andrew and Ren, Xiang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4764--4780
Pretrained language models (PTLMs) are typically learned over a large, static corpus and further fine-tuned for various downstream tasks. However, when deployed in the real world, a PTLM-based model must deal with data distributions that deviate from what the PTLM was initially trained on. In this paper, we study a lifelong language model pretraining challenge where a PTLM is continually updated so as to adapt to emerging data. Over a domain-incremental research paper stream and a chronologically-ordered tweet stream, we incrementally pretrain a PTLM with different continual learning algorithms, and keep track of the downstream task performance (after fine-tuning). We evaluate the PTLM's ability to adapt to new corpora while retaining learned knowledge in earlier corpora. Our experiments show distillation-based approaches to be most effective in retaining downstream performance in earlier domains. The algorithms also improve knowledge transfer, allowing models to achieve better downstream performance over the latest data, and improve temporal generalization when distribution gaps exist between training and evaluation because of time. We believe our problem formulation, methods, and analysis will inspire future studies towards continual pretraining of language models.
10.18653/v1/2022.naacl-main.351
24,050
inproceedings
cai-etal-2022-learning
Learning as Conversation: Dialogue Systems Reinforced for Information Acquisition
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.352/
Cai, Pengshan and Wan, Hui and Liu, Fei and Yu, Mo and Yu, Hong and Joshi, Sachindra
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4781--4796
We propose novel AI-empowered chat bots for learning as conversation where a user does not read a passage but gains information and knowledge through conversation with a teacher bot. Our information acquisition-oriented dialogue system employs a novel adaptation of reinforced self-play so that the system can be transferred to various domains without in-domain dialogue data, and can carry out conversations both informative and attentive to users.
10.18653/v1/2022.naacl-main.352
24,051
inproceedings
yang-etal-2022-dynamic
Dynamic Programming in Rank Space: Scaling Structured Inference with Low-Rank {HMM}s and {PCFG}s
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.353/
Yang, Songlin and Liu, Wei and Tu, Kewei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4797--4809
Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) are widely used structured models, both of which can be represented as factor graph grammars (FGGs), a powerful formalism capable of describing a wide range of models. Recent research found it beneficial to use large state spaces for HMMs and PCFGs. However, inference with large state spaces is computationally demanding, especially for PCFGs. To tackle this challenge, we leverage tensor rank decomposition (aka. CPD) to decrease inference computational complexities for a subset of FGGs subsuming HMMs and PCFGs. We apply CPD on the factors of an FGG and then construct a new FGG defined in the rank space. Inference with the new FGG produces the same result but has a lower time complexity when the rank size is smaller than the state size. We conduct experiments on HMM language modeling and unsupervised PCFG parsing, showing better performance than previous work. Our code is publicly available at \url{https://github.com/VPeterV/RankSpace-Models}.
10.18653/v1/2022.naacl-main.353
24,052
inproceedings
thorn-jakobsen-rogers-2022-factors
What Factors Should Paper-Reviewer Assignments Rely On? Community Perspectives on Issues and Ideals in Conference Peer-Review
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.354/
Thorn Jakobsen, Terne and Rogers, Anna
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4810--4823
Both scientific progress and individual researcher careers depend on the quality of peer review, which in turn depends on paper-reviewer matching. Surprisingly, this problem has been mostly approached as an automated recommendation problem rather than as a matter where different stakeholders (area chairs, reviewers, authors) have accumulated experience worth taking into account. We present the results of the first survey of the NLP community, identifying common issues and perspectives on what factors should be considered by paper-reviewer matching systems. This study contributes actionable recommendations for improving future NLP conferences, and desiderata for interpretable peer review assignments.
10.18653/v1/2022.naacl-main.354
24,053
inproceedings
campolungo-etal-2022-reducing
Reducing Disambiguation Biases in {NMT} by Leveraging Explicit Word Sense Information
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.355/
Campolungo, Niccol{\`o} and Pasini, Tommaso and Emelin, Denis and Navigli, Roberto
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4824--4838
Recent studies have shed some light on a common pitfall of Neural Machine Translation (NMT) models, stemming from their struggle to disambiguate polysemous words without lapsing into their most frequently occurring senses in the training corpus. In this paper, we first provide a novel approach for automatically creating high-precision sense-annotated parallel corpora, and then put forward a specifically tailored fine-tuning strategy for exploiting these sense annotations during training without introducing any additional requirement at inference time. The use of explicit senses proved to be beneficial to reduce the disambiguation bias of a baseline NMT model, while, at the same time, leading our system to attain higher BLEU scores than its vanilla counterpart in 3 language pairs.
10.18653/v1/2022.naacl-main.355
24,054
inproceedings
si-etal-2022-mining
Mining Clues from Incomplete Utterance: A Query-enhanced Network for Incomplete Utterance Rewriting
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.356/
Si, Shuzheng and Zeng, Shuang and Chang, Baobao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4839--4847
Incomplete utterance rewriting has recently attracted wide attention. However, previous works do not consider the semantic structural information between the incomplete utterance and the rewritten utterance, or model the semantic structure implicitly and insufficiently. To address this problem, we propose a QUEry-Enhanced Network (QUEEN). Firstly, our proposed query template explicitly brings guided semantic structural knowledge between the incomplete utterance and the rewritten utterance, making the model perceive where to refer back to or recover omitted tokens. Then, we adopt a fast and effective edit operation scoring network to model the relation between two tokens. Benefiting from extra information and the well-designed network, QUEEN achieves state-of-the-art performance on several public datasets.
10.18653/v1/2022.naacl-main.356
24,055
inproceedings
zhao-etal-2022-domain
Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.357/
Zhao, Lulu and Zheng, Fujia and Zeng, Weihao and He, Keqing and Xu, Weiran and Jiang, Huixing and Wu, Wei and Wu, Yanan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4848--4862
The most advanced abstractive dialogue summarizers lack generalization ability on new domains, and existing research on domain adaptation in summarization generally relies on large-scale pre-training. To explore lightweight fine-tuning methods for domain adaptation of dialogue summarization, in this paper, we propose an efficient and generalizable Domain-Oriented Prefix-tuning model, which utilizes a domain word initialized prefix module to alleviate domain entanglement and adopts discrete prompts to guide the model to focus on key contents of dialogues and enhance model generalization. We conduct zero-shot experiments and build domain adaptation benchmarks on two multi-domain dialogue summarization datasets, TODSum and QMSum. Adequate experiments and qualitative analysis prove the effectiveness of our methods.
10.18653/v1/2022.naacl-main.357
24,056
inproceedings
rubavicius-lascarides-2022-interactive
Interactive Symbol Grounding with Complex Referential Expressions
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.358/
Rubavicius, Rimvydas and Lascarides, Alex
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4863--4874
We present a procedure for learning to ground symbols from a sequence of stimuli consisting of an arbitrarily complex noun phrase (e.g. {\textquotedblleft}all but one green square above both red circles.{\textquotedblright}) and its designation in the visual scene. Our distinctive approach combines: a) lazy few-shot learning to relate open-class words like green and above to their visual percepts; and b) symbolic reasoning with closed-class word categories like quantifiers and negation. We use this combination to estimate new training examples for grounding symbols that occur \textit{within} a noun phrase but aren't designated by that noun phrase (e.g., red in the above example), thereby potentially gaining data efficiency. We evaluate the approach in a visual reference resolution task, in which the learner starts out unaware of concepts that are part of the domain model and how they relate to visual percepts.
10.18653/v1/2022.naacl-main.358
24,057
inproceedings
cui-etal-2022-generalized-quantifiers
Generalized Quantifiers as a Source of Error in Multilingual {NLU} Benchmarks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.359/
Cui, Ruixiang and Hershcovich, Daniel and S{\o}gaard, Anders
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4875--4893
Logical approaches to representing language have developed and evaluated computational models of quantifier words since the 19th century, but today's NLU models still struggle to capture their semantics. We rely on Generalized Quantifier Theory for language-independent representations of the semantics of quantifier words, to quantify their contribution to the errors of NLU models. We find that quantifiers are pervasive in NLU benchmarks, and their occurrence at test time is associated with performance drops. Multilingual models also exhibit unsatisfying quantifier reasoning abilities, but not necessarily worse for non-English languages. To facilitate directly-targeted probing, we present an adversarial generalized quantifier NLI task (GQNLI) and show that pre-trained language models have a clear lack of robustness in generalized quantifier reasoning.
10.18653/v1/2022.naacl-main.359
24,058
inproceedings
zmigrod-etal-2022-exact
Exact Paired-Permutation Testing for Structured Test Statistics
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.360/
Zmigrod, Ran and Vieira, Tim and Cotterell, Ryan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4894--4902
Significance testing{---}especially the paired-permutation test{---}has played a vital role in developing NLP systems to provide confidence that the difference in performance between two systems (i.e., the test statistic) is not due to luck. However, practitioners rely on Monte Carlo approximation to perform this test due to a lack of a suitable exact algorithm. In this paper, we provide an efficient exact algorithm for the paired-permutation test for a family of structured test statistics. Our algorithm runs in $\mathcal{O}(GN(\log GN)(\log N))$ time, where $N$ is the dataset size and $G$ is the range of the test statistic. We found that our exact algorithm was 10x faster than the Monte Carlo approximation with 20000 samples on a common dataset.
10.18653/v1/2022.naacl-main.360
24,059
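The abstract above contrasts an exact paired-permutation algorithm with the Monte Carlo approximation practitioners usually run. For orientation only, a minimal sketch of that Monte Carlo baseline for the simple mean-difference statistic is shown below; the function name, defaults, and NumPy usage are illustrative assumptions, and this is not the paper's exact algorithm.

```python
import numpy as np

def mc_paired_permutation_test(scores_a, scores_b, samples=20000, seed=0):
    """Monte Carlo paired-permutation test on per-example scores of two systems."""
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    observed = abs(diffs.mean())
    rng = np.random.default_rng(seed)
    # Randomly swap the two systems on each example (a sign flip of the paired
    # difference) and count how often the statistic is at least as extreme.
    signs = rng.choice([-1.0, 1.0], size=(samples, diffs.size))
    permuted = np.abs((signs * diffs).mean(axis=1))
    return float((permuted >= observed).mean())
```

The paper's contribution is to compute this permutation distribution exactly for structured test statistics rather than estimating it by sampling, which is where the quoted $\mathcal{O}(GN(\log GN)(\log N))$ bound comes from.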
inproceedings
malkin-etal-2022-balanced
A Balanced Data Approach for Evaluating Cross-Lingual Transfer: Mapping the Linguistic Blood Bank
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.361/
Malkin, Dan and Limisiewicz, Tomasz and Stanovsky, Gabriel
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4903--4915
We show that the choice of pretraining languages affects downstream cross-lingual transfer for BERT-based models. We inspect zero-shot performance in balanced data conditions to mitigate data size confounds, classifying pretraining languages that improve downstream performance as donors, and languages that are improved in zero-shot performance as recipients. We develop a method of quadratic time complexity in the number of languages to estimate these relations, instead of an exponential exhaustive computation of all possible combinations. We find that our method is effective on a diverse set of languages spanning different linguistic features and two downstream tasks. Our findings can inform developers of large-scale multilingual language models in choosing better pretraining configurations.
10.18653/v1/2022.naacl-main.361
24,060
inproceedings
zhang-etal-2022-ssegcn
{SSEGCN}: Syntactic and Semantic Enhanced Graph Convolutional Network for Aspect-based Sentiment Analysis
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.362/
Zhang, Zheng and Zhou, Zili and Wang, Yanna
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4916--4925
Aspect-based Sentiment Analysis (ABSA) aims to predict the sentiment polarity towards a particular aspect in a sentence. Recently, graph neural networks based on dependency trees have been shown to convey rich structural information that is useful for ABSA. However, how to effectively harness the semantic and syntactic structure information from the dependency tree remains a challenging research question. In this paper, we propose a novel Syntactic and Semantic Enhanced Graph Convolutional Network (SSEGCN) model for the ABSA task. Specifically, we propose an aspect-aware attention mechanism combined with self-attention to obtain attention score matrices of a sentence, which can not only learn the aspect-related semantic correlations, but also learn the global semantics of the sentence. In order to obtain comprehensive syntactic structure information, we construct syntactic mask matrices of the sentence according to the different syntactic distances between words. Furthermore, to combine syntactic structure and semantic information, we equip the attention score matrices with the syntactic mask matrices. Finally, we enhance the node representations with a graph convolutional network over the attention score matrices for ABSA. Experimental results on benchmark datasets illustrate that our proposed model outperforms state-of-the-art methods.
10.18653/v1/2022.naacl-main.362
24,061
inproceedings
lahnala-etal-2022-mitigating
Mitigating Toxic Degeneration with Empathetic Data: Exploring the Relationship Between Toxicity and Empathy
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.363/
Lahnala, Allison and Welch, Charles and Neuendorf, B{\'e}la and Flek, Lucie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4926--4938
Large pre-trained neural language models have supported the effectiveness of many NLP tasks, yet are still prone to generating toxic language hindering the safety of their use. Using empathetic data, we improve over recent work on controllable text generation that aims to reduce the toxicity of generated text. We find we are able to dramatically reduce the size of fine-tuning data to 7.5-30k samples while at the same time making significant improvements over state-of-the-art toxicity mitigation of up to 3.4{\%} absolute reduction (26{\%} relative) from the original work on 2.3m samples, by strategically sampling data based on empathy scores. We observe that the degree of improvements is subject to specific communication components of empathy. In particular, the more cognitive components of empathy significantly beat the original dataset in almost all experiments, while emotional empathy was tied to less improvement and even underperforming random samples of the original data. This is a particularly implicative insight for NLP work concerning empathy as until recently the research and resources built for it have exclusively considered empathy as an emotional concept.
10.18653/v1/2022.naacl-main.363
24,062
inproceedings
tian-etal-2022-duck
{DUCK}: Rumour Detection on Social Media by Modelling User and Comment Propagation Networks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.364/
Tian, Lin and Zhang, Xiuzhen and Lau, Jey Han
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4939--4949
Social media rumours, a form of misinformation, can mislead the public and cause significant economic and social disruption. Motivated by the observation that the user network {---} which captures $\textit{who}$ engage with a story {---} and the comment network {---} which captures $\textit{how}$ they react to it {---} provide complementary signals for rumour detection, in this paper, we propose DUCK (rumour $\underline{d}$etection with $\underline{u}$ser and $\underline{c}$omment networ$\underline{k}$s) for rumour detection on social media. We study how to leverage transformers and graph attention networks to jointly model the contents and structure of social media conversations, as well as the network of users who engaged in these conversations. Over four widely used benchmark rumour datasets in English and Chinese, we show that DUCK produces superior performance for detecting rumours, creating a new state-of-the-art. Source code for DUCK is available at: \url{https://github.com/ltian678/DUCK-code}.
10.18653/v1/2022.naacl-main.364
24,063
inproceedings
stahlberg-kumar-2022-jam
Jam or Cream First? Modeling Ambiguity in Neural Machine Translation with {SCONES}
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.365/
Stahlberg, Felix and Kumar, Shankar
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4950--4961
The softmax layer in neural machine translation is designed to model the distribution over mutually exclusive tokens. Machine translation, however, is intrinsically uncertain: the same source sentence can have multiple semantically equivalent translations. Therefore, we propose to replace the softmax activation with a multi-label classification layer that can model ambiguity more effectively. We call our loss function Single-label Contrastive Objective for Non-Exclusive Sequences (SCONES). We show that the multi-label output layer can still be trained on single reference training data using the SCONES loss function. SCONES yields consistent BLEU score gains across six translation directions, particularly for medium-resource language pairs and small beam sizes. By using smaller beam sizes we can speed up inference by a factor of 3.9x and still match or improve the BLEU score obtained using softmax. Furthermore, we demonstrate that SCONES can be used to train NMT models that assign the highest probability to adequate translations, thus mitigating the {\textquotedblleft}beam search curse{\textquotedblright}. Additional experiments on synthetic language pairs with varying levels of uncertainty suggest that the improvements from SCONES can be attributed to better handling of ambiguity.
10.18653/v1/2022.naacl-main.365
24,064
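The SCONES abstract above hinges on swapping the mutually exclusive softmax output for a multi-label layer. As background only, the generic contrast between those two output layers looks like the PyTorch sketch below; this is not the SCONES objective itself (the paper defines its own loss), just the standard building blocks it starts from, with toy shapes chosen for illustration.

```python
import torch
import torch.nn.functional as F

vocab_size, batch = 8, 4
logits = torch.randn(batch, vocab_size)        # one score per vocabulary token
gold = torch.randint(0, vocab_size, (batch,))  # single reference token per position

# Standard NMT output layer: softmax over mutually exclusive tokens.
softmax_loss = F.cross_entropy(logits, gold)

# Multi-label alternative: each token scored independently by a sigmoid, so
# several semantically equivalent continuations could in principle be positive.
targets = F.one_hot(gold, vocab_size).float()
multilabel_loss = F.binary_cross_entropy_with_logits(logits, targets)

print(softmax_loss.item(), multilabel_loss.item())
```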
inproceedings
zhang-etal-2022-skillspan
{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.366/
Zhang, Mike and Jensen, Kristian and Sonniks, Sif and Plank, Barbara
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4962--4984
Skill Extraction (SE) is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. We release its respective guidelines created over three different sources annotated for hard and soft skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019). To improve upon this baseline, we experiment with language models that are optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et al., 2020), and multi-task learning (Caruana, 1997). Our results show that the domain-adapted models significantly outperform their non-adapted counterparts, and single-task outperforms multi-task learning.
10.18653/v1/2022.naacl-main.366
24,065
inproceedings
liang-etal-2022-raat
{RAAT}: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.367/
Liang, Yuan and Jiang, Zhuoxuan and Yin, Di and Ren, Bo
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4985--4997
In the document-level event extraction (DEE) task, event arguments always scatter across sentences (across-sentence issue) and multiple events may lie in one document (multi-event issue). In this paper, we argue that the relation information of event arguments is of great significance for addressing the above two issues, and propose a new DEE framework which can model the relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE). More specifically, this framework features a novel and tailored transformer, named as Relation-augmented Attention Transformer (RAAT). RAAT is scalable to capture multi-scale and multi-amount argument relations. To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance. Extensive experiments demonstrate the effectiveness of the proposed method, which can achieve state-of-the-art performance on two public datasets. Our code is available at \url{https://github.com/TencentYoutuResearch/RAAT}.
null
null
10.18653/v1/2022.naacl-main.367
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,066
inproceedings
zheng-etal-2022-double
A Double-Graph Based Framework for Frame Semantic Parsing
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.368/
Zheng, Ce and Chen, Xudong and Xu, Runxin and Chang, Baobao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4998--5011
Frame semantic parsing is a fundamental NLP task, which consists of three subtasks: frame identification, argument identification and role classification. Most previous studies tend to neglect relations between different subtasks and arguments and pay little attention to ontological frame knowledge defined in FrameNet. In this paper, we propose a Knowledge-guided Incremental semantic parser with Double-graph (KID). We first introduce Frame Knowledge Graph (FKG), a heterogeneous graph containing both frames and FEs (Frame Elements) built on the frame knowledge so that we can derive knowledge-enhanced representations for frames and FEs. Besides, we propose Frame Semantic Graph (FSG) to represent frame semantic structures extracted from the text with graph structures. In this way, we can transform frame semantic parsing into an incremental graph construction problem to strengthen interactions between subtasks and relations between arguments. Our experiments show that KID outperforms the previous state-of-the-art method by up to 1.7 F1-score on two FrameNet datasets. Our code is available at \url{https://github.com/PKUnlp-icler/KID}.
null
null
10.18653/v1/2022.naacl-main.368
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,067
inproceedings
wang-etal-2022-enhanced
An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.369/
Wang, Peiyi and Xu, Runxin and Liu, Tianyu and Zhou, Qingyu and Cao, Yunbo and Chang, Baobao and Sui, Zhifang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5012--5024
Few-Shot Sequence Labeling (FSSL) is a canonical paradigm for the tagging models, e.g., named entity recognition and slot filling, to generalize on an emerging, resource-scarce domain. Recently, the metric-based meta-learning framework has been recognized as a promising approach for FSSL. However, most prior works assign a label to each token based on the token-level similarities, which ignores the integrality of named entities or slots. To this end, in this paper, we propose ESD, an Enhanced Span-based Decomposition method for FSSL. ESD formulates FSSL as a span-level matching problem between test query and supporting instances. Specifically, ESD decomposes the span matching problem into a series of span-level procedures, mainly including enhanced span representation, class prototype aggregation and span conflicts resolution. Extensive experiments show that ESD achieves the new state-of-the-art results on two popular FSSL benchmarks, FewNERD and SNIPS, and is proven to be more robust in the noisy and nested tagging scenarios.
null
null
10.18653/v1/2022.naacl-main.369
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,068
inproceedings
xu-etal-2022-two
A Two-Stream {AMR}-enhanced Model for Document-level Event Argument Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.370/
Xu, Runxin and Wang, Peiyi and Liu, Tianyu and Zeng, Shuang and Chang, Baobao and Sui, Zhifang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5025--5036
Most previous studies aim at extracting events from a single sentence, while document-level event extraction still remains under-explored. In this paper, we focus on extracting event arguments from an entire document, which mainly faces two critical problems: a) the long-distance dependency between trigger and arguments over sentences; b) the distracting context towards an event in the document. To address these issues, we propose a \textbf{T}wo-\textbf{S}tream \textbf{A}bstract meaning \textbf{R}epresentation enhanced extraction model (TSAR). TSAR encodes the document from different perspectives by a two-stream encoding module, to utilize local and global information and lower the impact of distracting context. Besides, TSAR introduces an AMR-guided interaction module to capture both intra-sentential and inter-sentential features, based on the locally and globally constructed AMR semantic graphs. An auxiliary boundary loss is introduced to enhance the boundary information for text spans explicitly. Extensive experiments illustrate that TSAR outperforms previous state-of-the-art by a large margin, with 2.54 F1 and 5.13 F1 performance gain on the public RAMS and WikiEvents datasets respectively, showing the superiority in the cross-sentence arguments extraction. We release our code in \url{https://github.com/PKUnlp-icler/TSAR}.
null
null
10.18653/v1/2022.naacl-main.370
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,069
inproceedings
wang-etal-2022-robust
Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.371/
Wang, Fei and Xu, Zhewei and Szekely, Pedro and Chen, Muhao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5037--5048
Controlled table-to-text generation seeks to generate natural language descriptions for highlighted subparts of a table. Previous SOTA systems still employ a sequence-to-sequence generation method, which merely captures the table as a linear structure and is brittle when table layouts change. We seek to go beyond this paradigm by (1) effectively expressing the relations of content pieces in the table, and (2) making our model robust to content-invariant structural transformations. Accordingly, we propose an equivariance learning framework, which encodes tables with a structure-aware self-attention mechanism. This prunes the full self-attention structure into an order-invariant graph attention that captures the connected graph structure of cells belonging to the same row or column, and it differentiates between relevant cells and irrelevant cells from the structural perspective. Our framework also modifies the positional encoding mechanism to preserve the relative position of tokens in the same cell but enforce position invariance among different cells. Our technology is free to be plugged into existing table-to-text generation models, and has improved T5-based models to offer better performance on ToTTo and HiTab. Moreover, on a harder version of ToTTo, we preserve promising performance, while previous SOTA systems, even with transformation-based data augmentation, have seen significant performance drops.
null
null
10.18653/v1/2022.naacl-main.371
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,070
inproceedings
sun-etal-2022-jointlk
{J}oint{LK}: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.372/
Sun, Yueqing and Shi, Qi and Qi, Le and Zhang, Yu
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5049--5060
Existing KG-augmented models for commonsense question answering primarily focus on designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs (KGs). However, they ignore (i) effectively fusing and reasoning over the question context representations and the KG representations, and (ii) automatically selecting relevant nodes from the noisy KGs during reasoning. In this paper, we propose a novel model, JointLK, which solves the above limitations through the joint reasoning of LM and GNN and a dynamic KG pruning mechanism. Specifically, JointLK performs joint reasoning between LM and GNN through a novel dense bidirectional attention module, in which each question token attends on KG nodes and each KG node attends on question tokens, and the two modal representations fuse and update mutually by multi-step interactions. Then, the dynamic pruning module uses the attention weights generated by joint reasoning to prune irrelevant KG nodes recursively. We evaluate JointLK on the CommonsenseQA and OpenBookQA datasets, and demonstrate its improvements to the existing LM and LM+KG models, as well as its capability to perform interpretable reasoning.
null
null
10.18653/v1/2022.naacl-main.372
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,071
inproceedings
itzhak-levy-2022-models
Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.373/
Itzhak, Itay and Levy, Omer
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5061--5068
Standard pretrained language models operate on sequences of subword tokens without direct access to the characters that compose each token's string representation. We probe the embedding layer of pretrained language models and show that models learn the internal character composition of whole word and subword tokens to a surprising extent, without ever seeing the characters coupled with the tokens. Our results show that the embedding layers of RoBERTa and GPT2 each hold enough information to accurately spell up to a third of the vocabulary and reach high character $n$-gram overlap across all token types. We further test whether enriching subword models with character information can improve language modeling, and observe that this method has a near-identical learning curve as training without spelling-based enrichment. Overall, our results suggest that language modeling objectives incentivize the model to implicitly learn some notion of spelling, and that explicitly teaching the model how to spell does not appear to enhance its performance on such tasks.
null
null
10.18653/v1/2022.naacl-main.373
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,072
inproceedings
guan-etal-2022-corpus
A Corpus for Understanding and Generating Moral Stories
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.374/
Guan, Jian and Liu, Ziqi and Huang, Minlie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5069--5087
Teaching morals is one of the most important purposes of storytelling. An essential ability for understanding and writing moral stories is bridging story plots and implied morals. Its challenges mainly lie in: (1) grasping knowledge about abstract concepts in morals, (2) capturing inter-event discourse relations in stories, and (3) aligning value preferences of stories and morals concerning good or bad behavior. In this paper, we propose two understanding tasks and two generation tasks to assess these abilities of machines. We present STORAL, a new dataset of Chinese and English human-written moral stories. We show the difficulty of the proposed tasks by testing various models with automatic and manual evaluation on STORAL. Furthermore, we present a retrieval-augmented algorithm that effectively exploits related concepts or events in training sets as additional guidance to improve performance on these tasks.
null
null
10.18653/v1/2022.naacl-main.374
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,073
inproceedings
liang-etal-2022-modeling
Modeling Multi-Granularity Hierarchical Features for Relation Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.375/
Liang, Xinnian and Wu, Shuangzhi and Li, Mu and Li, Zhoujun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5088--5098
Relation extraction is a key task in Natural Language Processing (NLP), which aims to extract relations between entity pairs from given texts. Recently, relation extraction (RE) has achieved remarkable progress with the development of deep neural networks. Most existing research focuses on constructing explicit structured features using external knowledge such as knowledge graph and dependency tree. In this paper, we propose a novel method to extract multi-granularity features based solely on the original input sentences. We show that effective structured features can be attained even without external knowledge. Three kinds of features based on the input sentences are fully exploited, which are in entity mention level, segment level, and sentence level. All the three are jointly and hierarchically modeled. We evaluate our method on three public benchmarks: SemEval 2010 Task 8, Tacred, and Tacred Revisited. To verify the effectiveness, we apply our method to different encoders such as LSTM and BERT. Experimental results show that our method significantly outperforms existing state-of-the-art models that even use external knowledge. Extensive analyses demonstrate that the performance of our model is contributed by the capture of multi-granularity features and the model of their hierarchical structure.
null
null
10.18653/v1/2022.naacl-main.375
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,074
inproceedings
ye-etal-2022-cross
Cross-modal Contrastive Learning for Speech Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.376/
Ye, Rong and Wang, Mingxuan and Li, Lei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5099--5113
How can we learn unified representations for spoken utterances and their written text? Learning similar representations for semantically similar speech and text is important for speech translation. To this end, we propose ConST, a cross-modal contrastive learning method for end-to-end speech-to-text translation. We evaluate ConST and a variety of previous baselines on a popular benchmark MuST-C. Experiments show that the proposed ConST consistently outperforms the previous methods, and achieves an average BLEU of 29.4. The analysis further verifies that ConST indeed closes the representation gap of different modalities {---} its learned representation improves the accuracy of cross-modal speech-text retrieval from 4{\%} to 88{\%}. Code and models are available at \url{https://github.com/ReneeYe/ConST}.
null
null
10.18653/v1/2022.naacl-main.376
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,075
inproceedings
han-etal-2022-meet
Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional Characters with only a Few Utterances
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.377/
Han, Seungju and Kim, Beomsu and Yoo, Jin Yong and Seo, Seokjun and Kim, Sangbum and Erdenee, Enkhbayar and Chang, Buru
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5114--5132
In this paper, we consider mimicking fictional characters as a promising direction for building engaging conversation models. To this end, we present a new practical task where only a few utterances of each fictional character are available to generate responses mimicking them. Furthermore, we propose a new method named Pseudo Dialog Prompting (PDP) that generates responses by leveraging the power of large-scale language models with prompts containing the target character's utterances. To better reflect the style of the character, PDP builds the prompts in the form of dialog that includes the character's utterances as dialog history. Since only utterances of the characters are available in the proposed task, PDP matches each utterance with an appropriate pseudo-context from a predefined set of context candidates using a retrieval model. Through human and automatic evaluation, we show that PDP generates responses that better reflect the style of fictional characters than baseline methods.
null
null
10.18653/v1/2022.naacl-main.377
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,076
inproceedings
maheshwari-etal-2022-dynamictoc
{D}ynamic{TOC}: Persona-based Table of Contents for Consumption of Long Documents
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.378/
Maheshwari, Himanshu and Sivakumar, Nethraa and Jain, Shelly and Karandikar, Tanvi and Aggarwal, Vinay and Goyal, Navita and Shekhar, Sumit
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5133--5143
Long documents like contracts, financial documents, etc., are often tedious to read through. Linearly consuming (via scrolling or navigation through default table of content) these documents is time-consuming and challenging. These documents are also authored to be consumed by varied entities (referred to as persona in the paper) interested in only certain parts of the document. In this work, we describe DynamicToC, a dynamic table of content-based navigator, to aid in the task of non-linear, persona-based document consumption. DynamicToC highlights sections of interest in the document as per the aspects relevant to different personas. DynamicToC is augmented with short questions to assist the users in understanding underlying content. This uses a novel deep-reinforcement learning technique to generate questions on these persona-clustered paragraphs. Human and automatic evaluations suggest the efficacy of both end-to-end pipeline and different components of DynamicToC.
null
null
10.18653/v1/2022.naacl-main.378
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,077
inproceedings
kang-etal-2022-kala
{KALA:} Knowledge-Augmented Language Model Adaptation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.379/
Kang, Minki and Baek, Jinheon and Hwang, Sung Ju
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5144--5167
Pre-trained language models (PLMs) have achieved remarkable success on various natural language understanding tasks. Simple fine-tuning of PLMs, on the other hand, might be suboptimal for domain-specific tasks because they cannot possibly cover knowledge from all domains. While adaptive pre-training of PLMs can help them obtain domain-specific knowledge, it requires a large training cost. Moreover, adaptive pre-training can harm the PLM's performance on the downstream task by causing catastrophic forgetting of its general knowledge. To overcome such limitations of adaptive pre-training for PLM adaptation, we propose a novel domain adaptation framework for PLMs coined as Knowledge-Augmented Language model Adaptation (KALA), which modulates the intermediate hidden representations of PLMs with domain knowledge, consisting of entities and their relational facts. We validate the performance of our KALA on question answering and named entity recognition tasks on multiple datasets across various domains. The results show that, despite being computationally efficient, our KALA largely outperforms adaptive pre-training.
null
null
10.18653/v1/2022.naacl-main.379
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,078
inproceedings
shin-etal-2022-effect
On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.380/
Shin, Seongjin and Lee, Sang-Woo and Ahn, Hwijeen and Kim, Sungdong and Kim, HyoungSeok and Kim, Boseop and Cho, Kyunghyun and Lee, Gichang and Park, Woomyoung and Ha, Jung-Woo and Sung, Nako
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5168--5186
Many recent studies on large-scale language models have reported successful in-context zero- and few-shot learning ability. However, the in-depth analysis of when in-context learning occurs is still lacking. For example, it is unknown how in-context learning performance changes as the training corpus varies. Here, we investigate the effects of the source and size of the pretraining corpus on in-context learning in HyperCLOVA, a Korean-centric GPT-3 model. From our in-depth investigation, we introduce the following observations: (1) in-context learning performance heavily depends on the corpus domain source, and the size of the pretraining corpus does not necessarily determine the emergence of in-context learning, (2) in-context learning ability can emerge when a language model is trained on a combination of multiple corpora, even when each corpus does not result in in-context learning on its own, (3) pretraining with a corpus related to a downstream task does not always guarantee the competitive in-context learning performance of the downstream task, especially in the few-shot setting, and (4) the relationship between language modeling (measured in perplexity) and in-context learning does not always correlate: e.g., low perplexity does not always imply high in-context few-shot learning performance.
null
null
10.18653/v1/2022.naacl-main.380
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,079
inproceedings
chen-etal-2022-sketching
Sketching as a Tool for Understanding and Accelerating Self-attention for Long Sequences
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.381/
Chen, Yifan and Zeng, Qi and Hakkani-Tur, Dilek and Jin, Di and Ji, Heng and Yang, Yun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5187--5199
Transformer-based models are not efficient in processing long sequences due to the quadratic space and time complexity of the self-attention modules. To address this limitation, Linformer and Informer reduce the quadratic complexity to linear (modulo logarithmic factors) via low-dimensional projection and row selection, respectively. These two models are intrinsically connected, and to understand their connection we introduce a theoretical framework of matrix sketching. Based on the theoretical analysis, we propose Skeinformer to accelerate self-attention and further improve the accuracy of matrix approximation to self-attention with column sampling, adaptive row normalization and pilot sampling reutilization. Experiments on the Long Range Arena benchmark demonstrate that our methods outperform alternatives with a consistently smaller time/space footprint.
null
null
10.18653/v1/2022.naacl-main.381
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,080
inproceedings
lu-etal-2022-partner
Partner Personas Generation for Dialogue Response Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.382/
Lu, Hongyuan and Lam, Wai and Cheng, Hong and Meng, Helen
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5200--5212
Incorporating persona information allows diverse and engaging responses in dialogue response generation. Unfortunately, prior works have primarily focused on self personas and have overlooked the value of partner personas. Moreover, in practical applications, gold partner personas are often unavailable. This paper attempts to tackle these issues by offering a novel framework that leverages automatic partner personas generation to enhance the succeeding dialogue response generation. Our framework employs reinforcement learning with a dedicatedly designed critic network for reward judgement. Experimental results from automatic and human evaluations indicate that our framework is capable of generating relevant, interesting, coherent and informative partner personas, even compared to the ground truth partner personas. This enhances the succeeding dialogue response generation, which surpasses our competitive baselines that condition on the ground truth partner personas.
null
null
10.18653/v1/2022.naacl-main.382
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,081
inproceedings
sun-etal-2022-semantically
Semantically Informed Slang Interpretation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.383/
Sun, Zhewei and Zemel, Richard and Xu, Yang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5213--5231
Slang is a predominant form of informal language making flexible and extended use of words that is notoriously hard for natural language processing systems to interpret. Existing approaches to slang interpretation tend to rely on context but ignore semantic extensions common in slang word usage. We propose a semantically informed slang interpretation (SSI) framework that considers jointly the contextual and semantic appropriateness of a candidate interpretation for a query slang. We perform rigorous evaluation on two large-scale online slang dictionaries and show that our approach not only achieves state-of-the-art accuracy for slang interpretation in English, but also does so in zero-shot and few-shot scenarios where training data is sparse. Furthermore, we show how the same framework can be applied to enhancing machine translation of slang from English to other languages. Our work creates opportunities for the automated interpretation and translation of informal language.
null
null
10.18653/v1/2022.naacl-main.383
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,082
inproceedings
hu-etal-2022-dual
Dual-Channel Evidence Fusion for Fact Verification over Texts and Tables
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.384/
Hu, Nan and Wu, Zirui and Lai, Yuxuan and Liu, Xiao and Feng, Yansong
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5232--5242
Different from previous fact extraction and verification tasks that only consider evidence of a single format, FEVEROUS brings further challenges by extending the evidence format to both plain text and tables. Existing works convert all candidate evidence into either sentences or tables, thus often failing to fully capture the rich context in their original format from the converted evidence, let alone the context information lost during conversion. In this paper, we propose a Dual Channel Unified Format fact verification model (DCUF), which unifies various evidence into parallel streams, i.e., natural language sentences and a global evidence table, simultaneously. With carefully-designed evidence conversion and organization methods, DCUF makes the most of pre-trained table/language models to encourage each evidence piece to perform early and thorough interactions with other pieces in its original format. Experiments show that our model can make better use of existing pre-trained models to absorb evidence of two formats, thus outperforming previous works by a large margin. Our code and models are publicly available.
null
null
10.18653/v1/2022.naacl-main.384
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,083
inproceedings
zhang-etal-2022-treemix
{T}ree{M}ix: Compositional Constituency-based Data Augmentation for Natural Language Understanding
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.385/
Zhang, Le and Yang, Zichao and Yang, Diyi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5243--5258
Data augmentation is an effective approach to tackle over-fitting. Many previous works have proposed different data augmentation strategies for NLP, such as noise injection, word replacement, back-translation etc. Though effective, they missed one important characteristic of language{--}compositionality: the meaning of a complex expression is built from its sub-parts. Motivated by this, we propose a compositional data augmentation approach for natural language understanding called TreeMix. Specifically, TreeMix leverages constituency parsing trees to decompose sentences into constituent sub-structures and the Mixup data augmentation technique to recombine them to generate new sentences. Compared with previous approaches, TreeMix introduces greater diversity to the samples generated and encourages models to learn the compositionality of NLP data. Extensive experiments on text classification and SCAN demonstrate that TreeMix outperforms current state-of-the-art data augmentation methods.
null
null
10.18653/v1/2022.naacl-main.385
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,084
inproceedings
harvill-etal-2022-syn2vec
{S}yn2{V}ec: Synset Colexification Graphs for Lexical Semantic Similarity
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.386/
Harvill, John and Girju, Roxana and Hasegawa-Johnson, Mark
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5259--5270
In this paper we focus on patterns of colexification (co-expressions of form-meaning mapping in the lexicon) as an aspect of lexical-semantic organization, and use them to build large scale synset graphs across BabelNet's typologically diverse set of 499 world languages. We introduce and compare several approaches: monolingual and cross-lingual colexification graphs, popular distributional models, and fusion approaches. The models are evaluated against human judgments on a semantic similarity task for nine languages. Our strong empirical findings also point to the importance of universality of our graph synset embedding representations with no need for any language-specific adaptation when evaluated on the lexical similarity task. The insights of our exploratory investigation of large-scale colexification graphs could inspire significant advances in NLP across languages, especially for tasks involving languages which lack dedicated lexical resources, and can benefit from language transfer from large shared cross-lingual semantic spaces.
null
null
10.18653/v1/2022.naacl-main.386
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,085
inproceedings
dziri-etal-2022-origin
On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.387/
Dziri, Nouha and Milton, Sivan and Yu, Mo and Zaiane, Osmar and Reddy, Siva
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5271--5285
Knowledge-grounded conversational models are known to suffer from producing factually invalid statements, a phenomenon commonly called hallucination. In this work, we investigate the underlying causes of this phenomenon: is hallucination due to the training data, or to the models? We conduct a comprehensive human study on both existing knowledge-grounded conversational benchmarks and several state-of-the-art models. Our study reveals that the standard benchmarks consist of {\ensuremath{>}} 60{\%} hallucinated responses, leading to models that not only hallucinate but even amplify hallucinations. Our findings raise important questions on the quality of existing datasets and models trained using them. We make our annotations publicly available for future research.
null
null
10.18653/v1/2022.naacl-main.387
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,086
inproceedings
lyu-etal-2022-favorite
Is {\textquotedblleft}My Favorite New Movie{\textquotedblright} My Favorite Movie? Probing the Understanding of Recursive Noun Phrases
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.388/
Lyu, Qing and Hua, Zheng and Li, Daoxin and Zhang, Li and Apidianaki, Marianna and Callison-Burch, Chris
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5286--5302
Recursive noun phrases (NPs) have interesting semantic properties. For example, {\textquotedblleft}my favorite new movie{\textquotedblright} is not necessarily my favorite movie, whereas {\textquotedblleft}my new favorite movie{\textquotedblright} is. This is common sense to humans, yet it is unknown whether language models have such knowledge. We introduce the Recursive Noun Phrase Challenge (RNPC), a dataset of three textual inference tasks involving textual entailment and event plausibility comparison, precisely targeting the understanding of recursive NPs. When evaluated on RNPC, state-of-the-art Transformer models only perform around chance. Still, we show that such knowledge is learnable with appropriate data. We further probe the models for relevant linguistic features that can be learned from our tasks, including modifier semantic category and modifier scope. Finally, models trained on RNPC achieve strong zero-shot performance on an extrinsic Harm Detection evaluation task, showing the usefulness of the understanding of recursive NPs in downstream applications.
null
null
10.18653/v1/2022.naacl-main.388
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,087
inproceedings
ni-etal-2022-original
Original or Translated? A Causal Analysis of the Impact of Translationese on Machine Translation Performance
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.389/
Ni, Jingwei and Jin, Zhijing and Freitag, Markus and Sachan, Mrinmaya and Sch{\"o}lkopf, Bernhard
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5303--5320
Human-translated text displays distinct features from naturally written text in the same language. This phenomenon, known as translationese, has been argued to confound machine translation (MT) evaluation. Yet, we find that existing work on translationese neglects some important factors and the conclusions are mostly correlational but not causal. In this work, we collect CausalMT, a dataset where the MT training data are also labeled with the human translation directions. We inspect two critical factors, the train-test direction match (whether the human translation directions in the training and test sets are aligned), and data-model direction match (whether the model learns in the same direction as the human translation direction in the dataset). We show that these two factors have a large causal effect on the MT performance, in addition to the test-model direction mismatch highlighted by existing work on the impact of translationese. In light of our findings, we provide a set of suggestions for MT training and evaluation. Our code and data are at \url{https://github.com/EdisonNi-hku/CausalMT}
null
null
10.18653/v1/2022.naacl-main.389
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,088
inproceedings
zhang-etal-2022-visual
Visual Commonsense in Pretrained Unimodal and Multimodal Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.390/
Zhang, Chenyu and Van Durme, Benjamin and Li, Zhuowan and Stengel-Eskin, Elias
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5321--5335
Our commonsense knowledge about objects includes their typical visual attributes; we know that bananas are typically yellow or green, and not purple. Text and image corpora, being subject to reporting bias, represent this world-knowledge to varying degrees of faithfulness. In this paper, we investigate to what degree unimodal (language-only) and multimodal (image and language) models capture a broad range of visually salient attributes. To that end, we create the Visual Commonsense Tests (ViComTe) dataset covering 5 property types (color, shape, material, size, and visual co-occurrence) for over 5000 subjects. We validate this dataset by showing that our grounded color data correlates much better than ungrounded text-only data with crowdsourced color judgments provided by Paik et al. (2021). We then use our dataset to evaluate pretrained unimodal models and multimodal models. Our results indicate that multimodal models better reconstruct attribute distributions, but are still subject to reporting bias. Moreover, increasing model size does not enhance performance, suggesting that the key to visual commonsense lies in the data.
null
null
10.18653/v1/2022.naacl-main.390
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,089
inproceedings
pang-etal-2022-quality
{Q}u{ALITY}: Question Answering with Long Input Texts, Yes!
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.391/
Pang, Richard Yuanzhe and Parrish, Alicia and Joshi, Nitish and Nangia, Nikita and Phang, Jason and Chen, Angelica and Padmakumar, Vishakh and Ma, Johnny and Thompson, Jana and He, He and Bowman, Samuel
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5336--5358
To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Our baseline models perform poorly on this task (55.4{\%}) and significantly lag behind human performance (93.5{\%}).
null
null
10.18653/v1/2022.naacl-main.391
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,090
inproceedings
zhou-etal-2022-exsum
{ExSum}: {F}rom Local Explanations to Model Understanding
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.392/
Zhou, Yilun and Ribeiro, Marco Tulio and Shah, Julie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5359--5378
Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual similarity and plausibility.
null
null
10.18653/v1/2022.naacl-main.392
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,091
inproceedings
lee-etal-2022-maximum
Maximum {B}ayes {S}match Ensemble Distillation for {AMR} Parsing
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.393/
Lee, Young-Suk and Astudillo, Ram{\'o}n and Thanh Lam, Hoang and Naseem, Tahira and Florian, Radu and Roukos, Salim
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5379--5392
AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning. Self-learning techniques have also played a role in pushing performance forward. However, for most recent high-performing parsers, the effect of self-learning and silver data augmentation seems to be fading. In this paper we propose to overcome these diminishing returns of silver data by combining Smatch-based ensembling techniques with ensemble distillation. In an extensive experimental setup, we push single model English parser performance to a new state-of-the-art, 85.9 (AMR2.0) and 84.3 (AMR3.0), and return to substantial gains from silver data augmentation. We also attain a new state-of-the-art for cross-lingual AMR parsing for Chinese, German, Italian and Spanish. Finally we explore the impact of the proposed technique on domain adaptation, and show that it can produce gains rivaling those of human annotated data for QALD-9 and achieve a new state-of-the-art for BioAMR.
null
null
10.18653/v1/2022.naacl-main.393
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,092
inproceedings
tucker-etal-2022-syntax
When Does Syntax Mediate Neural Language Model Performance? Evidence from Dropout Probes
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.394/
Tucker, Mycal and Eisape, Tiwalayo and Qian, Peng and Levy, Roger and Shah, Julie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5393--5408
Recent causal probing literature reveals when language models and syntactic probes use similar representations. Such techniques may yield {\textquotedblleft}false negative{\textquotedblright} causality results: models may use representations of syntax, but probes may have learned to use redundant encodings of the same syntactic information. We demonstrate that models do encode syntactic information redundantly and introduce a new probe design that guides probes to consider all syntactic information present in embeddings. Using these probes, we find evidence for the use of syntax in models where prior methods did not, allowing us to boost model performance by injecting syntactic information into representations.
null
null
10.18653/v1/2022.naacl-main.394
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,093
inproceedings
xu-choi-2022-modeling
Modeling Task Interactions in Document-Level Joint Entity and Relation Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.395/
Xu, Liyan and Choi, Jinho
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5409--5416
We target document-level relation extraction in an end-to-end setting, where the model needs to jointly perform mention extraction, coreference resolution (COREF) and relation extraction (RE) at once, and gets evaluated in an entity-centric way. Especially, we address the two-way interaction between COREF and RE that has not been the focus of previous work, and propose to introduce explicit interaction namely Graph Compatibility (GC) that is specifically designed to leverage task characteristics, bridging decisions of two tasks for direct task interference. Our experiments are conducted on DocRED and DWIE; in addition to GC, we implement and compare different multi-task settings commonly adopted in previous work, including pipeline, shared encoders, graph propagation, to examine the effectiveness of different interactions. The result shows that GC achieves the best performance by up to 2.3/5.1 F1 improvement over the baseline.
null
null
10.18653/v1/2022.naacl-main.395
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,094
inproceedings
shin-van-durme-2022-shot
Few-Shot Semantic Parsing with Language Models Trained on Code
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.396/
Shin, Richard and Van Durme, Benjamin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5417--5425
Large language models can perform semantic parsing with little training data, when prompted with in-context examples. It has been shown that this can be improved by formulating the problem as paraphrasing into canonical utterances, which casts the underlying meaning representation into a controlled natural language-like representation. Intuitively, such models can more easily output canonical utterances as they are closer to the natural language used for pre-training. Recently, models also pre-trained on code, like OpenAI Codex, have risen in prominence. For semantic parsing tasks where we map natural language into code, such models may prove more adept at it. In this paper, we test this hypothesis and find that Codex performs better on such tasks than equivalent GPT-3 models. We evaluate on Overnight and SMCalFlow and find that unlike GPT-3, Codex performs similarly when targeting meaning representations directly, perhaps because meaning representations are structured similar to code in these datasets.
null
null
10.18653/v1/2022.naacl-main.396
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,095
inproceedings
li-etal-2022-corwa
{CORWA}: A Citation-Oriented Related Work Annotation Dataset
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.397/
Li, Xiangci and Mandal, Biswadip and Ouyang, Jessica
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5426--5440
Academic research is an exploratory activity to discover new solutions to problems. By this nature, academic research works perform literature reviews to distinguish their novelties from prior work. In natural language processing, this literature review is usually conducted under the {\textquotedblleft}Related Work{\textquotedblright} section. The task of related work generation aims to automatically generate the related work section given the rest of the research paper and a list of papers to cite. Prior work on this task has focused on the sentence as the basic unit of generation, neglecting the fact that related work sections consist of variable length text fragments derived from different information sources. As a first step toward a linguistically-motivated related work generation framework, we present a Citation Oriented Related Work Annotation (CORWA) dataset that labels different types of citation text fragments from different information sources. We train a strong baseline model that automatically tags the CORWA labels on massive unlabeled related work section texts. We further suggest a novel framework for human-in-the-loop, iterative, abstractive related work generation.
null
null
10.18653/v1/2022.naacl-main.397
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,096
inproceedings
li-etal-2022-overcoming
Overcoming Catastrophic Forgetting During Domain Adaptation of Seq2seq Language Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.398/
Li, Dingcheng and Chen, Zheng and Cho, Eunah and Hao, Jie and Liu, Xiaohu and Xing, Fan and Guo, Chenlei and Liu, Yang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5441--5454
Seq2seq language generation models that are trained offline with multiple domains in a sequential fashion often suffer from catastrophic forgetting. Lifelong learning has been proposed to handle this problem. However, existing work such as experience replay or elastic weighted consolidation requires incremental memory space. In this work, we propose an innovative framework, RMR{\_}DSE, that leverages a recall optimization mechanism to selectively memorize important parameters of previous tasks via regularization, and uses a domain drift estimation algorithm to compensate the drift between different domains in the embedding space. These designs enable the model to be trained on the current task while keeping the memory of previous tasks, and avoid much additional data storage. Furthermore, RMR{\_}DSE can be combined with existing lifelong learning approaches. Our experiments on two seq2seq language generation tasks, paraphrase and dialog response generation, show that RMR{\_}DSE outperforms SOTA models by a considerable margin and reduces forgetting greatly.
null
null
10.18653/v1/2022.naacl-main.398
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,097
inproceedings
xiong-etal-2022-extreme
Extreme {Z}ero-{S}hot Learning for Extreme Text Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.399/
Xiong, Yuanhao and Chang, Wei-Cheng and Hsieh, Cho-Jui and Yu, Hsiang-Fu and Dhillon, Inderjit
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5455--5468
The eXtreme Multi-label text Classification (XMC) problem concerns finding most relevant labels for an input text instance from a large label set. However, the XMC setup faces two challenges: (1) it is not generalizable to predict unseen labels in dynamic environments, and (2) it requires a large amount of supervised (instance, label) pairs, which can be difficult to obtain for emerging domains. In this paper, we consider a more practical scenario called Extreme Zero-Shot XMC (EZ-XMC), in which no supervision is needed and merely raw text of instances and labels are accessible. Few-Shot XMC (FS-XMC), an extension to EZ-XMC with limited supervision is also investigated. To learn the semantic embeddings of instances and labels with raw text, we propose to pre-train Transformer-based encoders with self-supervised contrastive losses. Specifically, we develop a pre-training method $\textbf{MACLR}$, which thoroughly leverages the raw text with techniques including $\textbf{M}$ulti-scale $\textbf{A}$daptive $\textbf{C}$lustering, $\textbf{L}$abel $\textbf{R}$egularization, and self-training with pseudo positive pairs. Experimental results on four public EZ-XMC datasets demonstrate that MACLR achieves superior performance compared to all other leading baseline methods, in particular with approximately 5-10{\%} improvement in precision and recall on average. Moreover, we show that our pre-trained encoder can be further improved on FS-XMC when there are a limited number of ground-truth positive pairs in training. Our code is available at \url{https://github.com/amzn/pecos/tree/mainline/examples/MACLR}.
null
null
10.18653/v1/2022.naacl-main.399
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,098
inproceedings
hu-etal-2022-conflibert
{C}onfli{BERT}: A Pre-trained Language Model for Political Conflict and Violence
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.400/
Hu, Yibo and Hosseini, MohammadSaleh and Skorupa Parolin, Erick and Osorio, Javier and Khan, Latifur and Brandt, Patrick and D{'}Orazio, Vito
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5469--5482
Analyzing conflicts and political violence around the world is a persistent challenge in the political science and policy communities due in large part to the vast volumes of specialized text needed to monitor conflict and violence on a global scale. To help advance research in political science, we introduce ConfliBERT, a domain-specific pre-trained language model for conflict and political violence. We first gather a large domain-specific text corpus for language modeling from various sources. We then build ConfliBERT using two approaches: pre-training from scratch and continual pre-training. To evaluate ConfliBERT, we collect 12 datasets and implement 18 tasks to assess the models' practical application in conflict research. Finally, we evaluate several versions of ConfliBERT in multiple experiments. Results consistently show that ConfliBERT outperforms BERT when analyzing political violence and conflict.
null
null
10.18653/v1/2022.naacl-main.400
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,099
inproceedings
wang-etal-2022-automatic
Automatic Multi-Label Prompting: Simple and Interpretable Few-Shot Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.401/
Wang, Han and Xu, Canwen and McAuley, Julian
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5483--5492
Prompt-based learning (i.e., prompting) is an emerging paradigm for exploiting knowledge learned by a pretrained language model. In this paper, we propose Automatic Multi-Label Prompting (AMuLaP), a simple yet effective method to automatically select label mappings for few-shot text classification with prompting. Our method exploits one-to-many label mappings and a statistics-based algorithm to select label mappings given a prompt template. Our experiments demonstrate that AMuLaP achieves competitive performance on the GLUE benchmark without human effort or external resources.
null
null
10.18653/v1/2022.naacl-main.401
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,100
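The AMuLaP entry above describes selecting one-to-many label mappings for prompting with a statistics-based algorithm. Purely as an illustration (the particular statistic, the toy vocabulary, and the synthetic probability table are assumptions of this sketch, not the paper's exact algorithm), one simple variant keeps, for each class, the tokens the masked LM rates most probable on that class's few-shot examples:

```python
# Hypothetical sketch of statistics-based one-to-many label-word selection: for each
# class, keep the k tokens the masked LM assigns the highest average probability to
# on that class's few-shot examples. The probability table below is synthetic.
import numpy as np

rng = np.random.default_rng(3)
vocab = ["great", "terrible", "good", "bad", "fine", "awful"]
# mlm_probs[i, j]: probability the LM assigns token j at the [MASK] position of example i
mlm_probs = rng.dirichlet(np.ones(len(vocab)), size=8)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])            # 0 = positive, 1 = negative

def select_label_words(mlm_probs, labels, k=2):
    mapping = {}
    for c in np.unique(labels):
        avg = mlm_probs[labels == c].mean(axis=0)       # average prob per token for class c
        top = np.argsort(avg)[::-1][:k]                 # keep the k most likely tokens
        mapping[int(c)] = [vocab[t] for t in top]
    return mapping

print(select_label_words(mlm_probs, labels))
```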
inproceedings
logeswaran-etal-2022-shot
Few-shot Subgoal Planning with Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.402/
Logeswaran, Lajanugen and Fu, Yao and Lee, Moontae and Lee, Honglak
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5493--5506
Pre-trained language models have shown successful progress in many text understanding benchmarks. This work explores the capability of these models to predict actionable plans in real-world environments. Given a text instruction, we show that language priors encoded in pre-trained models allow us to infer fine-grained subgoal sequences. In contrast to recent methods which make strong assumptions about subgoal supervision, our experiments show that language models can infer detailed subgoal sequences from few training sequences without any fine-tuning. We further propose a simple strategy to re-rank language model predictions based on interaction and feedback from the environment. Combined with pre-trained navigation and visual reasoning components, our approach demonstrates competitive performance on subgoal prediction and task completion in the ALFRED benchmark compared to prior methods that assume more subgoal supervision.
null
null
10.18653/v1/2022.naacl-main.402
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,101
inproceedings
wu-etal-2022-idpg
{IDPG}: An Instance-Dependent Prompt Generation Method
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.403/
Wu, Zhuofeng and Wang, Sinong and Gu, Jiatao and Hou, Rui and Dong, Yuxiao and Vydiswaran, V.G.Vinod and Ma, Hao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5507--5521
Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt in each input instance during the model training stage. It freezes the pre-trained language model and only optimizes a few task-specific prompts. In this paper, we propose a conditional prompt generation method to generate prompts for each input instance, referred to as the Instance-Dependent Prompt Generation (IDPG). Unlike traditional prompt tuning methods that use a fixed prompt, IDPG introduces a lightweight and trainable component to generate prompts based on each input sentence. Extensive experiments on ten natural language understanding (NLU) tasks show that the proposed strategy consistently outperforms various prompt tuning baselines and is on par with other efficient transfer learning methods such as Compacter while tuning far fewer model parameters.
null
null
10.18653/v1/2022.naacl-main.403
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,102
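To make the idea in the IDPG entry above concrete, here is a hedged sketch of instance-dependent prompt generation: a small trainable bottleneck maps a pooled sentence representation to a few prompt vectors that are prepended to the frozen model's input embeddings. All dimensions, the random stand-in encodings, and the generator shape are illustrative assumptions rather than the paper's architecture.

```python
# Hedged sketch of instance-dependent prompt generation: instead of a single fixed
# prompt, a small trainable bottleneck maps each input sentence representation to a
# few prompt vectors that are prepended to the frozen model's input.
import numpy as np

rng = np.random.default_rng(4)
d_model, d_bottleneck, prompt_len = 16, 4, 3
W_down = rng.normal(scale=0.1, size=(d_model, d_bottleneck))
W_up = rng.normal(scale=0.1, size=(d_bottleneck, prompt_len * d_model))

def generate_prompt(sentence_repr):
    h = np.tanh(sentence_repr @ W_down)                 # lightweight trainable bottleneck
    return (h @ W_up).reshape(prompt_len, d_model)      # instance-specific prompt tokens

sentence_repr = rng.normal(size=d_model)                # stands in for a pooled encoding
token_embeddings = rng.normal(size=(5, d_model))        # stands in for the input sequence
augmented = np.concatenate([generate_prompt(sentence_repr), token_embeddings], axis=0)
print(augmented.shape)                                  # (prompt_len + seq_len, d_model)
```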
inproceedings
jian-etal-2022-embedding
Embedding Hallucination for Few-shot Language Fine-tuning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.404/
Jian, Yiren and Gao, Chongyang and Vosoughi, Soroush
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5522--5530
Few-shot language learners adapt knowledge from a pre-trained model to recognize novel classes from a few labeled sentences. In such settings, fine-tuning a pre-trained language model can cause severe over-fitting. In this paper, we propose an Embedding Hallucination (EmbedHalluc) method, which generates auxiliary embedding-label pairs to expand the fine-tuning dataset. The hallucinator is trained by playing an adversarial game with the discriminator, such that the hallucinated embeddings are indistinguishable from the real ones in the fine-tuning dataset. By training with the extended dataset, the language learner effectively learns from the diverse hallucinated embeddings to overcome the over-fitting issue. Experiments demonstrate that our proposed method is effective in a wide range of language tasks, outperforming current fine-tuning methods. Further, we show that EmbedHalluc outperforms other methods that address this over-fitting problem, such as common data augmentation, semi-supervised pseudo-labeling, and regularization.
null
null
10.18653/v1/2022.naacl-main.404
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,103
inproceedings
sawhney-etal-2022-cryptocurrency
Cryptocurrency Bubble Detection: A New Stock Market Dataset, Financial Task {\&} Hyperbolic Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.405/
Sawhney, Ramit and Agarwal, Shivam and Mittal, Vivek and Rosso, Paolo and Nanda, Vikram and Chava, Sudheer
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5531--5545
The rapid spread of information over social media influences quantitative trading and investments. The growing popularity of speculative trading of highly volatile assets such as cryptocurrencies and meme stocks presents a fresh challenge in the financial realm. Investigating such {\textquotedblleft}bubbles{\textquotedblright} - periods of sudden anomalous behavior of markets - is critical in better understanding investor behavior and market dynamics. However, high volatility coupled with massive volumes of chaotic social media texts, especially for underexplored assets like cryptocoins, poses a challenge to existing methods. Taking the first step towards NLP for cryptocoins, we present and publicly release CryptoBubbles, a novel multi-span identification task for bubble detection, and a dataset of more than 400 cryptocoins from 9 exchanges over five years spanning over two million tweets. Further, we develop a set of sequence-to-sequence hyperbolic models suited to this multi-span identification task based on the power-law dynamics of cryptocurrencies and user behavior on social media. We further test the effectiveness of our models under zero-shot settings on a test set of Reddit posts pertaining to 29 {\textquotedblleft}meme stocks{\textquotedblright}, which see an increase in trade volume due to social media hype. Through quantitative, qualitative, and zero-shot analyses on Reddit and Twitter spanning cryptocoins and meme-stocks, we show the practical applicability of CryptoBubbles and hyperbolic models.
null
null
10.18653/v1/2022.naacl-main.405
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,104
inproceedings
yang-etal-2022-nearest
Nearest Neighbor Knowledge Distillation for Neural Machine Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.406/
Yang, Zhixian and Sun, Renliang and Wan, Xiaojun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5546--5556
k-nearest-neighbor machine translation ($k$NN-MT), proposed by Khandelwal et al. (2021), has achieved many state-of-the-art results in machine translation tasks. Although effective, $k$NN-MT requires conducting $k$NN searches through the large datastore for each decoding step during inference, prohibitively increasing the decoding cost and thus making it difficult to deploy in real-world applications. In this paper, we propose to move the time-consuming $k$NN search forward to the preprocessing phase, and then introduce $k$ Nearest Neighbor Knowledge Distillation ($k$NN-KD) that trains the base NMT model to directly learn the knowledge of $k$NN. Distilling knowledge retrieved by $k$NN can encourage the NMT model to take more reasonable target tokens into consideration, thus addressing the overcorrection problem. Extensive experimental results show that the proposed method achieves consistent improvement over the state-of-the-art baselines including $k$NN-MT, while maintaining the same training and decoding speed as the standard NMT model.
null
null
10.18653/v1/2022.naacl-main.406
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,105
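The kNN-KD entry above hinges on converting offline kNN retrieval results into soft targets that the base NMT model can be distilled towards. The toy sketch below (synthetic datastore, arbitrary temperature, plain L2 distance) shows one way such a soft distribution could be built; it is not the authors' code.

```python
# Illustrative sketch: turning kNN retrieval results over a (hidden state, target token)
# datastore into a soft target distribution usable as a distillation target.
# Datastore contents, temperature, and vocabulary size are hypothetical toy values.
import numpy as np

def knn_soft_targets(query, keys, values, vocab_size, k=4, temperature=10.0):
    """Return a vocabulary-sized distribution built from the k nearest datastore entries.

    query:  (d,) decoder hidden state for the current step
    keys:   (n, d) stored hidden states
    values: (n,) stored target-token ids
    """
    dists = np.linalg.norm(keys - query, axis=1)        # L2 distance to every key
    nn = np.argsort(dists)[:k]                          # indices of the k nearest keys
    logits = -dists[nn] / temperature                   # closer neighbours get higher weight
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    p = np.zeros(vocab_size)
    np.add.at(p, values[nn], weights)                   # accumulate weight per target token
    return p                                            # soft targets for a KL / distillation loss

rng = np.random.default_rng(0)
keys = rng.normal(size=(1000, 16))
values = rng.integers(0, 32, size=1000)
print(knn_soft_targets(rng.normal(size=16), keys, values, vocab_size=32))
```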
inproceedings
gururangan-etal-2022-demix
{DEM}ix Layers: Disentangling Domains for Modular Language Modeling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.407/
Gururangan, Suchin and Lewis, Mike and Holtzman, Ari and Smith, Noah A. and Zettlemoyer, Luke
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5557--5576
We introduce a new domain expert mixture (DEMix) layer that enables conditioning a language model (LM) on the domain of the input text. A DEMix layer includes a collection of expert feedforward networks, each specialized to a domain, that makes the LM modular: experts can be mixed, added, or removed after initial training. Extensive experiments with autoregressive transformer LMs (up to 1.3B parameters) show that DEMix layers reduce test-time perplexity (especially for out-of-domain data), increase training efficiency, and enable rapid adaptation. Mixing experts during inference, using a parameter-free weighted ensemble, enables better generalization to heterogeneous or unseen domains. We also show it is possible to add experts to adapt to new domains without forgetting older ones, and remove experts to restrict access to unwanted domains. Overall, these results demonstrate benefits of domain modularity in language models.
null
null
10.18653/v1/2022.naacl-main.407
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,106
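As a rough illustration of the modular design described in the DEMix entry above, the sketch below keeps one small feedforward "expert" per domain and combines them with a weight vector: a one-hot vector routes to the single expert for a known domain, while a mixed vector approximates a weighted ensemble for unseen domains. Sizes and the two-domain setup are assumptions for the example only.

```python
# Minimal sketch of a domain-expert feedforward mixture: one small feedforward block
# per domain, combined with a weight vector over experts at inference time.
import numpy as np

rng = np.random.default_rng(1)
d_model, d_ff, n_experts = 8, 16, 2
experts = [
    {"W1": rng.normal(scale=0.1, size=(d_model, d_ff)),
     "W2": rng.normal(scale=0.1, size=(d_ff, d_model))}
    for _ in range(n_experts)
]

def expert_ffn(x, p):
    return np.maximum(x @ p["W1"], 0.0) @ p["W2"]         # ReLU feedforward block

def demix_layer(x, domain_weights):
    # Weighted combination over experts; a one-hot weight vector recovers
    # "route to the single expert for the known domain".
    outputs = np.stack([expert_ffn(x, p) for p in experts])  # (n_experts, d_model)
    return np.tensordot(domain_weights, outputs, axes=1)

x = rng.normal(size=d_model)
print(demix_layer(x, np.array([1.0, 0.0])))   # known domain: expert 0 only
print(demix_layer(x, np.array([0.7, 0.3])))   # unseen domain: mix of experts
```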
inproceedings
jian-etal-2022-contrastive
Contrastive Learning for Prompt-based Few-shot Language Learners
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.408/
Jian, Yiren and Gao, Chongyang and Vosoughi, Soroush
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5577--5587
The impressive performance of GPT-3 using natural language prompts and in-context learning has inspired work on better fine-tuning of moderately-sized models under this paradigm. Following this line of work, we present a contrastive learning framework that clusters inputs from the same class for better generality of models trained with only limited examples. Specifically, we propose a supervised contrastive framework that clusters inputs from the same class under different augmented {\textquotedblleft}views{\textquotedblright} and repels the ones from different classes. We create different {\textquotedblleft}views{\textquotedblright} of an example by appending it with different language prompts and contextual demonstrations. Combining a contrastive loss with the standard masked language modeling (MLM) loss in prompt-based few-shot learners, the experimental results show that our method can improve over the state-of-the-art methods in a diverse set of 15 language tasks. Our framework makes minimal assumptions on the task or the base model, and can be applied to many recent methods with little modification.
null
null
10.18653/v1/2022.naacl-main.408
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,107
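The entry above combines an MLM loss with a supervised contrastive loss over prompted "views" of the same example. A minimal sketch of such a supervised contrastive term is shown below; the random embeddings stand in for encoder outputs and the temperature is an assumption of this sketch, not the paper's configuration.

```python
# Toy sketch of a supervised contrastive loss over embeddings of prompted "views":
# inputs from the same class attract, inputs from different classes repel.
import numpy as np

def supervised_contrastive_loss(z, labels, temperature=0.1):
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # unit-normalise embeddings
    sim = z @ z.T / temperature                            # pairwise scaled similarities
    n = len(labels)
    loss = 0.0
    for i in range(n):
        mask = (labels == labels[i]) & (np.arange(n) != i) # positives: same class, not self
        if not mask.any():
            continue
        logits = np.delete(sim[i], i)                      # denominator excludes self-similarity
        log_denom = np.log(np.exp(logits).sum())
        pos = sim[i][mask]
        loss += -(pos - log_denom).mean()
    return loss / n

z = np.random.default_rng(2).normal(size=(6, 4))
labels = np.array([0, 0, 1, 1, 2, 2])
print(supervised_contrastive_loss(z, labels))
```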
inproceedings
guzman-nateras-etal-2022-cross
Cross-Lingual Event Detection via Optimized Adversarial Training
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.409/
Guzman-Nateras, Luis and Nguyen, Minh Van and Nguyen, Thien
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5588--5599
In this work, we focus on Cross-Lingual Event Detection where a model is trained on data from a $\textit{source}$ language but its performance is evaluated on data from a second, $\textit{target}$, language. Most recent works in this area have harnessed the language-invariant qualities displayed by pre-trained Multi-lingual Language Models. Their performance, however, reveals there is room for improvement as the cross-lingual setting entails particular challenges. We employ Adversarial Language Adaptation to train a Language Discriminator to discern between the source and target languages using unlabeled data. The discriminator is trained in an adversarial manner so that the encoder learns to produce refined, language-invariant representations that lead to improved performance. More importantly, we optimize the adversarial training process by only presenting the discriminator with the most informative samples. We base our intuition about what makes a sample informative on two disparate metrics: sample similarity and event presence. Thus, we propose leveraging Optimal Transport as a solution to naturally combine these two distinct information sources into the selection process. Extensive experiments on 8 different language pairs, using 4 languages from unrelated families, show the flexibility and effectiveness of our model that achieves state-of-the-art results.
null
null
10.18653/v1/2022.naacl-main.409
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,108
inproceedings
wiegand-etal-2022-identifying
Identifying Implicitly Abusive Remarks about Identity Groups using a Linguistically Informed Approach
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.410/
Wiegand, Michael and Eder, Elisabeth and Ruppenhofer, Josef
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5600--5612
We address the task of distinguishing implicitly abusive sentences on identity groups ({\textquotedblleft}Muslims contaminate our planet{\textquotedblright}) from other group-related negative polar sentences ({\textquotedblleft}Muslims despise terrorism{\textquotedblright}). Implicitly abusive language consists of utterances whose abuse is not conveyed by abusive words (e.g. {\textquotedblleft}bimbo{\textquotedblright} or {\textquotedblleft}scum{\textquotedblright}). So far, the detection of such utterances could not be properly addressed since existing datasets displaying a high degree of implicit abuse are fairly biased. Following the recently-proposed strategy to solve implicit abuse by separately addressing its different subtypes, we present a new focused and less biased dataset that consists of the subtype of atomic negative sentences about identity groups. For that task, we model components that each address one facet of such implicit abuse, i.e. depiction as perpetrators, aspectual classification and non-conformist views. The approach generalizes across different identity groups and languages.
null
null
10.18653/v1/2022.naacl-main.410
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,109
inproceedings
zhang-etal-2022-label-definitions
Label Definitions Improve Semantic Role Labeling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.411/
Zhang, Li and Jindal, Ishan and Li, Yunyao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5613--5620
Argument classification is at the core of Semantic Role Labeling. Given a sentence and the predicate, a semantic role label is assigned to each argument of the predicate. While semantic roles come with meaningful definitions, existing work has treated them as symbolic. Learning symbolic labels usually requires ample training data, which is frequently unavailable due to the cost of annotation. We instead propose to retrieve and leverage the definitions of these labels from the annotation guidelines. For example, the verb predicate {\textquotedblleft}work{\textquotedblright} has arguments defined as {\textquotedblleft}worker{\textquotedblright}, {\textquotedblleft}job{\textquotedblright}, {\textquotedblleft}employer{\textquotedblright}, etc. Our model achieves state-of-the-art performance on the CoNLL09 dataset injected with label definitions given the predicate senses. The performance improvement is even more pronounced in low-resource settings when training data is scarce.
null
null
10.18653/v1/2022.naacl-main.411
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,110
inproceedings
jin-etal-2022-shedding
Shedding New Light on the Language of the Dark Web
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.412/
Jin, Youngjin and Jang, Eugene and Lee, Yongjae and Shin, Seungwon and Chung, Jin-Woo
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5621--5637
The hidden nature and the limited accessibility of the Dark Web, combined with the lack of public datasets in this domain, make it difficult to study its inherent characteristics such as linguistic properties. Previous works on text classification in the Dark Web domain have suggested that the use of deep neural models may be ineffective, potentially due to the linguistic differences between the Dark and Surface Webs. However, not much work has been done to uncover the linguistic characteristics of the Dark Web. This paper introduces CoDA, a publicly available Dark Web dataset consisting of 10000 web documents tailored towards text-based Dark Web analysis. By leveraging CoDA, we conduct a thorough linguistic analysis of the Dark Web and examine the textual differences between the Dark Web and the Surface Web. We also assess the performance of various methods of Dark Web page classification. Finally, we compare CoDA with an existing public Dark Web dataset and evaluate their suitability for various use cases.
null
null
10.18653/v1/2022.naacl-main.412
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,111
inproceedings
daoud-etal-2022-conceptualizing
Conceptualizing Treatment Leakage in Text-based Causal Inference
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.413/
Daoud, Adel and Jerzak, Connor and Johansson, Richard
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5638--5645
Causal inference methods that control for text-based confounders are becoming increasingly important in the social sciences and other disciplines where text is readily available. However, these methods rely on a critical assumption that there is no treatment leakage: that is, the text only contains information about the confounder and no information about treatment assignment. When this assumption does not hold, methods that control for text to adjust for confounders face the problem of post-treatment (collider) bias. However, the assumption that there is no treatment leakage may be unrealistic in real-world situations involving text, as human language is rich and flexible. Language appearing in a public policy document or health records may refer to the future and the past simultaneously, and thereby reveal information about the treatment assignment. In this article, we define the treatment-leakage problem, and discuss the identification as well as the estimation challenges it raises. Second, we delineate the conditions under which leakage can be addressed by removing the treatment-related signal from the text in a pre-processing step we define as text distillation. Lastly, using simulation, we show how treatment leakage introduces a bias in estimates of the average treatment effect (ATE) and how text distillation can mitigate this bias.
null
null
10.18653/v1/2022.naacl-main.413
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,112
inproceedings
park-etal-2022-consistency
Consistency Training with Virtual Adversarial Discrete Perturbation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.414/
Park, Jungsoo and Kim, Gyuwan and Kang, Jaewoo
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5646--5656
Consistency training regularizes a model by enforcing predictions of original and perturbed inputs to be similar. Previous studies have proposed various augmentation methods for the perturbation but are limited in that they are agnostic to the training model. Thus, the perturbed samples may not aid in regularization due to their ease of classification from the model. In this context, we propose an augmentation method of adding a discrete noise that would incur the highest divergence between predictions. This virtual adversarial discrete noise obtained by replacing a small portion of tokens while keeping original semantics as much as possible efficiently pushes a training model`s decision boundary. Experimental results show that our proposed method outperforms other consistency training baselines with text editing, paraphrasing, or a continuous noise on semi-supervised text classification tasks and a robustness benchmark.
null
null
10.18653/v1/2022.naacl-main.414
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,113
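The consistency-training entry above rests on two ingredients: a divergence between the model's predictions on the original and perturbed inputs, and a discrete perturbation chosen to maximize that divergence. The toy numbers below are synthetic stand-ins for model outputs; the sketch only illustrates the objective, not the paper's search procedure.

```python
# Toy sketch of the consistency-training objective: the prediction on a perturbed input
# is pulled towards the prediction on the original input via a KL term, and the
# "virtual adversarial" discrete noise keeps the candidate with the largest divergence.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

p_original = np.array([0.7, 0.2, 0.1])      # prediction on the clean input (treated as fixed)
candidate_predictions = {
    "swap 'good'->'nice'": np.array([0.66, 0.22, 0.12]),   # mild perturbation
    "swap 'good'->'item'": np.array([0.40, 0.35, 0.25]),   # stronger perturbation
}
# Pick the token replacement that most changes the model's prediction.
worst = max(candidate_predictions, key=lambda k: kl_divergence(p_original, candidate_predictions[k]))
consistency_loss = kl_divergence(p_original, candidate_predictions[worst])
print(worst, consistency_loss)
```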
inproceedings
tang-etal-2022-confit
{CONFIT}: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.415/
Tang, Xiangru and Nair, Arjun and Wang, Borui and Wang, Bingyao and Desai, Jai and Wade, Aaron and Li, Haoran and Celikyilmaz, Asli and Mehdad, Yashar and Radev, Dragomir
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5657--5668
Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although significant progress has been achieved by using pre-trained neural language models, substantial amounts of hallucinated content are found during the human evaluation. In this work, we first devised a typology of factual errors to better understand the types of hallucinations generated by current models and conducted a human evaluation on a popular dialog summarization dataset. We further propose a training strategy that improves the factual consistency and overall quality of summaries via a novel contrastive fine-tuning, called CONFIT. To tackle top factual errors from our annotation, we introduce additional contrastive loss with carefully designed hard negative samples and self-supervised dialogue-specific loss to capture the key information between speakers. We show that our model significantly reduces all kinds of factual errors on both SAMSum dialogue summarization and AMI meeting summarization. On both datasets, we achieve significant improvements over state-of-the-art baselines using both automatic metrics, ROUGE and BARTScore, and human evaluation.
null
null
10.18653/v1/2022.naacl-main.415
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,114
inproceedings
lee-lee-2022-compm
{C}o{MPM}: Context Modeling with Speaker`s Pre-trained Memory Tracking for Emotion Recognition in Conversation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.416/
Lee, Joosung and Lee, Wooin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5669--5679
As the use of interactive machines grows, the task of Emotion Recognition in Conversation (ERC) has become more important. If the machine-generated sentences reflect emotion, more human-like sympathetic conversations are possible. Since emotion recognition in conversation is inaccurate if the previous utterances are not taken into account, many studies reflect the dialogue context to improve the performances. Many recent approaches show performance improvement by combining knowledge into modules learned from external structured data. However, structured data is difficult to access in non-English languages, making it difficult to extend to other languages. Therefore, we extract the pre-trained memory using the pre-trained language model as an extractor of external knowledge. We introduce CoMPM, which combines the speaker`s pre-trained memory with the context model, and find that the pre-trained memory significantly improves the performance of the context model. CoMPM achieves the first or second performance on all data and is state-of-the-art among systems that do not leverage structured data. In addition, our method shows that it can be extended to other languages because structured knowledge is not required, unlike previous methods. Our code is available on github.
null
null
10.18653/v1/2022.naacl-main.416
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,115
inproceedings
tang-etal-2022-investigating
Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.417/
Tang, Xiangru and Fabbri, Alexander and Li, Haoran and Mao, Ziming and Adams, Griffin and Wang, Borui and Celikyilmaz, Asli and Mehdad, Yashar and Radev, Dragomir
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5680--5692
Current pre-trained models applied for summarization are prone to factual inconsistencies that misrepresent the source text. Evaluating the factual consistency of summaries is thus necessary to develop better models. However, the human evaluation setup for evaluating factual consistency has not been standardized. To determine the factors that affect the reliability of the human evaluation, we crowdsource evaluations for factual consistency across state-of-the-art models on two news summarization datasets using the rating-based Likert Scale and ranking-based Best-Worst Scaling. Our analysis reveals that the ranking-based Best-Worst Scaling offers a more reliable measure of summary quality across datasets and that the reliability of Likert ratings highly depends on the target dataset and the evaluation design. To improve crowdsourcing reliability, we extend the scale of the Likert rating and present a scoring algorithm for Best-Worst Scaling that we call value learning. Our crowdsourcing guidelines will be publicly available to facilitate future work on factual consistency in summarization.
null
null
10.18653/v1/2022.naacl-main.417
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,116
inproceedings
gao-wan-2022-dialsummeval
{D}ial{S}umm{E}val: Revisiting Summarization Evaluation for Dialogues
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.418/
Gao, Mingqi and Wan, Xiaojun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5693--5709
Dialogue summarization is receiving increasing attention from researchers due to its extraordinary difficulty and unique application value. We observe that current dialogue summarization models have flaws that may not be well exposed by frequently used metrics such as ROUGE. In our paper, we re-evaluate 18 categories of metrics in terms of four dimensions: coherence, consistency, fluency and relevance, as well as a unified human evaluation of various models for the first time. Some noteworthy trends which are different from the conventional summarization tasks are identified. We will release DialSummEval, a multi-faceted dataset of human judgments containing the outputs of 14 models on SAMSum.
null
null
10.18653/v1/2022.naacl-main.418
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,117
inproceedings
song-etal-2022-hyperbolic
Hyperbolic Relevance Matching for Neural Keyphrase Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.419/
Song, Mingyang and Feng, Yi and Jing, Liping
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5710--5720
Keyphrase extraction is a fundamental task in natural language processing that aims to extract a set of phrases with important information from a source document. Identifying important keyphrases is the central component of keyphrase extraction, and its main challenge is learning to represent information comprehensively and discriminate importance accurately. In this paper, to address the above issues, we design a new hyperbolic matching model (HyperMatch) to explore keyphrase extraction in hyperbolic space. Concretely, to represent information comprehensively, HyperMatch first takes advantage of the hidden representations in the middle layers of RoBERTa and integrates them as the word embeddings via an adaptive mixing layer to capture the hierarchical syntactic and semantic structures. Then, considering the latent structure information hidden in natural languages, HyperMatch embeds candidate phrases and documents in the same hyperbolic space via a hyperbolic phrase encoder and a hyperbolic document encoder. To discriminate importance accurately, HyperMatch estimates the importance of each candidate phrase by explicitly modeling the phrase-document relevance via the Poincar{\'e} distance and optimizes the whole model by minimizing the hyperbolic margin-based triplet loss. Extensive experiments are conducted on six benchmark datasets and demonstrate that HyperMatch outperforms the recent state-of-the-art baselines.
null
null
10.18653/v1/2022.naacl-main.419
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,118
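The HyperMatch entry above scores phrase-document relevance with the Poincar{\'e} distance in hyperbolic space. The standard formula for two points inside the unit ball is shown below on toy vectors; the embeddings are placeholders, not outputs of the paper's encoders.

```python
# Minimal sketch of the Poincare distance used to score phrase-document relevance in
# hyperbolic space; inputs must lie strictly inside the unit ball.
import numpy as np

def poincare_distance(u, v, eps=1e-7):
    """Geodesic distance between two points of the Poincare ball."""
    sq_u = np.clip(np.sum(u * u), 0, 1 - eps)
    sq_v = np.clip(np.sum(v * v), 0, 1 - eps)
    sq_diff = np.sum((u - v) ** 2)
    x = 1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v))
    return np.arccosh(x)

phrase = np.array([0.1, 0.2])
doc_near = np.array([0.12, 0.18])
doc_far = np.array([-0.7, 0.6])
print(poincare_distance(phrase, doc_near))   # small distance -> high relevance
print(poincare_distance(phrase, doc_far))    # large distance -> low relevance
```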
inproceedings
ma-etal-2022-template
Template-free Prompt Tuning for Few-shot {NER}
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.420/
Ma, Ruotian and Zhou, Xin and Gui, Tao and Tan, Yiding and Li, Linyang and Zhang, Qi and Huang, Xuanjing
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5721--5732
Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words. However, when applied to token-level labeling tasks such as NER, it would be time-consuming to enumerate the template queries over all potential entity spans. In this work, we propose a more elegant method to reformulate NER tasks as LM problems without any templates. Specifically, we discard the template construction process while maintaining the word prediction paradigm of pre-training models to predict a class-related pivot word (or label word) at the entity position. Meanwhile, we also explore principled ways to automatically search for appropriate label words that the pre-trained models can easily adapt to. While avoiding the complicated template-based process, the proposed LM objective also reduces the gap between different objectives used in pre-training and fine-tuning, thus it can better benefit the few-shot performance. Experimental results demonstrate the effectiveness of the proposed method over bert-tagger and template-based method under few-shot settings. Moreover, the decoding speed of the proposed method is up to 1930.12 times faster than the template-based method.
null
null
10.18653/v1/2022.naacl-main.420
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,119
inproceedings
popovic-farber-2022-shot
Few-Shot Document-Level Relation Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.421/
Popovic, Nicholas and F{\"a}rber, Michael
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5733--5746
We present FREDo, a few-shot document-level relation extraction (FSDLRE) benchmark. As opposed to existing benchmarks which are built on sentence-level relation extraction corpora, we argue that document-level corpora provide more realism, particularly regarding none-of-the-above (NOTA) distributions. Therefore, we propose a set of FSDLRE tasks and construct a benchmark based on two existing supervised learning data sets, DocRED and sciERC. We adapt the state-of-the-art sentence-level method MNAV to the document-level and develop it further for improved domain adaptation. We find FSDLRE to be a challenging setting with interesting new characteristics such as the ability to sample NOTA instances from the support set. The data, code, and trained models are available online (\url{https://github.com/nicpopovic/FREDo}).
null
null
10.18653/v1/2022.naacl-main.421
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,120
inproceedings
ji-etal-2022-lamemo
{L}a{M}emo: Language Modeling with Look-Ahead Memory
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.422/
Ji, Haozhe and Zhang, Rongsheng and Yang, Zhenyu and Hu, Zhipeng and Huang, Minlie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5747--5762
Although Transformers with fully connected self-attentions are powerful to model long-term dependencies, they struggle to scale to long texts with thousands of words in language modeling. One of the solutions is to equip the model with a recurrence memory. However, existing approaches directly reuse hidden states from the previous segment that encodes contexts in a uni-directional way. As a result, this prevents the memory from dynamically interacting with the current context that provides up-to-date information for token prediction. To remedy this issue, we propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens and interpolating with the old memory states to maintain long-term information in the history. LaMemo embraces bi-directional attention and segment recurrence with an additional computation overhead only linearly proportional to the memory length. Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory mechanisms.
null
null
10.18653/v1/2022.naacl-main.422
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,121
inproceedings
felhi-etal-2022-exploiting
Exploiting Inductive Bias in Transformers for Unsupervised Disentanglement of Syntax and Semantics with {VAE}s
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.423/
Felhi, Ghazi and Le Roux, Joseph and Seddah, Djam{\'e}
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5763--5776
We propose a generative model for text generation, which exhibits disentangled latent representations of syntax and semantics. Contrary to previous work, this model does not need syntactic information such as constituency parses, or semantic information such as paraphrase pairs. Our model relies solely on the inductive bias found in attention-based architectures such as Transformers. In the attention of Transformers, $keys$ handle information selection while $values$ specify what information is conveyed. Our model, dubbed QKVAE, uses Attention in its decoder to read latent variables where one latent variable infers keys while another infers values. We run experiments on latent representations and experiments on syntax/semantics transfer which show that QKVAE displays clear signs of disentangled syntax and semantics. We also show that our model displays competitive syntax transfer capabilities when compared to supervised models and that comparable supervised models need a fairly large amount of data (more than 50K samples) to outperform it on both syntactic and semantic transfer. The code for our experiments is publicly available.
null
null
10.18653/v1/2022.naacl-main.423
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,122
inproceedings
zeng-etal-2022-neighbors
Neighbors Are Not Strangers: Improving Non-Autoregressive Translation under Low-Frequency Lexical Constraints
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.424/
Zeng, Chun and Chen, Jiangjie and Zhuang, Tianyi and Xu, Rui and Yang, Hao and Ying, Qin and Tao, Shimin and Xiao, Yanghua
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5777--5790
Lexically constrained neural machine translation (NMT) draws much industrial attention for its practical usage in specific domains. However, current autoregressive approaches suffer from high latency. In this paper, we focus on non-autoregressive translation (NAT) for this problem for its efficiency advantage. We identify that current constrained NAT models, which are based on iterative editing, do not handle low-frequency constraints well. To this end, we propose a plug-in algorithm for this line of work, i.e., Aligned Constrained Training (ACT), which alleviates this problem by familiarizing the model with the source-side context of the constraints. Experiments on the general and domain datasets show that our model improves over the backbone constrained NAT model in constraint preservation and translation quality, especially for rare constraints.
null
null
10.18653/v1/2022.naacl-main.424
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,123
inproceedings
henlein-mehler-2022-toothbrushes
What do Toothbrushes do in the Kitchen? How Transformers Think our World is Structured
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.425/
Henlein, Alexander and Mehler, Alexander
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5791--5807
Transformer-based models are now predominant in NLP. They outperform approaches based on static models in many respects. This success has in turn prompted research that reveals a number of biases in the language models generated by transformers. In this paper we utilize this research on biases to investigate to what extent transformer-based language models allow for extracting knowledge about object relations (X occurs in Y; X consists of Z; action A involves using X). To this end, we compare contextualized models with their static counterparts. We make this comparison dependent on the application of a number of similarity measures and classifiers. Our results are threefold: Firstly, we show that the models combined with the different similarity measures differ greatly in terms of the amount of knowledge they allow for extracting. Secondly, our results suggest that similarity measures perform much worse than classifier-based approaches. Thirdly, we show that, surprisingly, static models perform almost as well as contextualized models {--} in some cases even better.
null
null
10.18653/v1/2022.naacl-main.425
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,124
inproceedings
zhong-etal-2022-less
Less is More: Learning to Refine Dialogue History for Personalized Dialogue Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.426/
Zhong, Hanxun and Dou, Zhicheng and Zhu, Yutao and Qian, Hongjin and Wen, Ji-Rong
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5808--5820
Personalized dialogue systems explore the problem of generating responses that are consistent with the user`s personality, which has raised much attention in recent years. Existing personalized dialogue systems have tried to extract user profiles from dialogue history to guide personalized response generation. Since the dialogue history is usually long and noisy, most existing methods truncate the dialogue history to model the user`s personality. Such methods can generate some personalized responses, but a large part of dialogue history is wasted, leading to sub-optimal performance of personalized response generation. In this work, we propose to refine the user dialogue history on a large scale, based on which we can handle more dialogue history and obtain more abundant and accurate persona information. Specifically, we design an MSP model which consists of three personal information refiners and a personalized response generator. With these multi-level refiners, we can sparsely extract the most valuable information (tokens) from the dialogue history and leverage other similar users' data to enhance personalization. Experimental results on two real-world datasets demonstrate the superiority of our model in generating more informative and personalized responses.
null
null
10.18653/v1/2022.naacl-main.426
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,125
inproceedings
pacheco-etal-2022-holistic
A Holistic Framework for Analyzing the {COVID}-19 Vaccine Debate
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.427/
Pacheco, Maria Leonor and Islam, Tunazzina and Mahajan, Monal and Shor, Andrey and Yin, Ming and Ungar, Lyle and Goldwasser, Dan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5821--5839
The Covid-19 pandemic has led to an infodemic of low-quality information, leading to poor health decisions. Combating the outcomes of this infodemic is not only a question of identifying false claims, but also reasoning about the decisions individuals make. In this work we propose a holistic analysis framework connecting stance and reason analysis, and fine-grained entity level moral sentiment analysis. We study how to model the dependencies between the different levels of analysis and incorporate human insights into the learning process. Experiments show that our framework provides reliable predictions even in low-supervision settings.
null
null
10.18653/v1/2022.naacl-main.427
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,126
inproceedings
liu-etal-2022-learning-win
Learning to Win Lottery Tickets in {BERT} Transfer via Task-agnostic Mask Training
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.428/
Liu, Yuanxin and Meng, Fandong and Lin, Zheng and Fu, Peng and Cao, Yanan and Wang, Weiping and Zhou, Jie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5840--5857
Recent studies on the lottery ticket hypothesis (LTH) show that pre-trained language models (PLMs) like BERT contain matching subnetworks that have similar transfer learning performance as the original PLM. These subnetworks are found using magnitude-based pruning. In this paper, we find that the BERT subnetworks have even more potential than these studies have shown. Firstly, we discover that the success of magnitude pruning can be attributed to the preserved pre-training performance, which correlates with the downstream transferability. Inspired by this, we propose to directly optimize the subnetwork structure towards the pre-training objectives, which can better preserve the pre-training performance. Specifically, we train binary masks over model weights on the pre-training tasks, with the aim of preserving the universal transferability of the subnetwork, which is agnostic to any specific downstream tasks. We then fine-tune the subnetworks on the GLUE benchmark and the SQuAD dataset. The results show that, compared with magnitude pruning, mask training can effectively find BERT subnetworks with improved overall performance on downstream tasks. Moreover, our method is also more efficient in searching subnetworks and more advantageous when fine-tuning within a certain range of data scarcity. Our code is available at \url{https://github.com/llyx97/TAMT}.
null
null
10.18653/v1/2022.naacl-main.428
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,127
inproceedings
li-etal-2022-dont
You Don`t Know My Favorite Color: Preventing Dialogue Representations from Revealing Speakers' Private Personas
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.429/
Li, Haoran and Song, Yangqiu and Fan, Lixin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5858--5870
Social chatbots, also known as chit-chat chatbots, evolve rapidly with large pretrained language models. Despite the huge progress, privacy concerns have arisen recently: training data of large language models can be extracted via model inversion attacks. On the other hand, the datasets used for training chatbots contain many private conversations between two individuals. In this work, we further investigate the privacy leakage of the hidden states of chatbots trained by language modeling which has not been well studied yet. We show that speakers' personas can be inferred through a simple neural network with high accuracy. To this end, we propose effective defense objectives to protect persona leakage from hidden states. We conduct extensive experiments to demonstrate that our proposed defense objectives can greatly reduce the attack accuracy from 37.6{\%} to 0.5{\%}. Meanwhile, the proposed objectives preserve language models' powerful generation ability.
null
null
10.18653/v1/2022.naacl-main.429
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,128
inproceedings
khalid-lee-2022-explaining
Explaining Dialogue Evaluation Metrics using Adversarial Behavioral Analysis
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.430/
Khalid, Baber and Lee, Sungjin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5871--5883
There is an increasing trend in using neural methods for dialogue model evaluation. Lack of a framework to investigate these metrics can cause dialogue models to reflect their biases and cause unforeseen problems during interactions. In this work, we propose an adversarial test-suite which generates problematic variations of various dialogue aspects, e.g. logical entailment, using automatic heuristics. We show that dialogue metrics for both open-domain and task-oriented settings are biased in their assessments of different conversation behaviors and fail to properly penalize problematic conversations, by analyzing their assessments of these problematic examples. We conclude that variability in training methodologies and data-induced biases are some of the main causes of these problems. We also conduct an investigation into the metric behaviors using a black-box interpretability model which corroborates our findings and provides evidence that metrics pay attention to the problematic conversational constructs signaling a misunderstanding of different conversation semantics.
null
null
10.18653/v1/2022.naacl-main.430
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,129
inproceedings
sap-etal-2022-annotators
Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.431/
Sap, Maarten and Swayamdipta, Swabha and Vianna, Laura and Zhou, Xuhui and Choi, Yejin and Smith, Noah A.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5884--5906
The perceived toxicity of language can vary based on someone`s identity and beliefs, but this variation is often ignored when collecting toxic language datasets, resulting in dataset and model biases. We seek to understand the *who*, *why*, and *what* behind biases in toxicity annotations. In two online studies with demographically and politically diverse participants, we investigate the effect of annotator identities (*who*) and beliefs (*why*), drawing from social psychology research about hate speech, free speech, racist beliefs, political leaning, and more. We disentangle *what* is annotated as toxic by considering posts with three characteristics: anti-Black language, African American English (AAE) dialect, and vulgarity. Our results show strong associations between annotator identity and beliefs and their ratings of toxicity. Notably, more conservative annotators and those who scored highly on our scale for racist beliefs were less likely to rate anti-Black language as toxic, but more likely to rate AAE as toxic. We additionally present a case study illustrating how a popular toxicity detection system`s ratings inherently reflect only specific beliefs and perspectives. Our findings call for contextualizing toxicity labels in social variables, which raises immense implications for toxic language annotation and detection.
null
null
10.18653/v1/2022.naacl-main.431
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,130
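The study in the record above is primarily an annotation analysis, but its core quantitative step, associating annotator identity and belief measures with their toxicity ratings, can be sketched in a few lines. The snippet below is a hypothetical illustration only: the annotator scores and ratings are random synthetic placeholders rather than the paper's data, and Pearson correlation is just one of several association tests one might use.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_annotators = 200

# Hypothetical per-annotator measurements (random placeholders, NOT real data):
# a 1-5 "racist beliefs" scale score and mean toxicity ratings for two post types.
beliefs_scale = rng.uniform(1.0, 5.0, n_annotators)
mean_rating_anti_black = rng.uniform(0.0, 1.0, n_annotators)
mean_rating_aae = rng.uniform(0.0, 1.0, n_annotators)

# With real annotations, these correlations are where identity/belief effects
# such as the ones reported in the abstract would show up.
for post_type, ratings in [("anti-Black posts", mean_rating_anti_black),
                           ("AAE posts", mean_rating_aae)]:
    r, p = pearsonr(beliefs_scale, ratings)
    print(f"{post_type}: r = {r:+.2f}, p = {p:.2f}")
```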
inproceedings
fang-etal-2022-non
Non-Autoregressive {C}hinese {ASR} Error Correction with Phonological Training
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.432/
Fang, Zheng and Zhang, Ruiqing and He, Zhongjun and Wu, Hua and Cao, Yanan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5907--5917
Automatic Speech Recognition (ASR) is an efficient and widely used input method that transcribes speech signals into text. As the errors introduced by ASR systems impair the performance of downstream tasks, we introduce a post-processing error correction method, PhVEC, that corrects errors in text space. For errors in ASR results, existing work mainly focuses on fixed-length correction, modifying each wrong token into a correct one (one-to-one correction), but rarely considers variable-length correction (one-to-many or many-to-one correction). In this paper, we propose an efficient non-autoregressive (NAR) method for Chinese ASR error correction that handles both cases. Instead of predicting the sentence length, as conventional NAR methods do, we propose a novel approach that uses phonological tokens to extend the source sentence for variable-length correction, enabling our model to generate phonetically similar corrections. Experimental results on datasets from different domains show that our method achieves significant improvement in word error rate reduction and speeds up inference by 6.2 times compared with the autoregressive model.
null
null
10.18653/v1/2022.naacl-main.432
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,131
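A minimal sketch of the source-extension idea in the abstract of the record above, assuming a toy pinyin table and a pre-existing error-detection step (this is not the PhVEC implementation): characters flagged as errors are replaced by their phonological tokens, so the extended source gives a non-autoregressive decoder room to realize one-to-many or many-to-one corrections.

```python
from typing import Dict, List

# Toy grapheme-to-phoneme table (a real system would use a full pinyin lexicon).
TOY_PINYIN: Dict[str, List[str]] = {
    "译": ["y", "i"],
    "员": ["y", "uan"],
    "翻": ["f", "an"],
}

def extend_source(chars: List[str], error_positions: List[int]) -> List[str]:
    """Build the extended source for the NAR corrector: characters flagged by a
    detector are replaced with their phoneme tokens, so the corrected output for
    that slot is no longer forced to be exactly one character long."""
    extended: List[str] = []
    for i, ch in enumerate(chars):
        if i in error_positions:
            extended.extend(TOY_PINYIN.get(ch, [ch]))
        else:
            extended.append(ch)
    return extended

print(extend_source(list("翻译员"), error_positions=[2]))
# -> ['翻', '译', 'y', 'uan']  — the flagged slot now spans two phoneme tokens,
#    giving the length-preserving NAR decoder room for a one-to-many correction.
```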
inproceedings
yu-etal-2022-hate
Hate Speech and Counter Speech Detection: Conversational Context Does Matter
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.433/
Yu, Xinchen and Blanco, Eduardo and Hong, Lingzi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
5918--5930
Hate speech is plaguing cyberspace along with the growth of user-generated content. Adding counter speech has become an effective way to combat hate speech online. Existing datasets and models target either (a) hate speech or (b) hate and counter speech, but disregard the context. This paper investigates the role of context in the annotation and detection of online hate and counter speech, where context is defined as the preceding comment in a conversation thread. We created a context-aware dataset for a 3-way classification task on Reddit comments: hate speech, counter speech, or neutral. Our analyses indicate that context is critical for identifying hate and counter speech: human judgments change for most comments depending on whether we show annotators the context. A linguistic analysis draws insights into the language people use to express hate and counter speech. Experimental results show that neural networks obtain significantly better results when context is taken into account. We also present qualitative error analyses shedding light on (a) when and why context is beneficial and (b) the remaining errors made by our best model when context is taken into account.
null
null
10.18653/v1/2022.naacl-main.433
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,132
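To make the "context matters" setup of the record above concrete, here is a minimal sketch (assumptions only, not the authors' released model) of the standard way to feed a (parent comment, target comment) pair to a pretrained encoder for the 3-way hate / counter / neutral decision. The checkpoint name, label order, and maximum length are illustrative choices.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["hate", "counter", "neutral"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def classify(context: str, target: str) -> str:
    """Score the target comment in light of its parent comment (the 'context')
    by encoding the two texts as a sentence pair."""
    inputs = tokenizer(context, target, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# The classification head here is untrained, so the prediction is arbitrary;
# the point is how the (context, target) pairing enters the model.
print(classify("Parent comment goes here.", "Reply being classified goes here."))
```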