Dataset schema (feature name, type, and value statistics):

- entry_type: string (4 distinct values)
- citation_key: string (length 10 to 110)
- title: string (length 6 to 276)
- editor: string (723 distinct values)
- month: string (69 distinct values)
- year: date (1963-01-01 to 2022-01-01)
- address: string (202 distinct values)
- publisher: string (41 distinct values)
- url: string (length 34 to 62)
- author: string (length 6 to 2.07k)
- booktitle: string (861 distinct values)
- pages: string (length 1 to 12)
- abstract: string (length 302 to 2.4k)
- journal: string (5 distinct values)
- volume: string (24 distinct values)
- doi: string (length 20 to 39)
- n: string (3 distinct values)
- wer: string (1 distinct value)
- uas: null
- language: string (3 distinct values)
- isbn: string (34 distinct values)
- recall: null
- number: string (8 distinct values)
- a: null
- b: null
- c: null
- k: null
- f1: string (4 distinct values)
- r: string (2 distinct values)
- mci: string (1 distinct value)
- p: string (2 distinct values)
- sd: string (1 distinct value)
- female: string (0 distinct values)
- m: string (0 distinct values)
- food: string (1 distinct value)
- f: string (1 distinct value)
- note: string (20 distinct values)
- __index_level_0__: int64 (range 22k to 106k)
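The listing above is the feature summary a dataset viewer typically produces for a bibliographic corpus of this shape, so the natural way to work with the rows shown below is the Hugging Face `datasets` library. The sketch that follows is illustrative only: the repository id and the `train` split are placeholder assumptions, since this section does not name the actual dataset path.

```python
# Illustrative sketch, not the dataset's documented API.
# "user/acl-anthology-bib" and split="train" are placeholders; substitute the
# real repository id and split for this dataset.
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")

# Keep inproceedings rows from Findings of NAACL 2022 that carry a DOI,
# matching the example rows listed below.
naacl_findings_2022 = ds.filter(
    lambda row: row["entry_type"] == "inproceedings"
    and row["booktitle"] == "Findings of the Association for Computational Linguistics: NAACL 2022"
    and row["doi"] is not None
)

# Print a few citation keys and titles.
for row in naacl_findings_2022.select(range(3)):
    print(row["citation_key"], "-", row["title"])
```

The sparsely populated columns near the end of the schema (wer, uas, f1, and so on) are null in every example row shown below, so any filter should treat them as optional.
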
entry_type: inproceedings
citation_key: vig-etal-2022-exploring
title: Exploring Neural Models for Query-Focused Summarization
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.109/
author: Vig, Jesse and Fabbri, Alexander and Kryscinski, Wojciech and Wu, Chien-Sheng and Liu, Wenhao
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1455--1468
abstract:
Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization. While recently released datasets, such as QMSum or AQuaMuSe, facilitate research efforts in QFS, the field lacks a comprehensive study of the broad space of applicable modeling methods. In this paper we conduct a systematic exploration of neural approaches to QFS, considering two general classes of methods: two-stage extractive-abstractive solutions and end-to-end models. Within those categories, we investigate existing models and explore strategies for transfer learning. We also present two modeling extensions that achieve state-of-the-art performance on the QMSum dataset, up to a margin of 3.38 ROUGE-1, 3.72 ROUGE2, and 3.28 ROUGE-L when combined with transfer learning strategies. Results from human evaluation suggest that the best models produce more comprehensive and factually consistent summaries compared to a baseline model. Code and checkpoints are made publicly available: \url{https://github.com/salesforce/query-focused-sum}.
doi: 10.18653/v1/2022.findings-naacl.109
__index_level_0__: 26,366
(all other fields: null)

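Each row mirrors a BibTeX record from the ACL Anthology: entry_type and citation_key play the role of the BibTeX entry kind and key, and the populated columns (title, author, editor, booktitle, pages, doi, and so on) are standard BibTeX fields. As a rough sketch of that mapping, the helper below (row_to_bibtex is hypothetical, not part of any library) renders a row such as the one above back into a .bib entry, skipping null columns.

```python
# Hypothetical helper: render one dataset row back into a BibTeX entry.
# Only non-null fields are emitted, since most metric-style columns are null.
BIB_FIELDS = [
    "title", "author", "editor", "booktitle", "month", "year",
    "address", "publisher", "url", "pages", "doi", "abstract",
]

def row_to_bibtex(row: dict) -> str:
    body = ",\n".join(
        f"    {key} = {{{row[key]}}}"
        for key in BIB_FIELDS
        if row.get(key) is not None
    )
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

# Example with a subset of the fields from the row above.
example = {
    "entry_type": "inproceedings",
    "citation_key": "vig-etal-2022-exploring",
    "title": "Exploring Neural Models for Query-Focused Summarization",
    "booktitle": "Findings of the Association for Computational Linguistics: NAACL 2022",
    "year": "2022",
    "pages": "1455--1468",
    "doi": "10.18653/v1/2022.findings-naacl.109",
}
print(row_to_bibtex(example))
```

Because the year column is typed as a date in the schema, a real implementation may need to normalize it back to a four-digit year before emitting the field.
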
entry_type: inproceedings
citation_key: briakou-etal-2022-bitextedit
title: {B}itext{E}dit: Automatic Bitext Editing for Improved Low-Resource Machine Translation
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.110/
author: Briakou, Eleftheria and Wang, Sida and Zettlemoyer, Luke and Ghazvininejad, Marjan
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1469--1485
abstract:
Mined bitexts can contain imperfect translations that yield unreliable training signals for Neural Machine Translation (NMT). While filtering such pairs out is known to improve final model quality, we argue that it is suboptimal in low-resource conditions where even mined data can be limited. In our work, we propose instead, to refine the mined bitexts via automatic editing: given a sentence in a language $x_f$, and a possibly imperfect translation of it $\mathbf{x_e}$, our model generates a revised version $x_f'$ or $x_e'$ that yields a more equivalent translation pair (i.e., {\ensuremath{<}}$x_f, x_e'${\ensuremath{>}} or {\ensuremath{<}}$x_f', x_e${\ensuremath{>}}). We use a simple editing strategy by (1) mining potentially imperfect translations for each sentence in a given bitext, (2) learning a model to reconstruct the original translations and translate, in a multi-task fashion. Experiments demonstrate that our approach successfully improves the quality of CCMatrix mined bitext for 5 low-resource language-pairs and 10 translation directions by up to 8 BLEU points, in most cases improving upon a competitive translation-based baseline.
doi: 10.18653/v1/2022.findings-naacl.110
__index_level_0__: 26,367
(all other fields: null)

entry_type: inproceedings
citation_key: murakhovska-etal-2022-mixqg
title: {M}ix{QG}: Neural Question Generation with Mixed Answer Types
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.111/
author: Murakhovs{'}ka, Lidiya and Wu, Chien-Sheng and Laban, Philippe and Niu, Tong and Liu, Wenhao and Xiong, Caiming
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1486--1497
abstract:
Asking good questions is an essential ability for both human and machine intelligence. However, existing neural question generation approaches mainly focus on short factoid type of answers. In this paper, we introduce a neural question generator, MixQG, to bridge this gap. We combine nine question answering datasets with diverse answer types, including yes/no, multiple-choice, extractive, and abstractive answers, to train a single generative model. We show with empirical results that our model outperforms existing work in both seen and unseen domains, and can generate questions with different cognitive levels when conditioned on different answer types. We run a human evaluation study to assess the quality of generated questions and find that MixQG outperforms the next best model by 10{\%}. Our code and model checkpoints will be released and integrated with the HuggingFace library to facilitate various downstream applications.
doi: 10.18653/v1/2022.findings-naacl.111
__index_level_0__: 26,368
(all other fields: null)

entry_type: inproceedings
citation_key: rosin-radinsky-2022-temporal
title: Temporal Attention for Language Models
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.112/
author: Rosin, Guy D. and Radinsky, Kira
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1498--1508
abstract:
Pretrained language models based on the transformer architecture have shown great success in NLP. Textual training data often comes from the web and is thus tagged with time-specific information, but most language models ignore this information. They are trained on the textual data alone, limiting their ability to generalize temporally. In this work, we extend the key component of the transformer architecture, i.e., the self-attention mechanism, and propose temporal attention - a time-aware self-attention mechanism. Temporal attention can be applied to any transformer model and requires the input texts to be accompanied with their relevant time points. This mechanism allows the transformer to capture this temporal information and create time-specific contextualized word representations. We leverage these representations for the task of semantic change detection; we apply our proposed mechanism to BERT and experiment on three datasets in different languages (English, German, and Latin) that also vary in time, size, and genre. Our proposed model achieves state-of-the-art results on all the datasets.
doi: 10.18653/v1/2022.findings-naacl.112
__index_level_0__: 26,369
(all other fields: null)

entry_type: inproceedings
citation_key: brazinskas-etal-2022-efficient
title: Efficient Few-Shot Fine-Tuning for Opinion Summarization
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.113/
author: Brazinskas, Arthur and Nallapati, Ramesh and Bansal, Mohit and Dreyer, Markus
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1509--1523
abstract:
Abstractive summarization models are typically pre-trained on large amounts of generic texts, then fine-tuned on tens or hundreds of thousands of annotated samples. However, in opinion summarization, large annotated datasets of reviews paired with reference summaries are not available and would be expensive to create. This calls for fine-tuning methods robust to overfitting on small datasets. In addition, generically pre-trained models are often not accustomed to the specifics of customer reviews and, after fine-tuning, yield summaries with disfluencies and semantic mistakes. To address these problems, we utilize an efficient few-shot method based on adapters which, as we show, can easily store in-domain knowledge. Instead of fine-tuning the entire model, we add adapters and pre-train them in a task-specific way on a large corpus of unannotated customer reviews, using held-out reviews as pseudo summaries. Then, fine-tune the adapters on the small available human-annotated dataset. We show that this self-supervised adapter pre-training improves summary quality over standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp datasets, respectively. Finally, for summary personalization, we condition on aspect keyword queries, automatically created from generic datasets. In the same vein, we pre-train the adapters in a query-based manner on customer reviews and then fine-tune them on annotated datasets. This results in better-organized summary content reflected in improved coherence and fewer redundancies.
doi: 10.18653/v1/2022.findings-naacl.113
__index_level_0__: 26,370
(all other fields: null)

entry_type: inproceedings
citation_key: oguz-etal-2022-domain
title: Domain-matched Pre-training Tasks for Dense Retrieval
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.114/
author: Oguz, Barlas and Lakhotia, Kushal and Gupta, Anchit and Lewis, Patrick and Karpukhin, Vladimir and Piktus, Aleksandra and Chen, Xilun and Riedel, Sebastian and Yih, Scott and Gupta, Sonal and Mehdad, Yashar
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1524--1534
abstract:
Pre-training on larger datasets with ever increasing model size is now a proven recipe for increased performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on 1) a recently released set of 65 million synthetically generated questions, and 2) 200 million post-comment pairs from a preexisting dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.
doi: 10.18653/v1/2022.findings-naacl.114
__index_level_0__: 26,371
(all other fields: null)

entry_type: inproceedings
citation_key: oguz-etal-2022-unik
title: {U}ni{K}-{QA}: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.115/
author: Oguz, Barlas and Chen, Xilun and Karpukhin, Vladimir and Peshterliev, Stan and Okhonko, Dmytro and Schlichtkrull, Michael and Gupta, Sonal and Mehdad, Yashar and Yih, Scott
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1535--1546
abstract:
We study open-domain question answering with structured, unstructured and semi-structured knowledge sources, including text, tables, lists and knowledge bases. Departing from prior work, we propose a unifying approach that homogenizes all sources by reducing them to text and applies the retriever-reader model which has so far been limited to text sources only. Our approach greatly improves the results on knowledge-base QA tasks by 11 points, compared to latest graph-based methods. More importantly, we demonstrate that our unified knowledge (UniK-QA) model is a simple and yet effective way to combine heterogeneous sources of knowledge, advancing the state-of-the-art results on two popular question answering benchmarks, NaturalQuestions and WebQuestions, by 3.5 and 2.6 points, respectively. The code of UniK-QA is available at: \url{https://github.com/facebookresearch/UniK-QA}.
doi: 10.18653/v1/2022.findings-naacl.115
__index_level_0__: 26,372
(all other fields: null)

entry_type: inproceedings
citation_key: sekhon-etal-2022-white
title: White-box Testing of {NLP} models with Mask Neuron Coverage
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.116/
author: Sekhon, Arshdeep and Ji, Yangfeng and Dwyer, Matthew and Qi, Yanjun
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1547--1558
abstract:
Recent literature has seen growing interest in using black-box strategies for testing the behavior of NLP models. Research on white-box testing has developed a number of methods for evaluating how thoroughly the internal behavior of deep models is tested, but they are not applicable to NLP models. We propose a set of white-box testing methods that are customized for transformer-based NLP models. These include MASK NEURON COVERAGE (MNCOVER) that measures how thoroughly the attention layers in models are exercised during testing. We show that MNCOVER can refine testing suites generated by CheckList by substantially reducing them in size, for more than 60{\%} on average, while retaining failing tests {--} thereby concentrating the fault detection power of the test suite. Further we show how it can be used to guide CheckList input generation, evaluate alternative NLP testing methods, and drive data augmentation to improve accuracy.
doi: 10.18653/v1/2022.findings-naacl.116
__index_level_0__: 26,373
(all other fields: null)

entry_type: inproceedings
citation_key: nawrot-etal-2022-hierarchical
title: Hierarchical Transformers Are More Efficient Language Models
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.117/
author: Nawrot, Piotr and Tworkowski, Szymon and Tyrolski, Micha{\l} and Kaiser, Lukasz and Wu, Yuhuai and Szegedy, Christian and Michalewski, Henryk
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1559--1571
abstract:
Transformer models yield impressive results on many NLP and sequence modeling tasks. Remarkably, Transformers can handle long sequences, which allows them to produce long coherent outputs: entire paragraphs produced by GPT-3 or well-structured images produced by DALL-E. These large language models are impressive but also very inefficient and costly, which limits their applications and accessibility. We postulate that having an explicit hierarchical architecture is the key to Transformers that efficiently handle long sequences. To verify this claim, we first study different ways to downsample and upsample activations in Transformers so as to make them hierarchical. We use the best performing upsampling and downsampling layers to create Hourglass - a hierarchical Transformer language model. Hourglass improves upon the Transformer baseline given the same amount of computation and can yield the same results as Transformers more efficiently. In particular, Hourglass sets new state-of-the-art for Transformer models on the ImageNet32 generation task and improves language modeling efficiency on the widely studied enwik8 benchmark.
doi: 10.18653/v1/2022.findings-naacl.117
__index_level_0__: 26,374
(all other fields: null)

entry_type: inproceedings
citation_key: sharma-etal-2022-disarm
title: {DISARM}: Detecting the Victims Targeted by Harmful Memes
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.118/
author: Sharma, Shivam and Akhtar, Md Shad and Nakov, Preslav and Chakraborty, Tanmoy
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1572--1588
abstract:
Internet memes have emerged as an increasingly popular means of communication on the web. Although memes are typically intended to elicit humour, they have been increasingly used to spread hatred, trolling, and cyberbullying, as well as to target specific individuals, communities, or society on political, socio-cultural, and psychological grounds. While previous work has focused on detecting harmful, hateful, and offensive memes in general, identifying whom these memes attack (i.e., the {\textquoteleft}victims') remains a challenging and underexplored area. We attempt to address this problem in this paper. To this end, we create a dataset in which we annotate each meme with its victim(s) such as the name of the targeted person(s), organization(s), and community(ies). We then propose DISARM (Detecting vIctimS targeted by hARmful Memes), a framework that uses named-entity recognition and person identification to detect all entities a meme is referring to, and then, incorporates a novel contextualized multimodal deep neural network to classify whether the meme intends to harm these entities. We perform several systematic experiments on three different test sets, corresponding to entities that are (i) all seen while training, (ii) not seen as a harmful target while training, and (iii) not seen at all while training. The evaluation shows that DISARM significantly outperforms 10 unimodal and multimodal systems. Finally, we demonstrate that DISARM is interpretable and comparatively more generalizable and that it can reduce the relative error rate of harmful target identification by up to 9 {\%} absolute over multimodal baseline systems.
doi: 10.18653/v1/2022.findings-naacl.118
__index_level_0__: 26,375
(all other fields: null)

entry_type: inproceedings
citation_key: liu-etal-2022-kd
title: {KD}-{VLP}: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.119/
author: Liu, Yongfei and Wu, Chenfei and Tseng, Shao-Yen and Lal, Vasudev and He, Xuming and Duan, Nan
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1589--1600
abstract:
Self-supervised vision-and-language pretraining (VLP) aims to learn transferable multi-modal representations from large-scale image-text data and to achieve strong performances on a broad scope of vision-language tasks after finetuning. Previous mainstream VLP approaches typically adopt a two-step strategy relying on external object detectors to encode images in a multi-modal Transformer framework, which suffer from restrictive object concept space, limited image context and inefficient computation. In this paper, we propose an object-aware end-to-end VLP framework, which directly feeds image grid features from CNNs into the Transformer and learns the multi-modal representations jointly. More importantly, we propose to perform object knowledge distillation to facilitate learning cross-modal alignment at different semantic levels. To achieve that, we design two novel pretext tasks by taking object features and their semantic labels from external detectors as supervision: 1.) Object-guided masked vision modeling task focuses on enforcing object-aware representation learning in the multi-modal Transformer; 2.) Phrase-region alignment task aims to improve cross-modal alignment by utilizing the similarities between noun phrases and object labels in the linguistic space. Extensive experiments on a wide range of vision-language tasks demonstrate the efficacy of our proposed framework, and we achieve competitive or superior performances over the existing pretraining strategies.
doi: 10.18653/v1/2022.findings-naacl.119
__index_level_0__: 26,376
(all other fields: null)

entry_type: inproceedings
citation_key: guo-etal-2022-dependency
title: Dependency Position Encoding for Relation Extraction
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.120/
author: Guo, Qiushi and Wang, Xin and Gao, Dehong
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1601--1606
abstract:
Leveraging the dependency tree of the input sentence is able to improve the model performance for relation extraction. A challenging issue is how to remove confusions from the tree. Efforts have been made to utilize the dependency connections between words to selectively emphasize target-relevant information. However, these approaches are limited in focusing on exploiting dependency types. In this paper, we propose dependency position encoding (DPE), an efficient way of incorporating both dependency connections and dependency types into the self-attention mechanism to distinguish the importance of different word dependencies for the task. In contrast to previous studies that process input sentence and dependency information in separate streams, DPE can be seamlessly incorporated into the Transformer and makes it possible to use an one-stream scheme to extract relations between entity pairs. Extensive experiments show that models with our DPE significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED.
doi: 10.18653/v1/2022.findings-naacl.120
__index_level_0__: 26,377
(all other fields: null)

entry_type: inproceedings
citation_key: chen-etal-2022-good
title: Good Visual Guidance Make A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.121/
author: Chen, Xiang and Zhang, Ningyu and Li, Lei and Yao, Yunzhi and Deng, Shumin and Tan, Chuanqi and Huang, Fei and Si, Luo and Chen, Huajun
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1607--1618
abstract:
Multimodal named entity recognition and relation extraction (MNER and MRE) is a fundamental and crucial branch in information extraction. However, existing approaches for MNER and MRE usually suffer from error sensitivity when irrelevant object images incorporated in texts. To deal with these issues, we propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction, aiming to achieve more effective and robust performance. Specifically, we regard visual representation as pluggable visual prefix to guide the textual representation for error insensitive forecasting decision. We further propose a dynamic gated aggregation strategy to achieve hierarchical multi-scaled visual features as visual prefix for fusion. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our method, and achieve state-of-the-art performance.
doi: 10.18653/v1/2022.findings-naacl.121
__index_level_0__: 26,378
(all other fields: null)

entry_type: inproceedings
citation_key: shaar-etal-2022-role
title: The Role of Context in Detecting Previously Fact-Checked Claims
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.122/
author: Shaar, Shaden and Alam, Firoj and Da San Martino, Giovanni and Nakov, Preslav
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1619--1631
abstract:
Recent years have seen the proliferation of disinformation and fake news online. Traditional approaches to mitigate these issues is to use manual or automatic fact-checking. Recently, another approach has emerged: checking whether the input claim has previously been fact-checked, which can be done automatically, and thus fast, while also offering credibility and explainability, thanks to the human fact-checking and explanations in the associated fact-checking article. Here, we focus on claims made in a political debate and we study the impact of modeling the context of the claim: both on the source side, i.e., in the debate, as well as on the target side, i.e., in the fact-checking explanation document. We do this by modeling the local context, the global context, as well as by means of co-reference resolution, and multi-hop reasoning over the sentences of the document describing the fact-checked claim. The experimental results show that each of these represents a valuable information source, but that modeling the source-side context is most important, and can yield 10+ points of absolute improvement over a state-of-the-art model.
doi: 10.18653/v1/2022.findings-naacl.122
__index_level_0__: 26,379
(all other fields: null)

entry_type: inproceedings
citation_key: wu-etal-2022-pruning
title: Pruning Adatperfusion with Lottery Ticket Hypothesis
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.123/
author: Wu, Jiarun and Chen, Qingliang and Xiao, Zeguan and Gu, Yuliang and Sun, Mengsi
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1632--1646
abstract:
Pre-trained language models have shown great success in multiple downstream tasks. However, they are computationally expensive to fine-tune. Thus, transfer learning with adapter modules has been introduced to alleviate this problem, helping to extract knowledge of the downstream tasks. Adapterfusion models are an example of the transformers-with-adapter-modules, which merge multiple adapters to incorporate knowledge from different tasks. However, merging multiple adapters will inevitably cause redundancies, increasing the training and inference time massively. Therefore, in this paper, we propose an approach to identify the influence of each adapter module and a novel way to prune adapters based on the prestigious Lottery Ticket Hypothesis. Experiments on GLUE datasets show that the pruned Adapterfusion model with our scheme can achieve state-of-the-art results, reducing sizes significantly while keeping performance intact.
doi: 10.18653/v1/2022.findings-naacl.123
__index_level_0__: 26,380
(all other fields: null)

entry_type: inproceedings
citation_key: spithourakis-etal-2022-evi
title: {EVI}: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.124/
author: Spithourakis, Georgios and Vuli{\'c}, Ivan and Lis, Micha{\l} and Casanueva, Inigo and Budzianowski, Pawe{\l}
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1647--1659
abstract:
Knowledge-based authentication is crucial for task-oriented spoken dialogue systems that offer personalised and privacy-focused services. Such systems should be able to enrol (E), verify (V), and identify (I) new and recurring users based on their personal information, e.g. postcode, name, and date of birth. In this work, we formalise the three authentication tasks and their evaluation protocols, and we present EVI, a challenging spoken multilingual dataset with 5,506 dialogues in English, Polish, and French. Our proposed models set the first competitive benchmarks, explore the challenges of multilingual natural language processing of spoken dialogue, and set directions for future research.
doi: 10.18653/v1/2022.findings-naacl.124
__index_level_0__: 26,381
(all other fields: null)

entry_type: inproceedings
citation_key: jia-etal-2022-post
title: Post-Training Dialogue Summarization using Pseudo-Paraphrasing
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.125/
author: Jia, Qi and Liu, Yizhu and Tang, Haifeng and Zhu, Kenny
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1660--1669
abstract:
Previous dialogue summarization techniques adapt large language models pretrained on the narrative text by injecting dialogue-specific features into the models. These features either require additional knowledge to recognize or make the resulting models harder to tune. To bridge the format gap between dialogues and narrative summaries in dialogue summarization tasks, we propose to post-train pretrained language models (PLMs) to rephrase from dialogue to narratives. After that, the model is fine-tuned for dialogue summarization as usual. Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization and outperforms other SOTA models by the summary quality and implementation costs.
doi: 10.18653/v1/2022.findings-naacl.125
__index_level_0__: 26,382
(all other fields: null)

entry_type: inproceedings
citation_key: liu-etal-2022-dual
title: A Dual-Channel Framework for Sarcasm Recognition by Detecting Sentiment Conflict
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.126/
author: Liu, Yiyi and Wang, Yequan and Sun, Aixin and Meng, Xuying and Li, Jing and Guo, Jiafeng
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1670--1680
abstract:
Sarcasm employs ambivalence, where one says something positive but actually means negative, and vice versa. The essence of sarcasm, which is also a sufficient and necessary condition, is the conflict between literal and implied sentiments expressed in one sentence. However, it is difficult to recognize such sentiment conflict because the sentiments are mixed or even implicit. As a result, the recognition of sophisticated and obscure sentiment brings in a great challenge to sarcasm detection. In this paper, we propose a Dual-Channel Framework by modeling both literal and implied sentiments separately. Based on this dual-channel framework, we design the Dual-Channel Network (DC-Net) to recognize sentiment conflict. Experiments on political debates (i.e. IAC-V1 and IAC-V2) and Twitter datasets show that our proposed DC-Net achieves state-of-the-art performance on sarcasm recognition. Our code is released to support research.
doi: 10.18653/v1/2022.findings-naacl.126
__index_level_0__: 26,383
(all other fields: null)

entry_type: inproceedings
citation_key: bhargav-etal-2022-zero
title: Zero-shot Entity Linking with Less Data
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.127/
author: Bhargav, G P Shrivatsa and Khandelwal, Dinesh and Dana, Saswati and Garg, Dinesh and Kapanipathi, Pavan and Roukos, Salim and Gray, Alexander and Subramaniam, L Venkata
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1681--1697
abstract:
Entity Linking (EL) maps an entity mention in a natural language sentence to an entity in a knowledge base (KB). The Zero-shot Entity Linking (ZEL) extends the scope of EL to unseen entities at the test time without requiring new labeled data. BLINK (BERT-based) is one of the SOTA models for ZEL. Interestingly, we discovered that BLINK exhibits diminishing returns, i.e., it reaches 98{\%} of its performance with just 1{\%} of the training data and the remaining 99{\%} of the data yields only a marginal increase of 2{\%} in the performance. While this extra 2{\%} gain makes a huge difference for downstream tasks, training BLINK on large amounts of data is very resource-intensive and impractical. In this paper, we propose a neuro-symbolic, multi-task learning approach to bridge this gap. Our approach boosts the BLINK`s performance with much less data by exploiting an auxiliary information about entity types. Specifically, we train our model on two tasks simultaneously - entity linking (primary task) and hierarchical entity type prediction (auxiliary task). The auxiliary task exploits the hierarchical structure of entity types. Our approach achieves superior performance on ZEL task with significantly less training data. On four different benchmark datasets, we show that our approach achieves significantly higher performance than SOTA models when they are trained with just 0.01{\%}, 0.1{\%}, or 1{\%} of the original training data. Our code is available at \url{https://github.com/IBM/NeSLET}.
doi: 10.18653/v1/2022.findings-naacl.127
__index_level_0__: 26,384
(all other fields: null)

entry_type: inproceedings
citation_key: wang-etal-2022-graphcache
title: {G}raph{C}ache: Message Passing as Caching for Sentence-Level Relation Extraction
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.128/
author: Wang, Yiwei and Chen, Muhao and Zhou, Wenxuan and Cai, Yujun and Liang, Yuxuan and Hooi, Bryan
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1698--1708
abstract:
Entity types and textual context are essential properties for sentence-level relation extraction (RE). Existing work only encodes these properties within individual instances, which limits the performance of RE given the insufficient features in a single sentence. In contrast, we model these properties from the whole dataset and use the dataset-level information to enrich the semantics of every instance. We propose the GraphCache (Graph Neural Network as Caching) module, that propagates the features across sentences to learn better representations for RE. GraphCache aggregates the features from sentences in the whole dataset to learn global representations of properties, and use them to augment the local features within individual sentences. The global property features act as dataset-level prior knowledge for RE, and a complement to the sentence-level features. Inspired by the classical caching technique in computer systems, we develop GraphCache to update the property representations in an online manner. Overall, GraphCache yields significant effectiveness gains on RE and enables efficient message passing across all sentences in the dataset.
doi: 10.18653/v1/2022.findings-naacl.128
__index_level_0__: 26,385
(all other fields: null)

entry_type: inproceedings
citation_key: zhao-etal-2022-revisiting
title: Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.129/
author: Zhao, Chao and Brahman, Faeze and Huang, Tenghao and Chaturvedi, Snigdha
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1709--1718
abstract:
Pre-trained models (PTMs) have lead to great improvements in natural language generation (NLG). However, it is still unclear how much commonsense knowledge they possess. With the goal of evaluating commonsense knowledge of NLG models, recent work has proposed the problem of generative commonsense reasoning, e.g., to compose a logical sentence given a set of unordered concepts. Existing approaches to this problem hypothesize that PTMs lack sufficient parametric knowledge for this task, which can be overcome by introducing external knowledge or task-specific pre-training objectives. Different from this trend, we argue that PTM`s inherent ability for generative commonsense reasoning is underestimated due to the order-agnostic property of its input. In particular, we hypothesize that the order of the input concepts can affect the PTM`s ability to utilize its commonsense knowledge. To this end, we propose a pre-ordering approach to elaborately manipulate the order of the given concepts before generation. Experiments show that our approach can outperform the more sophisticated models that have access to a lot of external data and resources.
doi: 10.18653/v1/2022.findings-naacl.129
__index_level_0__: 26,386
(all other fields: null)

entry_type: inproceedings
citation_key: wang-etal-2022-identifying
title: Identifying and Mitigating Spurious Correlations for Improving Robustness in {NLP} Models
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.130/
author: Wang, Tianlu and Sridhar, Rohit and Yang, Diyi and Wang, Xuezhi
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1719--1729
abstract:
Recently, NLP models have achieved remarkable progress across a variety of tasks; however, they have also been criticized for being not robust. Many robustness problems can be attributed to models exploiting {\textquotedblleft}spurious correlations{\textquotedblright}, or {\textquotedblleft}shortcuts{\textquotedblright} between the training data and the task labels. Most existing work identifies a limited set of task-specific shortcuts via human priors or error analyses, which requires extensive expertise and efforts. In this paper, we aim to automatically identify such spurious correlations in NLP models at scale. We first leverage existing interpretability methods to extract tokens that significantly affect model`s decision process from the input text. We then distinguish {\textquotedblleft}genuine{\textquotedblright} tokens and {\textquotedblleft}spurious{\textquotedblright} tokens by analyzing model predictions across multiple corpora and further verify them through knowledge-aware perturbations. We show that our proposed method can effectively and efficiently identify a scalable set of {\textquotedblleft}shortcuts{\textquotedblright}, and mitigating these leads to more robust models in multiple applications.
doi: 10.18653/v1/2022.findings-naacl.130
__index_level_0__: 26,387
(all other fields: null)

entry_type: inproceedings
citation_key: jiang-etal-2022-great
title: $Great~Truths~are ~Always ~Simple:$ A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.131/
author: Jiang, Jinhao and Zhou, Kun and Wen, Ji-Rong and Zhao, Xin
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1730--1741
abstract:
Commonsense reasoning in natural language is a desired ability of artificial intelligent systems. For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder that models a commonsense knowledge graph (CSKG). Despite the effectiveness, these approaches are built on heavy architectures, and can`t clearly explain how external knowledge resources improve the reasoning capacity of PTMs. Considering this issue, we conduct a deep empirical analysis, and find that it is indeed \textit{relation features} from CSKGs (but not \textit{node features}) that mainly contribute to the performance improvement of PTMs. Based on this finding, we design a simple MLP-based knowledge encoder that utilizes statistical relation paths as features. Extensive experiments conducted on five benchmarks demonstrate the effectiveness of our approach, which also largely reduces the parameters for encoding CSKGs. Our codes and data are publicly available at \url{https://github.com/RUCAIBox/SAFE}.
doi: 10.18653/v1/2022.findings-naacl.131
__index_level_0__: 26,388
(all other fields: null)

entry_type: inproceedings
citation_key: fang-etal-2022-analyzing
title: Analyzing the Intensity of Complaints on Social Media
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.132/
author: Fang, Ming and Zong, Shi and Li, Jing and Dai, Xinyu and Huang, Shujian and Chen, Jiajun
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1742--1754
abstract:
Complaining is a speech act that expresses a negative inconsistency between reality and human`s expectations. While prior studies mostly focus on identifying the existence or the type of complaints, in this work, we present the first study in computational linguistics of measuring the intensity of complaints from text. Analyzing complaints from such perspective is particularly useful, as complaints of certain degrees may cause severe consequences for companies or organizations. We first collect 3,103 posts about complaints in education domain from Weibo, a popular Chinese social media platform. These posts are then annotated with complaints intensity scores using Best-Worst Scaling (BWS) method. We show that complaints intensity can be accurately estimated by computational models with best mean square error achieving 0.11. Furthermore, we conduct a comprehensive linguistic analysis around complaints, including the connections between complaints and sentiment, and a cross-lingual comparison for complaints expressions used by Chinese and English speakers. We finally show that our complaints intensity scores can be incorporated for better estimating the popularity of posts on social media.
doi: 10.18653/v1/2022.findings-naacl.132
__index_level_0__: 26,389
(all other fields: null)

entry_type: inproceedings
citation_key: levi-etal-2022-detecting
title: Detecting Narrative Elements in Informational Text
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.133/
author: Levi, Effi and Mor, Guy and Sheafer, Tamir and Shenhav, Shaul
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1755--1765
abstract:
Automatic extraction of narrative elements from text, combining narrative theories with computational models, has been receiving increasing attention over the last few years. Previous works have utilized the oral narrative theory by Labov and Waletzky to identify various narrative elements in personal stories texts. Instead, we direct our focus to informational texts, specifically news stories. We introduce NEAT (Narrative Elements AnnoTation) {--} a novel NLP task for detecting narrative elements in raw text. For this purpose, we designed a new multi-label narrative annotation scheme, better suited for informational text (e.g. news media), by adapting elements from the narrative theory of Labov and Waletzky (Complication and Resolution) and adding a new narrative element of our own (Success). We then used this scheme to annotate a new dataset of 2,209 sentences, compiled from 46 news articles from various category domains. We trained a number of supervised models in several different setups over the annotated dataset to identify the different narrative elements, achieving an average $F_1$ score of up to 0.77. The results demonstrate the holistic nature of our annotation scheme as well as its robustness to domain category.
doi: 10.18653/v1/2022.findings-naacl.133
__index_level_0__: 26,390
(all other fields: null)

entry_type: inproceedings
citation_key: mao-etal-2022-contrastive
title: When do Contrastive Word Alignments Improve Many-to-many Neural Machine Translation?
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.134/
author: Mao, Zhuoyuan and Chu, Chenhui and Dabre, Raj and Song, Haiyue and Wan, Zhen and Kurohashi, Sadao
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1766--1775
abstract:
Word alignment has proven to benefit many-to-many neural machine translation (NMT). However, high-quality ground-truth bilingual dictionaries were used for pre-editing in previous methods, which are unavailable for most language pairs. Meanwhile, the contrastive objective can implicitly utilize automatically learned word alignment, which has not been explored in many-to-many NMT. This work proposes a word-level contrastive objective to leverage word alignments for many-to-many NMT. Empirical results show that this leads to 0.8 BLEU gains for several language pairs. Analyses reveal that in many-to-many NMT, the encoder`s sentence retrieval performance highly correlates with the translation quality, which explains when the proposed method impacts translation. This motivates future exploration for many-to-many NMT to improve the encoder`s sentence retrieval performance.
doi: 10.18653/v1/2022.findings-naacl.134
__index_level_0__: 26,391
(all other fields: null)

entry_type: inproceedings
citation_key: sun-etal-2022-minimally
title: Minimally-Supervised Relation Induction from Pre-trained Language Model
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.135/
author: Sun, Lu and Shen, Yongliang and Lu, Weiming
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1776--1786
abstract:
Relation Induction is a very practical task in the Natural Language Processing (NLP) area. In practical application scenarios, people want to induce more entity pairs having the same relation from only a few seed entity pairs. Thus, instead of the laborious supervised setting, in this paper, we focus on the minimally-supervised setting where only a couple of seed entity pairs per relation are provided. Although the conventional relation induction methods have made some success, their performance depends heavily on the quality of word embeddings. The great success of Pre-trained Language Models, such as BERT, changes the NLP area a lot, and they are proven to be able to better capture relation knowledge. In this paper, we propose a novel method to induce relation with BERT under the minimally-supervised setting. Specifically, we firstly extract proper templates from the corpus by using the mask-prediction task in BERT to build pseudo-sentences as the context of entity pairs. Then we use BERT attention weights to better represent the pseudo-sentences. In addition, we also use the IntegratedGradient of entity pairs to iteratively select better templates further. Finally, with the high-quality pseudo-sentences, we can train a better classifier for relation induction. Experiments on Google Analogy Test Sets (GATS), Bigger Analogy Test Set (BATS) and DiffVec demonstrate that our proposed method achieves state-of-the-art performance.
doi: 10.18653/v1/2022.findings-naacl.135
__index_level_0__: 26,392
(all other fields: null)

entry_type: inproceedings
citation_key: zhang-etal-2022-crake
title: Crake: Causal-Enhanced Table-Filler for Question Answering over Large Scale Knowledge Base
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.136/
author: Zhang, Minhao and Zhang, Ruoyu and Li, Yanzeng and Zou, Lei
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1787--1798
abstract:
Semantic parsing solves knowledge base (KB) question answering (KBQA) by composing a KB query, which generally involves node extraction (NE) and graph composition (GC) to detect and connect related nodes in a query. Despite the strong causal effects between NE and GC, previous works fail to directly model such causalities in their pipeline, hindering the learning of subtask correlations. Also, the sequence-generation process for GC in previous works induces ambiguity and exposure bias, which further harms accuracy. In this work, we formalize semantic parsing into two stages. In the first stage (graph structure generation), we propose a causal-enhanced table-filler to overcome the issues in sequence-modelling and to learn the internal causalities. In the second stage (relation extraction), an efficient beam-search algorithm is presented to scale complex queries on large-scale KBs. Experiments on LC-QuAD 1.0 indicate that our method surpasses previous state-of-the-arts by a large margin (17{\%}) while remaining time and space efficiency.
doi: 10.18653/v1/2022.findings-naacl.136
__index_level_0__: 26,393
(all other fields: null)

entry_type: inproceedings
citation_key: xu-etal-2022-exploring
title: Exploring the Universal Vulnerability of Prompt-based Learning Paradigm
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.137/
author: Xu, Lei and Chen, Yangyi and Cui, Ganqu and Gao, Hongcheng and Liu, Zhiyuan
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1799--1810
abstract:
Prompt-based learning paradigm bridges the gap between pre-training and fine-tuning, and works effectively under the few-shot setting. However, we find that this learning paradigm inherits the vulnerability from the pre-training stage, where model predictions can be misled by inserting certain triggers into the text. In this paper, we explore this universal vulnerability by either injecting backdoor triggers or searching for adversarial triggers on pre-trained language models using only plain text. In both scenarios, we demonstrate that our triggers can totally control or severely decrease the performance of prompt-based models fine-tuned on arbitrary downstream tasks, reflecting the universal vulnerability of the prompt-based learning paradigm. Further experiments show that adversarial triggers have good transferability among language models. We also find conventional fine-tuning models are not vulnerable to adversarial triggers constructed from pre-trained language models. We conclude by proposing a potential solution to mitigate our attack methods. Code and data are publicly available.
doi: 10.18653/v1/2022.findings-naacl.137
__index_level_0__: 26,394
(all other fields: null)

entry_type: inproceedings
citation_key: kim-etal-2022-exploiting
title: Exploiting Numerical-Contextual Knowledge to Improve Numerical Reasoning in Question Answering
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.138/
author: Kim, Jeonghwan and Kang, Junmo and Kim, Kyung-min and Hong, Giwon and Myaeng, Sung-Hyon
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1811--1821
abstract:
Numerical reasoning over text is a challenging subtask in question answering (QA) that requires both the understanding of texts and numbers. However, existing language models in these numerical reasoning QA models tend to overly rely on the pre-existing parametric knowledge at inference time, which commonly causes hallucination in interpreting numbers. Our work proposes a novel attention masked reasoning model, the NC-BERT, that learns to leverage the number-related contextual knowledge to alleviate the over-reliance on parametric knowledge and enhance the numerical reasoning capabilities of the QA model. The empirical results suggest that understanding of numbers in their context by reducing the parametric knowledge influence, and refining numerical information in the number embeddings lead to improved numerical reasoning accuracy and performance in DROP, a numerical QA dataset.
doi: 10.18653/v1/2022.findings-naacl.138
__index_level_0__: 26,395
(all other fields: null)

entry_type: inproceedings
citation_key: liu-etal-2022-learn
title: Learn from Relation Information: Towards Prototype Representation Rectification for Few-Shot Relation Extraction
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.139/
author: Liu, Yang and Hu, Jinpeng and Wan, Xiang and Chang, Tsung-Hui
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1822--1831
abstract:
Few-shot Relation Extraction refers to fast adaptation to novel relation classes with few samples through training on the known relation classes. Most existing methods focus on implicitly introducing relation information (i.e., relation label or relation description) to constrain the prototype representation learning, such as contrastive learning, graphs, and specifically designed attentions, which may bring useless and even harmful parameters. Besides, these approaches are limited in handing outlier samples far away from the class center due to the weakly implicit constraint. In this paper, we propose an effective and parameter-less Prototype Rectification Method (PRM) to promote few-shot relation extraction, where we utilize a prototype rectification module to rectify original prototypes explicitly by the relation information. Specifically, PRM is composed of two gate mechanisms. One gate decides how much of the original prototype remains, and another one updates the remained prototype with relation information. In doing so, better and stabler global relation information can be captured for guiding prototype representations, and thus PRM can robustly deal with outliers. Moreover, we also extend PRM to both none-of-the-above (NOTA) and domain adaptation scenarios. Experimental results on FewRel 1.0 and 2.0 datasets demonstrate the effectiveness of our proposed method, which achieves state-of-the-art performance.
doi: 10.18653/v1/2022.findings-naacl.139
__index_level_0__: 26,396
(all other fields: null)

entry_type: inproceedings
citation_key: yoo-etal-2022-hue
title: {HUE}: Pretrained Model and Dataset for Understanding Hanja Documents of {A}ncient {K}orea
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-naacl.140/
author: Yoo, Haneul and Jin, Jiho and Son, Juhee and Bak, JinYeong and Cho, Kyunghyun and Oh, Alice
booktitle: Findings of the Association for Computational Linguistics: NAACL 2022
pages: 1832--1844
abstract:
Historical records in Korea before the 20th century were primarily written in Hanja, an extinct language based on Chinese characters and not understood by modern Korean or Chinese speakers. Historians with expertise in this time period have been analyzing the documents, but that process is very difficult and time-consuming, and language models would significantly speed up the process. Toward building and evaluating language models for Hanja, we release the Hanja Understanding Evaluation dataset consisting of chronological attribution, topic classification, named entity recognition, and summary retrieval tasks. We also present BERT-based models continued training on the two major corpora from the 14th to the 19th centuries: the Annals of the Joseon Dynasty and Diaries of the Royal Secretariats. We compare the models with several baselines on all tasks and show there are significant improvements gained by training on the two corpora. Additionally, we run zero-shot experiments on the Daily Records of the Royal Court and Important Officials (DRRI). The DRRI dataset has not been studied much by the historians, and not at all by the NLP community.
null
null
10.18653/v1/2022.findings-naacl.140
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,397
inproceedings
xu-etal-2022-sead
{S}ea{D}: End-to-end Text-to-{SQL} Generation with Schema-aware Denoising
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.141/
Xu, Kuan and Wang, Yongbo and Wang, Yongliang and Wang, Zihao and Wen, Zujie and Dong, Yang
Findings of the Association for Computational Linguistics: NAACL 2022
1845--1853
On the WikiSQL benchmark, most methods tackle the challenge of text-to-SQL with predefined sketch slots and build sophisticated sub-tasks to fill these slots. Though achieving promising results, these methods suffer from over-complex model structures. In this paper, we present a simple yet effective approach that enables an auto-regressive sequence-to-sequence model to perform robust text-to-SQL generation. Instead of formulating the task of text-to-SQL as slot-filling, we propose to train the sequence-to-sequence model with Schema-aware Denoising (SeaD), which consists of two denoising objectives that train the model to either recover the input or predict the output from two novel erosion and shuffle noises. These model-agnostic denoising objectives act as auxiliary tasks for structural data modeling during sequence-to-sequence generation. In addition, we propose a clause-sensitive execution guided (EG) decoding strategy to overcome the limitation of EG decoding for generative models. The experiments show that the proposed method improves the performance of the sequence-to-sequence model in both schema linking and grammar correctness and establishes a new state of the art on the WikiSQL benchmark. Our work indicates that the capacity of sequence-to-sequence models for text-to-SQL may have been under-estimated and could be enhanced by specialized denoising tasks.
null
null
10.18653/v1/2022.findings-naacl.141
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,398
inproceedings
liu-etal-2022-cross-lingual
Cross-Lingual Cross-Modal Consolidation for Effective Multilingual Video Corpus Moment Retrieval
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.142/
Liu, Jiaheng and Yu, Tan and Peng, Hanyu and Sun, Mingming and Li, Ping
Findings of the Association for Computational Linguistics: NAACL 2022
1854--1862
Existing multilingual video corpus moment retrieval (mVCMR) methods are mainly based on a two-stream structure. The visual stream utilizes the visual content in the video to estimate the query-visual similarity, and the subtitle stream exploits the query-subtitle similarity. The final query-video similarity ensembles similarities from the two streams. In our work, we propose a simple and effective strategy termed Cross-lingual Cross-modal Consolidation (C3) to improve mVCMR accuracy. We adopt the ensemble similarity as the teacher to guide the training of each stream, leading to a more powerful ensemble similarity. Meanwhile, we use the teacher for a specific language to guide the student for another language to exploit the complementary knowledge across languages. Extensive experiments on the mTVR dataset demonstrate the effectiveness of our C3 method.
null
null
10.18653/v1/2022.findings-naacl.142
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,399
inproceedings
gu-etal-2022-delving
Delving Deep into Regularity: A Simple but Effective Method for {C}hinese Named Entity Recognition
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.143/
Gu, Yingjie and Qu, Xiaoye and Wang, Zhefeng and Zheng, Yi and Huai, Baoxing and Yuan, Nicholas Jing
Findings of the Association for Computational Linguistics: NAACL 2022
1863--1873
Recent years have witnessed the improving performance of Chinese Named Entity Recognition (NER) from proposing new frameworks or incorporating word lexicons. However, the inner composition of entity mentions in character-level Chinese NER has been rarely studied. Actually, most mentions of regular types have strong name regularity. For example, entities ending with indicator words such as {\textquotedblleft}公司 (company){\textquotedblright} or {\textquotedblleft}银行 (bank){\textquotedblright} usually belong to the organization type. In this paper, we propose a simple but effective method for investigating the regularity of entity spans in Chinese NER, dubbed the Regularity-Inspired reCOgnition Network (RICON). Specifically, the proposed model consists of two branches: a regularity-aware module and a regularity-agnostic module. The regularity-aware module captures the internal regularity of each span for better entity type prediction, while the regularity-agnostic module is employed to locate the boundary of entities and relieve the excessive attention to span regularity. An orthogonality space is further constructed to encourage the two modules to extract different aspects of regularity features. To verify the effectiveness of our method, we conduct extensive experiments on three benchmark datasets and a practical medical dataset. The experimental results show that our RICON significantly outperforms previous state-of-the-art methods, including various lexicon-based methods.
null
null
10.18653/v1/2022.findings-naacl.143
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,400
inproceedings
chakraborty-etal-2022-crush
{CRUSH}: Contextually Regularized and User anchored Self-supervised Hate speech Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.144/
Chakraborty, Souvic and Dutta, Parag and Roychowdhury, Sumegh and Mukherjee, Animesh
Findings of the Association for Computational Linguistics: NAACL 2022
1874--1886
The last decade has witnessed a surge in the interaction of people through social networking platforms. While there are several positive aspects of these social platforms, their proliferation has led them to become the breeding ground for cyber-bullying and hate speech. Recent advances in NLP have often been used to mitigate the spread of such hateful content. Since the task of hate speech detection is usually applicable in the context of social networks, we introduce CRUSH, a framework for hate speech detection using User Anchored self-supervision and contextual regularization. Our proposed approach secures {\textasciitilde}1-12{\%} improvement in test set metrics over best performing previous approaches on two types of tasks and multiple popular English language social networking datasets.
null
null
10.18653/v1/2022.findings-naacl.144
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,401
inproceedings
hong-etal-2022-metgen
{METGEN}: A Module-Based Entailment Tree Generation Framework for Answer Explanation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.145/
Hong, Ruixin and Zhang, Hongming and Yu, Xintong and Zhang, Changshui
Findings of the Association for Computational Linguistics: NAACL 2022
1887--1905
Knowing the reasoning chains from knowledge to the predicted answers can help construct an explainable question answering (QA) system. Advances on QA explanation propose to explain the answers with entailment trees composed of multiple entailment steps. While current work proposes to generate entailment trees with end-to-end generative models, the steps in the generated trees are not constrained and could be unreliable. In this paper, we propose METGEN, a Module-based Entailment Tree GENeration framework that has multiple modules and a reasoning controller. Given a question and several supporting knowledge, METGEN can iteratively generate the entailment tree by conducting single-step entailment with separate modules and selecting the reasoning flow with the controller. As each module is guided to perform a specific type of entailment reasoning, the steps generated by METGEN are more reliable and valid. Experiment results on the standard benchmark show that METGEN can outperform previous state-of-the-art models with only 9{\%} of the parameters.
null
null
10.18653/v1/2022.findings-naacl.145
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,402
inproceedings
wang-etal-2022-timestep
A Timestep aware Sentence Embedding and Acme Coverage for Brief but Informative Title Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.146/
Wang, Quanbin and Lin, XieXiong and Wang, Feng
Findings of the Association for Computational Linguistics: NAACL 2022
1906--1918
The title generation task that summarizes article content in recapitulatory words relies heavily on utilizing the corresponding key context. To generate a title with appropriate information from the content and avoid repetition, we propose a title generation framework with two complementary components in this paper. First, we propose a Timestep aware Sentence Embedding (TSE) mechanism, which updates the sentences' representations by re-locating the critical words in the corresponding sentence for each decoding step. Then, we present an Acme Coverage (AC) mechanism to solve the repetition problem and preserve the remaining valuable keywords after each decoding step according to the final vocabulary distribution. We conduct comprehensive experiments on various title generation tasks with different backbones; the ROUGE and METEOR scores significantly outperform most of the existing state-of-the-art approaches to varying degrees. The experimental results demonstrate the effectiveness and generality of our novel generation framework TSE-AC.
null
null
10.18653/v1/2022.findings-naacl.146
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,403
inproceedings
nguyen-etal-2022-make
Make The Most of Prior Data: A Solution for Interactive Text Summarization with Preference Feedback
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.147/
Nguyen, Duy-Hung and Nghiem, Nguyen Viet Dung and Nguyen, Bao-Sinh and Tien Le, Dung Tien and Sabahi, Shahab and Nguyen, Minh-Tien and Le, Hung
Findings of the Association for Computational Linguistics: NAACL 2022
1919--1930
For summarization, human preferences are critical for taming the outputs of the summarizer in favor of human interests, as ground-truth summaries are scarce and ambiguous. Practical settings require dynamic exchanges between humans and AI agents wherein feedback is provided in an online manner, a few at a time. In this paper, we introduce a new framework to train summarization models with preference feedback interactively. By properly leveraging offline data and a novel reward model, we improve the performance regarding ROUGE scores and sample-efficiency. Our experiments on three diverse datasets confirm the benefit of the proposed framework in active, few-shot and online settings of preference learning.
null
null
10.18653/v1/2022.findings-naacl.147
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,404
inproceedings
cao-etal-2022-xltime
{XLT}ime: A Cross-Lingual Knowledge Transfer Framework for Temporal Expression Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.148/
Cao, Yuwei and Groves, William and Saha, Tanay Kumar and Tetreault, Joel and Jaimes, Alejandro and Peng, Hao and Yu, Philip
Findings of the Association for Computational Linguistics: NAACL 2022
1931--1942
Temporal Expression Extraction (TEE) is essential for understanding time in natural language. It has applications in Natural Language Processing (NLP) tasks such as question answering, information retrieval, and causal inference. To date, work in this area has mostly focused on English as there is a scarcity of labeled data for other languages. We propose XLTime, a novel framework for multilingual TEE. XLTime works on top of pre-trained language models and leverages multi-task learning to prompt cross-language knowledge transfer both from English and within the non-English languages. XLTime alleviates problems caused by a shortage of data in the target language. We apply XLTime with different language models and show that it outperforms the previous automatic SOTA methods on French, Spanish, Portuguese, and Basque, by large margins. XLTime also closes the gap considerably on the handcrafted HeidelTime method.
null
null
10.18653/v1/2022.findings-naacl.148
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,405
inproceedings
lai-etal-2022-behancepr
{B}ehance{PR}: A Punctuation Restoration Dataset for Livestreaming Video Transcript
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.149/
Lai, Viet and Pouran Ben Veyseh, Amir and Dernoncourt, Franck and Nguyen, Thien
Findings of the Association for Computational Linguistics: NAACL 2022
1943--1951
Given the increasing number of livestreaming videos, automatic speech recognition and post-processing for livestreaming video transcripts are crucial for efficient data management as well as knowledge mining. A key step in this process is punctuation restoration which restores fundamental text structures such as phrase and sentence boundaries from the video transcripts. This work presents a new human-annotated corpus, called BehancePR, for punctuation restoration in livestreaming video transcripts. Our experiments on BehancePR demonstrate the challenges of punctuation restoration for this domain. Furthermore, we show that popular natural language processing toolkits like Stanford Stanza, Spacy, and Trankit underperform on detecting sentence boundary on non-punctuated transcripts of livestreaming videos. The dataset is publicly accessible at \url{http://github.com/nlp-uoregon/behancepr}.
null
null
10.18653/v1/2022.findings-naacl.149
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,406
inproceedings
guzman-nateras-etal-2022-event
Event Detection for Suicide Understanding
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.150/
Guzman-Nateras, Luis and Lai, Viet and Pouran Ben Veyseh, Amir and Dernoncourt, Franck and Nguyen, Thien
Findings of the Association for Computational Linguistics: NAACL 2022
1952--1961
Suicide is a serious problem in every society. Understanding life events of a potential patient is essential for successful suicide-risk assessment and prevention. In this work, we focus on the Event Detection (ED) task to identify event trigger words of suicide-related events in public posts of discussion forums. In particular, we introduce SuicideED: a new dataset for the ED task that features seven suicidal event types to comprehensively capture suicide actions and ideation, and general risk and protective factors. Our experiments with current state-of-the-art ED systems suggest that this domain poses meaningful challenges as there is significant room for improvement of ED models. We will release SuicideED to support future research in this important area.
null
null
10.18653/v1/2022.findings-naacl.150
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,407
inproceedings
mcdonald-etal-2022-great
Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.151/
McDonald, Joseph and Li, Baolin and Frey, Nathan and Tiwari, Devesh and Gadepally, Vijay and Samsi, Siddharth
Findings of the Association for Computational Linguistics: NAACL 2022
1962--1970
The energy requirements of current natural language processing models continue to grow at a rapid, unsustainable pace. Recent works highlighting this problem conclude there is an urgent need for methods that reduce the energy needs of NLP and machine learning more broadly. In this article, we investigate techniques that can be used to reduce the energy consumption of common NLP applications. In particular, we focus on techniques to measure energy usage and different hardware and datacenter-oriented settings that can be tuned to reduce energy consumption for training and inference for language models. We characterize the impact of these settings on metrics such as computational performance and energy consumption through experiments conducted on a high performance computing system as well as popular cloud computing platforms. These techniques can lead to significant reduction in energy consumption when training language models or their use for inference. For example, power-capping, which limits the maximum power a GPU can consume, can enable a 15{\%} decrease in energy usage with marginal increase in overall computation time when training a transformer-based language model.
null
null
10.18653/v1/2022.findings-naacl.151
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,408
inproceedings
sanchez-etal-2022-kinds
What kinds of errors do reference resolution models make and what can we learn from them?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.152/
S{\'a}nchez, Jorge and Mazuecos, Mauricio and Maina, Hern{\'a}n and Benotti, Luciana
Findings of the Association for Computational Linguistics: NAACL 2022
1971--1986
Referring resolution is the task of identifying the referent of a natural language expression, for example {\textquotedblleft}the woman behind the other woman getting a massage{\textquotedblright}. In this paper we investigate the kinds of referring expressions on which current transformer-based models fail. Motivated by this analysis, we identify the weakening of the natural spatial constraints as one of the causes and propose a model that aims to restore them. We evaluate our proposed model on different datasets for the task, showing improved performance on the most challenging kinds of referring expressions. Finally, we present a thorough analysis of the kinds of errors that are improved by the new model and those that are not and remain future challenges for the task.
null
null
10.18653/v1/2022.findings-naacl.152
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,409
inproceedings
lei-etal-2022-uncertainty
Uncertainty-Aware Cross-Lingual Transfer with Pseudo Partial Labels
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.153/
Lei, Shuo and Zhang, Xuchao and He, Jianfeng and Chen, Fanglan and Lu, Chang-Tien
Findings of the Association for Computational Linguistics: NAACL 2022
1987--1997
Large-scale multilingual pre-trained language models have achieved remarkable performance in zero-shot cross-lingual tasks. A recent study has demonstrated the effectiveness of a self-learning-based approach to cross-lingual transfer, where only unlabeled data of target languages are required, without any effort to annotate gold labels for target languages. However, it suffers from noisy training due to incorrectly pseudo-labeled samples. In this work, we propose an uncertainty-aware Cross-Lingual Transfer framework with Pseudo-Partial-Label (CLTP) to maximize the utilization of unlabeled data by reducing the noise introduced in the training phase. To estimate the pseudo partial label for each unlabeled sample, we propose a novel estimation method, considering both prediction confidence and the limitation to the number of similar labels. Extensive experiments are conducted on two cross-lingual tasks, including Named Entity Recognition (NER) and Natural Language Inference (NLI) across 40 languages, which show that our method can outperform the baselines on both high-resource and low-resource languages, such as 6.9 on Kazakh (kk) and 5.2 on Marathi (mr) for NER.
null
null
10.18653/v1/2022.findings-naacl.153
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,410
inproceedings
casanueva-etal-2022-nlu
{NLU}++: A Multi-Label, Slot-Rich, Generalisable Dataset for Natural Language Understanding in Task-Oriented Dialogue
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.154/
Casanueva, Inigo and Vuli{\'c}, Ivan and Spithourakis, Georgios and Budzianowski, Pawe{\l}
Findings of the Association for Computational Linguistics: NAACL 2022
1998--2013
We present NLU++, a novel dataset for natural language understanding (NLU) in task-oriented dialogue (ToD) systems, with the aim to provide a much more challenging evaluation environment for dialogue NLU models, up to date with the current application and industry requirements. NLU++ is divided into two domains (BANKING and HOTELS) and brings several crucial improvements over current commonly used NLU datasets. 1) NLU++ provides fine-grained domain ontologies with a large set of challenging multi-intent sentences combined with finer-grained and thus more challenging slot sets. 2) The ontology is divided into domain-specific and generic (i.e., domain-universal) intents that overlap across domains, promoting cross-domain reusability of annotated examples. 3) The dataset design has been inspired by the problems observed in industrial ToD systems, and 4) it has been collected, filtered and carefully annotated by dialogue NLU experts, yielding high-quality annotated data. Finally, we benchmark a series of current state-of-the-art NLU models on NLU++; the results demonstrate the challenging nature of the dataset, especially in low-data regimes, and call for further research on ToD NLU.
null
null
10.18653/v1/2022.findings-naacl.154
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,411
inproceedings
liu-etal-2022-challenges
Challenges in Generalization in Open Domain Question Answering
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.155/
Liu, Linqing and Lewis, Patrick and Riedel, Sebastian and Stenetorp, Pontus
Findings of the Association for Computational Linguistics: NAACL 2022
2014--2029
Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions. However, it is unclear which aspects of novel questions make them challenging. Drawing upon studies on systematic generalization, we introduce and annotate questions according to three categories that measure different levels and kinds of generalization: training set overlap, compositional generalization (comp-gen), and novel-entity generalization (novel-entity). When evaluating six popular parametric and non-parametric models, we find that for the established Natural Questions and TriviaQA datasets, even the strongest model performance for comp-gen/novel-entity is 13.1/5.4{\%} and 9.6/1.5{\%} lower compared to that for the full test set {--} indicating the challenge posed by these types of questions. Furthermore, we show that whilst non-parametric models can handle questions containing novel entities relatively well, they struggle with those requiring compositional generalization. Lastly, we find that key question difficulty factors are: cascading errors from the retrieval component, frequency of question pattern, and frequency of the entity.
null
null
10.18653/v1/2022.findings-naacl.155
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,412
inproceedings
jang-etal-2022-beyond
Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.156/
Jang, Myeongjun and Mtumbuka, Frank and Lukasiewicz, Thomas
Findings of the Association for Computational Linguistics: NAACL 2022
2030--2042
The logical negation property (LNP), which implies generating different predictions for semantically opposite inputs (p is true iff {\textlnot}p is false), is an important property that a trustworthy language model must satisfy. However, much recent evidence shows that large-size pre-trained language models (PLMs) do not satisfy this property. In this paper, we perform experiments using probing tasks to assess PLMs' LNP understanding. Unlike previous studies that only examined negation expressions, we expand the boundary of the investigation to lexical semantics. Through experiments, we observe that PLMs violate the LNP frequently. To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence, instead of relying on the distributional hypothesis. Through multiple experiments, we find that the task enables PLMs to learn lexical semantic information. Also, through fine-tuning experiments on 7 GLUE tasks, we confirm that it is a safe intermediate task that guarantees a similar or better performance of downstream tasks. Finally, we observe that our proposed approach outperforms our previous counterparts despite its time and resource efficiency.
null
null
10.18653/v1/2022.findings-naacl.156
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,413
inproceedings
xu-murray-2022-por
Por Qu{\'e} N{\~a}o Utiliser Alla Spr{\r{a}}k? Mixed Training with Gradient Optimization in Few-Shot Cross-Lingual Transfer
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.157/
Xu, Haoran and Murray, Kenton
Findings of the Association for Computational Linguistics: NAACL 2022
2043--2059
The current state-of-the-art for few-shot cross-lingual transfer learning first trains on abundant labeled data in the source language and then fine-tunes with a few examples on the target language, termed target-adapting. Though this has been demonstrated to work on a variety of tasks, in this paper we show some deficiencies of this approach and propose a one-step mixed training method that trains on both source and target data with stochastic gradient surgery, a novel gradient-level optimization. Unlike the previous studies that focus on one language at a time when target-adapting, we use one model to handle all target languages simultaneously to avoid excessively language-specific models. Moreover, we discuss the unreality of utilizing large target development sets for model selection in previous literature. We further show that our method is both development-free for target languages, and is also able to escape from overfitting issues. We conduct a large-scale experiment on 4 diverse NLP tasks across up to 48 languages. Our proposed method achieves state-of-the-art performance on all tasks and outperforms target-adapting by a large margin, especially for languages that are linguistically distant from the source language, e.g., 7.36{\%} F1 absolute gain on average for the NER task, up to 17.60{\%} on Punjabi.
null
null
10.18653/v1/2022.findings-naacl.157
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,414
inproceedings
shi-etal-2022-learning
Learning to Execute Actions or Ask Clarification Questions
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.158/
Shi, Zhengxiang and Feng, Yue and Lipani, Aldo
Findings of the Association for Computational Linguistics: NAACL 2022
2060--2070
Collaborative tasks are ubiquitous activities where a form of communication is required in order to reach a joint goal. Collaborative building is one such task. We wish to develop an intelligent builder agent in a simulated building environment (Minecraft) that can build whatever users wish to build by just talking to the agent. In order to achieve this goal, such agents need to be able to take the initiative by asking clarification questions when further information is needed. Existing works on the Minecraft Corpus Dataset only learn to execute instructions, neglecting the importance of asking for clarifications. In this paper, we extend the Minecraft Corpus Dataset by annotating all builder utterances into eight types, including clarification questions, and propose a new builder agent model capable of determining when to ask or execute instructions. Experimental results show that our model achieves state-of-the-art performance on the collaborative building task with a substantial improvement. We also define two new tasks, the learning to ask task and the joint learning task. The latter consists of solving both the collaborative building and learning to ask tasks jointly.
null
null
10.18653/v1/2022.findings-naacl.158
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,415
inproceedings
qian-etal-2022-capturing
Capturing Conversational Interaction for Question Answering via Global History Reasoning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.159/
Qian, Jin and Zou, Bowei and Dong, Mengxing and Li, Xiao and Aw, AiTi and Hong, Yu
Findings of the Association for Computational Linguistics: NAACL 2022
2071--2078
Conversational Question Answering (ConvQA) is required to answer the current question, conditioned on the observable paragraph-level context and conversation history. Previous works have intensively studied history-dependent reasoning. They perceive and absorb topic-related information of prior utterances in the interactive encoding stage. It yielded significant improvement compared to history-independent reasoning. This paper further strengthens the ConvQA encoder by establishing long-distance dependency among global utterances in multi-turn conversation. We use multi-layer transformers to resolve long-distance relationships, which potentially contribute to the reweighting of attentive information in historical utterances. Experiments on QuAC show that our method obtains a substantial improvement (1{\%}), yielding the F1 score of 73.7{\%}. All source codes are available at \url{https://github.com/jaytsien/GHR}.
null
null
10.18653/v1/2022.findings-naacl.159
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,416
inproceedings
yang-etal-2022-learning-structural
Learning Structural Information for Syntax-Controlled Paraphrase Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.160/
Yang, Erguang and Bai, Chenglin and Xiong, Deyi and Zhang, Yujie and Meng, Yao and Xu, Jinan and Chen, Yufeng
Findings of the Association for Computational Linguistics: NAACL 2022
2079--2090
Syntax-controlled paraphrase generation aims to produce paraphrases that conform to given syntactic patterns. To address this task, recent works have started to use parse trees (or syntactic templates) to guide generation. A constituency parse tree contains abundant structural information, such as parent-child relation, sibling relation, and the alignment relation between words and nodes. Previous works have only utilized parent-child and alignment relations, which may affect the generation quality. To address this limitation, we propose a Structural Information-augmented Syntax-Controlled Paraphrasing (SI-SCP) model. Particularly, we design a syntax encoder based on tree-transformer to capture parent-child and sibling relations. To model the alignment relation between words and nodes, we propose an attention regularization objective, which makes the decoder accurately select corresponding syntax nodes to guide the generation of words. Experiments show that SI-SCP achieves state-of-the-art performance in terms of semantic and syntactic quality on two popular benchmark datasets. Additionally, we propose a Syntactic Template Retriever (STR) to retrieve compatible syntactic structures. We validate that STR is capable of retrieving compatible syntactic structures. We further demonstrate the effectiveness of SI-SCP to generate diverse paraphrases with retrieved syntactic structures.
null
null
10.18653/v1/2022.findings-naacl.160
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,417
inproceedings
cripwell-etal-2022-controllable
Controllable Sentence Simplification via Operation Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.161/
Cripwell, Liam and Legrand, Jo{\"e}l and Gardent, Claire
Findings of the Association for Computational Linguistics: NAACL 2022
2091--2103
Different types of transformations have been used to model sentence simplification, ranging from mainly local operations such as phrasal or lexical rewriting, deletion and re-ordering, to more global ones affecting the whole input sentence, such as sentence rephrasing, copying and splitting. In this paper, we propose a novel approach to sentence simplification which encompasses four global operations: whether to rephrase or copy and whether to split based on syntactic or discourse structure. We create a novel dataset that can be used to train highly accurate classification systems for these four operations. We propose a controllable-simplification model that tailors simplifications to these operations and show that it outperforms both end-to-end, non-controllable approaches and previous controllable approaches.
null
null
10.18653/v1/2022.findings-naacl.161
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,418
inproceedings
xing-etal-2022-balancing
Balancing Multi-Domain Corpora Learning for Open-Domain Response Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.162/
Xing, Yujie and Cai, Jinglun and Barlaug, Nils and Liu, Peng and Gulla, Jon Atle
Findings of the Association for Computational Linguistics: NAACL 2022
2104--2120
Open-domain conversational systems are assumed to generate equally good responses on multiple domains. Previous work achieved good performance on the single corpus, but training and evaluating on multiple corpora from different domains are less studied. This paper explores methods of generating relevant responses for each of multiple multi-domain corpora. We first examine interleaved learning which intermingles multiple corpora as the baseline. We then investigate two multi-domain learning methods, labeled learning and multi-task labeled learning, which encode each corpus through a unique corpus embedding. Furthermore, we propose Domain-specific Frequency (DF), a novel word-level importance weight that measures the relative importance of a word for a specific corpus compared to other corpora. Based on DF, we propose weighted learning, a method that integrates DF to the loss function. We also adopt DF as a new evaluation metric. Extensive experiments show that our methods gain significant improvements on both automatic and human evaluation. We share our code and data for reproducibility.
null
null
10.18653/v1/2022.findings-naacl.162
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,419
inproceedings
sheng-etal-2022-semantic
Semantic-Preserving Abstractive Text Summarization with {S}iamese Generative Adversarial Net
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.163/
Sheng, Xin and Xu, Linli and Xu, Yinlong and Jiang, Deqiang and Ren, Bo
Findings of the Association for Computational Linguistics: NAACL 2022
2121--2132
We propose a novel siamese generative adversarial net for abstractive text summarization (SSPGAN), which can preserve the main semantics of the source text. Different from previous generative adversarial net based methods, SSPGAN is equipped with a siamese semantic-preserving discriminator, which can not only be trained to discriminate the machine-generated summaries from the human-summarized ones, but also ensure the semantic consistency between the source text and target summary. As a consequence of the min-max game between the generator and the siamese semantic-preserving discriminator, the generator can generate a summary that conveys the key content of the source text more accurately. Extensive experiments on several text summarization benchmarks in different languages demonstrate that the proposed model can achieve significant improvements over the state-of-the-art methods.
null
null
10.18653/v1/2022.findings-naacl.163
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,420
inproceedings
zhu-hudelot-2022-towards
Towards Job-Transition-Tag Graph for a Better Job Title Representation Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.164/
Zhu, Jun and Hudelot, Celine
Findings of the Association for Computational Linguistics: NAACL 2022
2133--2140
Works on learning job title representation are mainly based on \textit{Job-Transition Graph}, built from the working history of talents. However, since these records are usually messy, this graph is very sparse, which affects the quality of the learned representation and hinders further analysis. To address this specific issue, we propose to enrich the graph with additional nodes that improve the quality of job title representation. Specifically, we construct \textit{Job-Transition-Tag Graph}, a heterogeneous graph containing two types of nodes, i.e., job titles and tags (i.e., words related to job responsibilities or functionalities). Along this line, we reformulate job title representation learning as the task of learning node embedding on the \textit{Job-Transition-Tag Graph}. Experiments on two datasets show the interest of our approach.
null
null
10.18653/v1/2022.findings-naacl.164
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,421
inproceedings
limkonchotiwat-etal-2022-cl
{CL}-{R}e{LKT}: Cross-lingual Language Knowledge Transfer for Multilingual Retrieval Question Answering
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.165/
Limkonchotiwat, Peerat and Ponwitayarat, Wuttikorn and Udomcharoenchaikit, Can and Chuangsuwanich, Ekapol and Nutanong, Sarana
Findings of the Association for Computational Linguistics: NAACL 2022
2141--2155
Cross-Lingual Retrieval Question Answering (CL-ReQA) is concerned with retrieving answer documents or passages to a question written in a different language. A common approach to CL-ReQA is to create a multilingual sentence embedding space such that question-answer pairs across different languages are close to each other. In this paper, we propose a novel CL-ReQA method utilizing the concept of language knowledge transfer and a new cross-lingual consistency training technique to create a multilingual embedding space for ReQA. To assess the effectiveness of our work, we conducted comprehensive experiments on CL-ReQA and a downstream task, machine reading QA. We compared our proposed method with the current state-of-the-art solutions across three public CL-ReQA corpora. Our method outperforms competitors in 19 out of 21 settings of CL-ReQA. When used with a downstream machine reading QA task, our method outperforms the best existing language-model-based method by 10{\%} in F1 while being 10 times faster in sentence embedding computation. The code and models are available at \url{https://github.com/mrpeerat/CL-ReLKT}.
null
null
10.18653/v1/2022.findings-naacl.165
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,422
inproceedings
sun-etal-2022-bort
{BORT}: Back and Denoising Reconstruction for End-to-End Task-Oriented Dialog
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.166/
Sun, Haipeng and Bao, Junwei and Wu, Youzheng and He, Xiaodong
Findings of the Association for Computational Linguistics: NAACL 2022
2156--2170
A typical end-to-end task-oriented dialog system transfers context into dialog state, and upon which generates a response, which usually faces the problem of error propagation from both previously generated inaccurate dialog states and responses, especially in low-resource scenarios. To alleviate these issues, we propose BORT, a back and denoising reconstruction approach for end-to-end task-oriented dialog system. Squarely, to improve the accuracy of dialog states, back reconstruction is used to reconstruct the original input context from the generated dialog states since inaccurate dialog states cannot recover the corresponding input context. To enhance the denoising capability of the model to reduce the impact of error propagation, denoising reconstruction is used to reconstruct the corrupted dialog state and response. Extensive experiments conducted on MultiWOZ 2.0 and CamRest676 show the effectiveness of BORT. Furthermore, BORT demonstrates its advanced capabilities in the zero-shot domain and low-resource scenarios.
null
null
10.18653/v1/2022.findings-naacl.166
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,423
inproceedings
ding-etal-2022-multi
Multi-stage Distillation Framework for Cross-Lingual Semantic Similarity Matching
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.167/
Ding, Kunbo and Liu, Weijie and Fang, Yuejian and Zhao, Zhe and Ju, Qi and Yang, Xuefeng and Tian, Rong and Tao, Zhu and Liu, Haoyan and Guo, Han and Bai, Xingyu and Mao, Weiquan and Li, Yudong and Guo, Weigang and Wu, Taiqiang and Sun, Ningyuan
Findings of the Association for Computational Linguistics: NAACL 2022
2171--2181
Previous studies have proved that cross-lingual knowledge distillation can significantly improve the performance of pre-trained models for cross-lingual similarity matching tasks. However, the student model needs to be large in this operation. Otherwise, its performance will drop sharply, thus making it impractical to be deployed to memory-limited devices. To address this issue, we delve into cross-lingual knowledge distillation and propose a multi-stage distillation framework for constructing a small-size but high-performance cross-lingual model. In our framework, contrastive learning, bottleneck, and parameter recurrent strategies are delicately combined to prevent performance from being compromised during the compression process. The experimental results demonstrate that our method can compress the size of XLM-R and MiniLM by more than 50{\%}, while the performance is only reduced by about 1{\%}.
null
null
10.18653/v1/2022.findings-naacl.167
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,424
inproceedings
schwartz-stanovsky-2022-limitations
On the Limitations of Dataset Balancing: The Lost Battle Against Spurious Correlations
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.168/
Schwartz, Roy and Stanovsky, Gabriel
Findings of the Association for Computational Linguistics: NAACL 2022
2182--2194
Recent work has shown that deep learning models in NLP are highly sensitive to low-level correlations between simple features and specific output labels, leading to over-fitting and lack of generalization. To mitigate this problem, a common practice is to balance datasets by adding new instances or by filtering out {\textquotedblleft}easy{\textquotedblright} instances (Sakaguchi et al., 2020), culminating in a recent proposal to eliminate single-word correlations altogether (Gardner et al., 2021). In this opinion paper, we identify that despite these efforts, increasingly-powerful models keep exploiting ever-smaller spurious correlations, and as a result even balancing all single-word features is insufficient for mitigating all of these correlations. In parallel, a truly balanced dataset may be bound to {\textquotedblleft}throw the baby out with the bathwater{\textquotedblright} and miss important signal encoding common sense and world knowledge. We highlight several alternatives to dataset balancing, focusing on enhancing datasets with richer contexts, allowing models to abstain and interact with users, and turning from large-scale fine-tuning to zero- or few-shot setups.
null
null
10.18653/v1/2022.findings-naacl.168
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,425
inproceedings
ren-zhu-2022-specializing
Specializing Pre-trained Language Models for Better Relational Reasoning via Network Pruning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.169/
Ren, Siyu and Zhu, Kenny
Findings of the Association for Computational Linguistics: NAACL 2022
2195--2207
Pretrained masked language models (PLMs) have been shown to inherit a considerable amount of relational knowledge from the source corpora. In this paper, we present an in-depth and comprehensive study concerning specializing PLMs into relational models from the perspective of network pruning. We show that it is possible to find subnetworks capable of representing grounded commonsense relations at non-trivial sparsity while being more generalizable than original PLMs in scenarios requiring knowledge of single or multiple commonsense relations.
null
null
10.18653/v1/2022.findings-naacl.169
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,426
inproceedings
wang-etal-2022-d2gclf
{D}2{GCLF}: Document-to-Graph Classifier for Legal Document Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.170/
Wang, Qiqi and Zhao, Kaiqi and Amor, Robert and Liu, Benjamin and Wang, Ruofan
Findings of the Association for Computational Linguistics: NAACL 2022
2208--2221
Legal document classification is an essential task in law intelligence to automate the labor-intensive law case filing process. Unlike traditional document classification problems, legal documents should be classified by reasons and facts instead of topics. We propose a Document-to-Graph Classifier (D2GCLF), which extracts facts as relations between key participants in the law case and represents a legal document with four relation graphs. Each graph is responsible for capturing different relations between the litigation participants. We further develop a graph attention network on top of the four relation graphs to classify the legal documents. Experiments on a real-world legal document dataset show that D2GCLF outperforms the state-of-the-art methods in terms of accuracy.
null
null
10.18653/v1/2022.findings-naacl.170
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,427
inproceedings
hu-etal-2022-label
A Label-Aware Autoregressive Framework for Cross-Domain {NER}
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.171/
Hu, Jinpeng and Zhao, He and Guo, Dan and Wan, Xiang and Chang, Tsung-Hui
Findings of the Association for Computational Linguistics: NAACL 2022
2222--2232
Cross-domain named entity recognition (NER) aims to borrow the entity information from the source domain to help the entity recognition in the target domain with limited labeled data. Despite the promising performance of existing approaches, most of them focus on reducing the discrepancy of token representation between source and target domains, while the transfer of the valuable label information is often not explicitly considered or even ignored. Therefore, we propose a novel autoregressive framework to advance cross-domain NER by first enhancing the relationship between labels and tokens and then further improving the transferability of label information. Specifically, we associate each label with an embedding vector, and for each token, we utilize a bidirectional LSTM (Bi-LSTM) to encode the labels of its previous tokens for modeling internal context information and label dependence. Afterward, we propose a Bi-Attention module that merges the token representation from a pre-trained model and the label features from the Bi-LSTM as the label-aware information, which is concatenated to the token representation to facilitate cross-domain NER. In doing so, label information contained in the embedding vectors can be effectively transferred to the target domain, and the Bi-LSTM can further model the label relationship among different domains via a pre-train then fine-tune setting. Experimental results on several datasets confirm the effectiveness of our model, which achieves significant improvements over the state of the art.
null
null
10.18653/v1/2022.findings-naacl.171
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,428
inproceedings
seo-etal-2022-dog
A Dog Is Passing Over The Jet? A Text-Generation Dataset for {K}orean Commonsense Reasoning and Evaluation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.172/
Seo, Jaehyung and Lee, Seounghoon and Park, Chanjun and Jang, Yoonna and Moon, Hyeonseok and Eo, Sugyeong and Koo, Seonmin and Lim, Heuiseok
Findings of the Association for Computational Linguistics: NAACL 2022
2233--2249
Recent natural language understanding (NLU) research on the Korean language has been vigorously maturing with the advancements of pretrained language models and datasets. However, Korean pretrained language models still struggle to generate a short sentence with a given condition based on compositionality and commonsense reasoning (i.e., generative commonsense reasoning). The two major challenges are inadequate data resources to develop generative commonsense reasoning regarding Korean linguistic features and to evaluate language models which are necessary for natural language generation (NLG). To solve these problems, we propose a text-generation dataset for Korean generative commonsense reasoning and language model evaluation. In this work, a semi-automatic dataset construction approach filters out contents inexplicable to commonsense, ascertains quality, and reduces the cost of building the dataset. We also present an in-depth analysis of the generation results of language models with various evaluation metrics along with human-annotated scores. The whole dataset is publicly available at (\url{https://aihub.or.kr/opendata/korea-university}).
null
null
10.18653/v1/2022.findings-naacl.172
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,429
inproceedings
zhou-feng-2022-improve
Improve Discourse Dependency Parsing with Contextualized Representations
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.173/
Zhou, Yifei and Feng, Yansong
Findings of the Association for Computational Linguistics: NAACL 2022
2250--2261
Previous works show that discourse analysis benefits from modeling intra- and inter-sentential levels separately, where proper representations for text units of different granularities are desired to capture both the information of the text units and their relation to the context. In this paper, we propose to take advantage of transformers to encode different contextualized representations of units of different levels to dynamically capture the information required for discourse dependency analysis on intra- and inter-sentential levels. Motivated by the observation of writing patterns shared across articles to improve discourse analysis, we propose to design sequence labeling methods to take advantage of such structural information from the context that substantially outperforms traditional direct classification methods. Experiments show that our model achieves state-of-the-art results on both English and Chinese datasets.
null
null
10.18653/v1/2022.findings-naacl.173
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,430
inproceedings
wang-etal-2022-list
{L}i{ST}: Lite Prompted Self-training Makes Parameter-efficient Few-shot Learners
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.174/
Wang, Yaqing and Mukherjee, Subhabrata and Liu, Xiaodong and Gao, Jing and Awadallah, Ahmed and Gao, Jianfeng
Findings of the Association for Computational Linguistics: NAACL 2022
2262--2281
We present a new method LiST for efficient fine-tuning of large pre-trained language models (PLMs) in few-shot learning settings. LiST improves over recent methods that adopt prompt-based fine-tuning (FN) using two key techniques. The first is the use of self-training to leverage large amounts of unlabeled data for prompt-based FN in few-shot settings. We use self-training in conjunction with meta-learning for re-weighting noisy pseudo-prompt labels. Traditionally, self-training is expensive as it requires updating all the model parameters repetitively. Therefore, we use a second technique for light-weight fine-tuning where we introduce a small number of task-specific parameters that are fine-tuned during self-training while keeping the PLM encoder frozen. Our experiments show that LiST can effectively leverage unlabeled data to improve the model performance for few-shot learning. Additionally, the finetuning process is efficient as it only updates a small percentage of the parameters and the overall model footprint is reduced since several tasks can share a common PLM encoder as backbone. We present a comprehensive study on six NLU tasks to validate the effectiveness of LiST. The results show that LiST improves by 35{\%} over classic fine-tuning methods and 6{\%} over prompt-based FN with 96{\%} reduction in number of trainable parameters when fine-tuned with no more than 30 labeled examples from each task. With only 14M tunable parameters, LiST outperforms GPT-3 in-context learning by 33{\%} on few-shot NLU tasks
null
null
10.18653/v1/2022.findings-naacl.174
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,431
inproceedings
li-etal-2022-clmlf
{CLMLF}: A Contrastive Learning and Multi-Layer Fusion Method for Multimodal Sentiment Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.175/
Li, Zhen and Xu, Bing and Zhu, Conghui and Zhao, Tiejun
Findings of the Association for Computational Linguistics: NAACL 2022
2282--2294
Compared with unimodal data, multimodal data can provide more features to help the model analyze the sentiment of data. Previous research works rarely consider token-level feature fusion, and few works explore learning the common features related to sentiment in multimodal data to help the model fuse multimodal features. In this paper, we propose a Contrastive Learning and Multi-Layer Fusion (CLMLF) method for multimodal sentiment detection. Specifically, we first encode text and image to obtain hidden representations, and then use a multi-layer fusion module to align and fuse the token-level features of text and image. In addition to the sentiment analysis task, we also design two contrastive learning tasks, label-based contrastive learning and data-based contrastive learning, which help the model learn common features related to sentiment in multimodal data. Extensive experiments conducted on three publicly available multimodal datasets demonstrate the effectiveness of our approach for multimodal sentiment detection compared with existing methods. The code is available at https://github.com/Link-Li/CLMLF
null
null
10.18653/v1/2022.findings-naacl.175
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,432
inproceedings
zeng-etal-2022-weakly
Weakly Supervised Text Classification using Supervision Signals from a Language Model
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.176/
Zeng, Ziqian and Ni, Weimin and Fang, Tianqing and Li, Xiang and Zhao, Xinran and Song, Yangqiu
Findings of the Association for Computational Linguistics: NAACL 2022
2295--2305
Solving text classification in a weakly supervised manner is important for real-world applications where human annotations are scarce. In this paper, we propose to query a masked language model with cloze style prompts to obtain supervision signals. We design a prompt which combines the document itself and {\textquotedblleft}this article is talking about [MASK].{\textquotedblright} A masked language model can generate words for the [MASK] token. The generated words which summarize the content of a document can be utilized as supervision signals. We propose a latent variable model to learn a word distribution learner which associates generated words to pre-defined categories and a document classifier simultaneously without using any annotated data. Evaluation on three datasets, AGNews, 20Newsgroups, and UCINews, shows that our method can outperform baselines by 2{\%}, 4{\%}, and 3{\%}.
null
null
10.18653/v1/2022.findings-naacl.176
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,433
inproceedings
zhong-etal-2022-analytical
Analytical Reasoning of Text
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.177/
Zhong, Wanjun and Wang, Siyuan and Tang, Duyu and Xu, Zenan and Guo, Daya and Chen, Yining and Wang, Jiahai and Yin, Jian and Zhou, Ming and Duan, Nan
Findings of the Association for Computational Linguistics: NAACL 2022
2306--2319
Analytical reasoning is an essential and challenging task that requires a system to analyze a scenario involving a set of particular circumstances and perform reasoning over it to make conclusions. However, current neural models with implicit reasoning ability struggle to solve this task. In this paper, we study the challenge of analytical reasoning of text and collect a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016. We analyze what knowledge understanding and reasoning abilities are required to do well on this task, and present an approach dubbed ARM. It extracts knowledge such as participants and facts from the context. Such knowledge is applied to an inference engine to deduce legitimate solutions for drawing conclusions. In our experiments, we find that ubiquitous pre-trained models struggle to deal with this task as their performance is close to random guessing. Results show that ARM outperforms pre-trained models significantly. Moreover, we demonstrate that ARM has better explicit interpretable reasoning ability.
null
null
10.18653/v1/2022.findings-naacl.177
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,434
inproceedings
hu-etal-2022-denoising
Denoising Neural Network for News Recommendation with Positive and Negative Implicit Feedback
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.178/
Hu, Yunfan and Qiu, Zhaopeng and Wu, Xian
Findings of the Association for Computational Linguistics: NAACL 2022
2320--2329
News recommendation is different from movie or e-commerce recommendation as people usually do not grade the news. Therefore, user feedback for news is always implicit (click behavior, reading time, etc.). Inevitably, there are noises in implicit feedback. On one hand, the user may exit immediately after clicking the news as he dislikes the news content, leaving the noise in his positive implicit feedback; on the other hand, the user may be recommended multiple interesting news items at the same time and only click one of them, producing the noise in his negative implicit feedback. Opposite implicit feedback could construct more integrated user preferences and help each other to minimize the noise influence. Previous works on news recommendation only used positive implicit feedback and suffered from the noise impact. In this paper, we propose a denoising neural network for news recommendation with positive and negative implicit feedback, named DRPN. DRPN utilizes both kinds of feedback for recommendation, with a module that denoises both positive and negative implicit feedback to further enhance the performance. Experiments on the real-world large-scale dataset demonstrate the state-of-the-art performance of DRPN.
null
null
10.18653/v1/2022.findings-naacl.178
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,435
inproceedings
wu-etal-2022-continual
Continual Machine Reading Comprehension via Uncertainty-aware Fixed Memory and Adversarial Domain Adaptation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.179/
Wu, Zhijing and Xu, Hua and Fang, Jingliang and Gao, Kai
Findings of the Association for Computational Linguistics: NAACL 2022
2330--2339
Continual Machine Reading Comprehension aims to incrementally learn from a continuous data stream across time without access to the previously seen data, which is crucial for the development of real-world MRC systems. However, it is a great challenge to learn a new domain incrementally without catastrophically forgetting previous knowledge. In this paper, MA-MRC, a continual MRC model with uncertainty-aware fixed Memory and Adversarial domain adaptation, is proposed. In MA-MRC, a fixed-size memory stores a small number of samples from previous domain data along with an uncertainty-aware updating strategy when new domain data arrives. For incremental learning, MA-MRC not only keeps a stable understanding by learning both memory and new domain data, but also makes full use of the domain adaptation relationship between them by an adversarial learning strategy. The experimental results show that MA-MRC is superior to strong baselines and has a substantial incremental learning ability without catastrophic forgetting under two different continual MRC settings.
null
null
10.18653/v1/2022.findings-naacl.179
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,436
inproceedings
xu-zhao-2022-jointly
Jointly Learning Guidance Induction and Faithful Summary Generation via Conditional Variational Autoencoders
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.180/
Xu, Wang and Zhao, Tiejun
Findings of the Association for Computational Linguistics: NAACL 2022
2340--2350
Abstractive summarization can generate high quality results with the development of the neural network. However, generating factually consistent summaries is a challenging task for abstractive summarization. Recent studies extract additional information with off-the-shelf tools from the source document as a clue to guide the summary generation, which shows effectiveness in improving faithfulness. Unlike these works, we present a novel framework based on conditional variational autoencoders, which induces the guidance information and generates the summary equipped with the guidance synchronously. Experiments on the XSUM and CNNDM datasets show that our approach can generate relevant and fluent summaries that are more faithful than those of existing state-of-the-art approaches, according to multiple factual consistency metrics.
null
null
10.18653/v1/2022.findings-naacl.180
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,437
inproceedings
snell-etal-2022-context
Context-Aware Language Modeling for Goal-Oriented Dialogue Systems
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.181/
Snell, Charlie and Yang, Sherry and Fu, Justin and Su, Yi and Levine, Sergey
Findings of the Association for Computational Linguistics: NAACL 2022
2351--2366
Goal-oriented dialogue systems face a trade-off between fluent language generation and task-specific control. While supervised learning with large language models is capable of producing realistic text, how to steer such responses towards completing a specific task without sacrificing language quality remains an open question. In this work, we formulate goal-oriented dialogue as a partially observed Markov decision process, interpreting the language model as a representation of both the dynamics and the policy. This view allows us to extend techniques from learning-based control, such as task relabeling, to derive a simple and effective method to finetune language models in a goal-aware way, leading to significantly improved task performance. We additionally introduce a number of training strategies that serve to better focus the model on the task at hand. We evaluate our method, Context-Aware Language Models (CALM), on a practical flight-booking task using AirDialogue. Empirically, CALM outperforms the state-of-the-art method by 7{\%} in terms of task success, matching human-level task performance.
null
null
10.18653/v1/2022.findings-naacl.181
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,438
inproceedings
shuster-etal-2022-state
Am {I} Me or You? State-of-the-Art Dialogue Models Cannot Maintain an Identity
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.182/
Shuster, Kurt and Urbanek, Jack and Szlam, Arthur and Weston, Jason
Findings of the Association for Computational Linguistics: NAACL 2022
2367--2387
State-of-the-art dialogue models still often stumble with regards to factual accuracy and self-contradiction. Anecdotally, they have been observed to fail to maintain character identity throughout discourse; and more specifically, may take on the role of their interlocutor. In this work we formalize and quantify this deficiency, and show experimentally through human evaluations that this is indeed a problem. In contrast, we show that discriminative models trained specifically to recognize who is speaking can perform well; and further, these can be used as automated metrics. Finally, we evaluate a wide variety of mitigation methods, including changes to model architecture, training protocol, and decoding strategy. Our best models reduce mistaken identity issues by nearly 65{\%} according to human annotators, while simultaneously improving engagingness. Despite these results, we find that maintaining character identity still remains a challenging problem.
null
null
10.18653/v1/2022.findings-naacl.182
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,439
inproceedings
zhu-hauff-2022-unsupervised
Unsupervised Domain Adaptation for Question Generation with {D}omain {D}ata Selection and Self-training
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.183/
Zhu, Peide and Hauff, Claudia
Findings of the Association for Computational Linguistics: NAACL 2022
2388--2401
Question generation (QG) approaches based on large neural models require (i) large-scale and (ii) high-quality training data. These two requirements pose difficulties for specific application domains where training data is expensive and difficult to obtain. The trained QG models' effectiveness can degrade significantly when they are applied on a different domain due to domain shift. In this paper, we explore an \textit{unsupervised domain adaptation} approach to combat the lack of training data and domain shift issue with domain data selection and self-training. We first present a novel answer-aware strategy for domain data selection to select data with the most similarity to a new domain. The selected data are then used as pseudo-in-domain data to retrain the QG model. We then present generation confidence guided self-training with two generation confidence modeling methods (i) generated questions' perplexity and (ii) the fluency score. We test our approaches on three large public datasets with different domain similarities, using a transformer-based pre-trained QG model. The results show that our proposed approaches outperform the baselines, and show the viability of unsupervised domain adaptation with answer-aware data selection and self-training on the QG task.
null
null
10.18653/v1/2022.findings-naacl.183
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,440
inproceedings
huber-etal-2022-ccqa
{CCQA}: A New Web-Scale Question Answering Dataset for Model Pre-Training
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.184/
Huber, Patrick and Aghajanyan, Armen and Oguz, Barlas and Okhonko, Dmytro and Yih, Scott and Gupta, Sonal and Chen, Xilun
Findings of the Association for Computational Linguistics: NAACL 2022
2402--2420
We propose a novel open-domain question-answering dataset based on the Common Crawl project. With a previously unseen number of around 130 million multilingual question-answer pairs (including about 60 million English data-points), we use our large-scale, natural, diverse and high-quality corpus to in-domain pre-train popular language models for the task of question-answering. In our experiments, we find that our Common Crawl Question Answering dataset (CCQA) achieves promising results in zero-shot, low resource and fine-tuned settings across multiple tasks, models and benchmarks.
null
null
10.18653/v1/2022.findings-naacl.184
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,441
inproceedings
ippolito-etal-2022-case
The Case for a Single Model that can Both Generate Continuations and Fill-in-the-Blank
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.185/
Ippolito, Daphne and Dugan, Liam and Reif, Emily and Yuan, Ann and Coenen, Andy and Callison-Burch, Chris
Findings of the Association for Computational Linguistics: NAACL 2022
2421--2432
The task of inserting text into a specified position in a passage, known as fill in the blank (FitB), is useful for a variety of applications where writers interact with a natural language generation (NLG) system to craft text. While previous work has tackled this problem with models trained specifically to do fill in the blank, a more useful model is one that can effectively perform {\_}both{\_} FitB and continuation tasks. In this work, we evaluate the feasibility of using a single model to do both tasks. We show that models pre-trained with a FitB-style objective are capable of both tasks, while models pre-trained for continuation are not. Finally, we show how these models can be easily finetuned to allow for fine-grained control over the length and word choice of the generation.
null
null
10.18653/v1/2022.findings-naacl.185
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,442
inproceedings
wang-etal-2022-learning-discriminative
Learning Discriminative Representations for Open Relation Extraction with Instance Ranking and Label Calibration
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.186/
Wang, Shusen and Duan, Bin and Wu, Yanan and Xu, Yajing
Findings of the Association for Computational Linguistics: NAACL 2022
2433--2438
Open relation extraction is the task of extracting relational facts without pre-defined relation types from open-domain corpora. However, since there are some hard or semi-hard instances sharing similar context and entity information but belonging to different underlying relations, current OpenRE methods always cluster them into the same relation type. In this paper, we propose a novel method based on Instance Ranking and Label Calibration strategies (IRLC) to learn discriminative representations for open relation extraction. Since original instance labels are lacking, we provide three surrogate strategies to generate the positive, hard negative, and semi-hard negative instances for the original instance. Instance ranking aims to refine the relational feature space by pushing the hard and semi-hard negative instances apart from the original instance with different margins and pulling the original instance and its positive instance together. To refine the cluster probability distributions of these instances, we introduce a label calibration strategy to model the constraint relationship between instances. Experimental results on two public datasets demonstrate that our proposed method can significantly outperform the previous state-of-the-art methods.
null
null
10.18653/v1/2022.findings-naacl.186
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,443
inproceedings
sainz-etal-2022-textual
Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.187/
Sainz, Oscar and Gonzalez-Dios, Itziar and Lopez de Lacalle, Oier and Min, Bonan and Agirre, Eneko
Findings of the Association for Computational Linguistics: NAACL 2022
2439--2455
Recent work has shown that NLP tasks such as Relation Extraction (RE) can be recast as Textual Entailment tasks using verbalizations, with strong performance in zero-shot and few-shot settings thanks to pre-trained entailment models. The fact that relations in current RE datasets are easily verbalized casts doubt on whether entailment would be effective in more complex tasks. In this work we show that entailment is also effective in Event Argument Extraction (EAE), reducing the need for manual annotation to 50{\%} and 20{\%} in ACE and WikiEvents, respectively, while achieving the same performance as with full training. More importantly, we show that recasting EAE as entailment alleviates the dependency on schemas, which has been a roadblock for transferring annotations between domains. Thanks to entailment, the multi-source transfer between ACE and WikiEvents further reduces annotation down to 10{\%} and 5{\%} (respectively) of the full training without transfer. Our analysis shows that key to good results is the use of several entailment datasets to pre-train the entailment model. Similar to previous approaches, our method requires a small amount of effort for manual verbalization: less than 15 minutes per event argument type is needed; comparable results can be achieved by users of different levels of expertise.
null
null
10.18653/v1/2022.findings-naacl.187
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,444
inproceedings
wang-etal-2022-rcl
{RCL}: Relation Contrastive Learning for Zero-Shot Relation Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.188/
Wang, Shusen and Zhang, Bosen and Xu, Yajing and Wu, Yanan and Xiao, Bo
Findings of the Association for Computational Linguistics: NAACL 2022
2456--2468
Zero-shot relation extraction aims to identify novel relations which cannot be observed at the training stage. However, it still faces some challenges: when the unseen relations of instances are similar or the input sentences have similar entities, the unseen relation representations from different categories tend to overlap and lead to errors. In this paper, we propose a novel Relation Contrastive Learning framework (RCL) to mitigate the above two types of similarity problems: Similar Relations and Similar Entities. By jointly optimizing a contrastive instance loss with a relation classification loss on seen relations, RCL can learn subtle differences between instances and simultaneously achieve better separation between different relation categories in the representation space. In particular, in contrastive instance learning, dropout noise is adopted as data augmentation to amplify the semantic difference between similar instances without breaking the relation representation, so as to encourage the model to learn more effective representations. Experiments conducted on two well-known datasets show that RCL can significantly outperform previous state-of-the-art methods. Moreover, if the seen relations are insufficient, RCL can also obtain comparable results with the model trained on the full training set, showing the robustness of our approach.
null
null
10.18653/v1/2022.findings-naacl.188
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,445
inproceedings
pham-etal-2022-latent
Latent Group Dropout for Multilingual and Multidomain Machine Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.189/
Pham, Minh-Quang and Yvon, Fran{\c{c}}ois and Crego, Josep
Findings of the Association for Computational Linguistics: NAACL 2022
2469--2481
Multidomain and multilingual machine translation often rely on parameter sharing strategies, where large portions of the network are meant to capture the commonalities of the tasks at hand, while smaller parts are reserved to model the peculiarities of a language or a domain. In adapter-based approaches, these strategies are hardcoded in the network architecture, independent of the similarities between tasks. In this work, we propose a new method to better take advantage of these similarities, using a latent-variable model. We also develop new techniques to train this model end-to-end and report experimental results showing that the learned patterns are both meaningful and yield improved translation performance without any increase of the model size.
null
null
10.18653/v1/2022.findings-naacl.189
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,446
inproceedings
chen-etal-2022-atp
{ATP}: {AMR}ize Then Parse! Enhancing {AMR} Parsing with {P}seudo{AMR}s
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.190/
Chen, Liang and Wang, Peiyi and Xu, Runxin and Liu, Tianyu and Sui, Zhifang and Chang, Baobao
Findings of the Association for Computational Linguistics: NAACL 2022
2482--2496
As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that auxiliary tasks which are semantically or formally related can better enhance AMR parsing. We find that 1) Semantic role labeling (SRL) and dependency parsing (DP) bring more performance gain than other tasks, e.g., MT and summarization, in the text-to-AMR transition even with much less data. 2) To make a better fit for AMR, data from auxiliary tasks should be properly {\textquotedblleft}AMRized{\textquotedblright} to PseudoAMR before training. Knowledge from shallow-level parsing tasks can be better transferred to AMR Parsing with structure transform. 3) Intermediate-task learning is a better paradigm to introduce auxiliary tasks to AMR parsing, compared to multitask learning. From an empirical perspective, we propose a principled method to involve auxiliary tasks to boost AMR parsing. Extensive experiments show that our method achieves new state-of-the-art performance on different benchmarks, especially in topology-related scores. Code and models are released at \url{https://github.com/PKUnlp-icler/ATP}.
null
null
10.18653/v1/2022.findings-naacl.190
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,447
inproceedings
su-etal-2022-tacl
{T}a{CL}: Improving {BERT} Pre-training with Token-aware Contrastive Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.191/
Su, Yixuan and Liu, Fangyu and Meng, Zaiqiao and Lan, Tian and Shu, Lei and Shareghi, Ehsan and Collier, Nigel
Findings of the Association for Computational Linguistics: NAACL 2022
2497--2507
Masked language models (MLMs) such as BERT have revolutionized the field of Natural Language Understanding in the past few years. However, existing pre-trained MLMs often output an anisotropic distribution of token representations that occupies a narrow subset of the entire representation space. Such token representations are not ideal, especially for tasks that demand discriminative semantic meanings of distinct tokens. In this work, we propose TaCL (Token-aware Contrastive Learning), a novel continual pre-training approach that encourages BERT to learn an isotropic and discriminative distribution of token representations. TaCL is fully unsupervised and requires no additional data. We extensively test our approach on a wide range of English and Chinese benchmarks. The results show that TaCL brings consistent and notable improvements over the original BERT model. Furthermore, we conduct detailed analysis to reveal the merits and inner-workings of our approach.
null
null
10.18653/v1/2022.findings-naacl.191
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,448
inproceedings
chen-etal-2022-mtg
{MTG}: A Benchmark Suite for Multilingual Text Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.192/
Chen, Yiran and Song, Zhenqiao and Wu, Xianze and Wang, Danqing and Xu, Jingjing and Chen, Jiaze and Zhou, Hao and Li, Lei
Findings of the Association for Computational Linguistics: NAACL 2022
2508--2527
We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation. It is the first-proposed multilingual multiway text generation dataset with the largest human-annotated data (400k). It includes four generation tasks (story generation, question generation, title generation and text summarization) across five languages (English, German, French, Spanish and Chinese). The multiway setup enables testing knowledge transfer capabilities for a model across languages and tasks. Using MTG, we train and analyze several popular multilingual generation models from different aspects. Our benchmark suite fosters model performance enhancement with more human-annotated parallel data. It provides comprehensive evaluations with diverse generation scenarios. Code and data are available at \url{https://github.com/zide05/MTG}.
null
null
10.18653/v1/2022.findings-naacl.192
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,449
inproceedings
wolfson-etal-2022-weakly
Weakly Supervised Text-to-{SQL} Parsing through Question Decomposition
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.193/
Wolfson, Tomer and Deutch, Daniel and Berant, Jonathan
Findings of the Association for Computational Linguistics: NAACL 2022
2528--2542
Text-to-SQL parsers are crucial in enabling non-experts to effortlessly query relational data. Training such parsers, by contrast, generally requires expertise in annotating natural language (NL) utterances with corresponding SQL queries. In this work, we propose a weak supervision approach for training text-to-SQL parsers. We take advantage of the recently proposed question meaning representation called QDMR, an intermediate between NL and formal query languages. Given questions, their QDMR structures (annotated by non-experts or automatically predicted), and the answers, we are able to automatically synthesize SQL queries that are used to train text-to-SQL models. We test our approach by experimenting on five benchmark datasets. Our results show that the weakly supervised models perform competitively with those trained on annotated NL-SQL data. Overall, we effectively train text-to-SQL parsers, while using zero SQL annotations.
null
null
10.18653/v1/2022.findings-naacl.193
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,450
inproceedings
lin-etal-2022-detect
Detect Rumors in Microblog Posts for Low-Resource Domains via Adversarial Contrastive Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.194/
Lin, Hongzhan and Ma, Jing and Chen, Liangliang and Yang, Zhiwei and Cheng, Mingfei and Guang, Chen
Findings of the Association for Computational Linguistics: NAACL 2022
2543--2556
Massive false rumors emerging along with breaking news or trending topics severely hinder the truth. Existing rumor detection approaches achieve promising performance on yesterday's news, since there is a sufficient corpus collected from the same domain for model training. However, they are poor at detecting rumors about unforeseen events, especially those propagated in minority languages, due to the lack of training data and prior knowledge (i.e., low-resource regimes). In this paper, we propose an adversarial contrastive learning framework to detect rumors by adapting the features learned from well-resourced rumor data to that of the low-resourced. Our model explicitly overcomes the restriction of domain and/or language usage via language alignment and a novel supervised contrastive training paradigm. Moreover, we develop an adversarial augmentation mechanism to further enhance the robustness of low-resource rumor representation. Extensive experiments conducted on two low-resource datasets collected from real-world microblog platforms demonstrate that our framework achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.
null
null
10.18653/v1/2022.findings-naacl.194
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,451
inproceedings
rony-etal-2022-dialokg
{D}ialo{KG}: Knowledge-Structure Aware Task-Oriented Dialogue Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.195/
Rony, Md Rashad Al Hasan and Usbeck, Ricardo and Lehmann, Jens
Findings of the Association for Computational Linguistics: NAACL 2022
2557--2571
Task-oriented dialogue generation is challenging since the underlying knowledge is often dynamic and effectively incorporating knowledge into the learning process is hard. It is particularly challenging to generate both human-like and informative responses in this setting. Recent research primarily focused on various knowledge distillation methods where the underlying relationship between the facts in a knowledge base is not effectively captured. In this paper, we go one step further and demonstrate how the structural information of a knowledge graph can improve the system's inference capabilities. Specifically, we propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model. Our proposed system views relational knowledge as a knowledge graph and introduces (1) a structure-aware knowledge embedding technique, and (2) a knowledge graph-weighted attention masking strategy to facilitate the system in selecting relevant information during the dialogue generation. An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
null
null
10.18653/v1/2022.findings-naacl.195
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,452
inproceedings
zhang-etal-2022-zero
Zero-Shot Event Detection Based on Ordered Contrastive Learning and Prompt-Based Prediction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.196/
Zhang, Senhui and Ji, Tao and Ji, Wendi and Wang, Xiaoling
Findings of the Association for Computational Linguistics: NAACL 2022
2572--2580
Event detection is a classic natural language processing task. However, the constantly emerging new events make supervised methods not applicable to unseen types. Previous zero-shot event detection methods either require predefined event types as heuristic rules or resort to external semantic analyzing tools. To overcome this weakness, we propose an end-to-end framework named Zero-Shot Event Detection Based on Ordered Contrastive Learning and Prompt-Based Prediction (ZEOP). By creatively introducing multiple contrastive samples with ordered similarities, the encoder can learn event representations from both instance-level and class-level, which makes the distinctions between different unseen types more significant. Meanwhile, we utilize the prompt-based prediction to identify trigger words without relying on external resources. Experiments demonstrate that our model detects events more effectively and accurately than state-of-the-art methods.
null
null
10.18653/v1/2022.findings-naacl.196
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,453
inproceedings
chen-etal-2022-ketod
{KETOD}: Knowledge-Enriched Task-Oriented Dialogue
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.197/
Chen, Zhiyu and Liu, Bing and Moon, Seungwhan and Sankar, Chinnadhurai and Crook, Paul and Wang, William Yang
Findings of the Association for Computational Linguistics: NAACL 2022
2581--2593
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains. Towards building a human-like assistant that can converse naturally and seamlessly with users, it is important to build a dialogue system that conducts both types of conversations effectively. In this work, we investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model. To this end, we create a new dataset, KETOD (Knowledge-Enriched Task-Oriented Dialogue), where we naturally enrich task-oriented dialogues with chit-chat based on relevant entity knowledge. We also propose two new models, SimpleToDPlus and Combiner, for the proposed task. Experimental results on both automatic and human evaluations show that the proposed methods can significantly improve the performance in knowledge-enriched response generation while maintaining a competitive task-oriented dialog performance. We believe our new dataset will be a valuable resource for future studies. Our dataset and code are publicly available at \url{https://github.com/facebookresearch/ketod}.
null
null
10.18653/v1/2022.findings-naacl.197
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,454
inproceedings
yang-etal-2022-tanet
{TAN}et: {T}hread-{A}ware {P}retraining for {A}bstractive {C}onversational {S}ummarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.198/
Yang, Ze and Wang, Christian and Tian, Zhoujin and Wu, Wei and Li, Zhoujun
Findings of the Association for Computational Linguistics: NAACL 2022
2594--2607
Although pre-trained language models (PLMs) have achieved great success and become a milestone in NLP, abstractive conversational summarization remains a challenging but less studied task. The difficulty lies in two aspects. One is the lack of large-scale conversational summary data. The other is that applying existing pre-trained models to this task is tricky because of the structural dependence within the conversation, its informal expression, etc. In this work, we first build a large-scale (11M) pretraining dataset called RCSum, based on the multi-person discussions in the Reddit community. We then present TANet, a thread-aware Transformer-based network. Unlike the existing pre-trained models that treat a conversation as a sequence of sentences, we argue that the inherent contextual dependency among the utterances plays an essential role in understanding the entire conversation and thus propose two new techniques to incorporate the structural information into our model. The first is thread-aware attention, which is computed by taking into account the contextual dependency among utterances. The second is a thread prediction loss, which we apply to predict the relations between utterances. We evaluate our model on four datasets of real conversations, covering meeting transcripts, customer-service records, and forum threads. Experimental results demonstrate that TANet achieves a new state-of-the-art in terms of both automatic evaluation and human judgment.
null
null
10.18653/v1/2022.findings-naacl.198
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,455
inproceedings
fu-etal-2022-adapterbias
{A}dapter{B}ias: Parameter-efficient Token-dependent Representation Shift for Adapters in {NLP} Tasks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.199/
Fu, Chin-Lun and Chen, Zih-Ching and Lee, Yun-Ru and Lee, Hung-yi
Findings of the Association for Computational Linguistics: NAACL 2022
2608--2621
Transformer-based pre-trained models with millions of parameters require large storage. Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters. In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed. AdapterBias adds a token-dependent shift to the hidden output of transformer layers to adapt to downstream tasks with only a vector and a linear layer. Extensive experiments are conducted to demonstrate the effectiveness of AdapterBias. The experiments show that our proposed method can dramatically reduce the trainable parameters compared to the previous works with a minimal decrease in task performances compared with fine-tuned pre-trained models. We further find that AdapterBias automatically learns to assign more significant representation shifts to the tokens related to the task in consideration.
null
null
10.18653/v1/2022.findings-naacl.199
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,456
inproceedings
lin-etal-2022-bridging
Bridging the Gap between Training and Inference: Multi-Candidate Optimization for Diverse Neural Machine Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.200/
Lin, Huan and Yang, Baosong and Yao, Liang and Liu, Dayiheng and Zhang, Haibo and Xie, Jun and Zhang, Min and Su, Jinsong
Findings of the Association for Computational Linguistics: NAACL 2022
2622--2632
Diverse NMT aims at generating multiple diverse yet faithful translations given a source sentence. In this paper, we investigate a common shortcoming in existing diverse NMT studies: the model is usually trained with a single reference, while expected to generate multiple candidate translations at inference. The discrepancy between training and inference enlarges the confidence variance and quality gap among candidate translations and thus hinders model performance. To deal with this defect, we propose a multi-candidate optimization framework for diverse NMT. Specifically, we define assessments to score the diversity and the quality of candidate translations during training, and optimize the diverse NMT model with two strategies based on reinforcement learning, namely hard constrained training and soft constrained training. We conduct experiments on NIST Chinese-English and WMT14 English-German translation tasks. The results illustrate that our framework is transparent to basic diverse NMT models, and universally makes a better trade-off between diversity and quality. Our source code is available at \url{https://github.com/DeepLearnXMU/MultiCanOptim}.
null
null
10.18653/v1/2022.findings-naacl.200
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,457
inproceedings
liu-chen-2022-learning
Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.201/
Liu, Zhengyuan and Chen, Nancy
Findings of the Association for Computational Linguistics: NAACL 2022
2633--2648
Text style transfer is an important task in controllable language generation. Supervised approaches have pushed performance improvement on style-oriented rewriting such as formality conversion. However, challenges remain due to the scarcity of large-scale parallel data in many domains. While unsupervised approaches do not rely on annotated sentence pairs for each style, they are often plagued with instability issues such as mode collapse or quality degradation. To take advantage of both supervised and unsupervised paradigms and tackle the challenges, in this work, we propose a semi-supervised framework for text style transfer. First, the learning process is bootstrapped with supervision guided by automatically constructed pseudo-parallel pairs using lexical and semantic-based methods. Then the model learns from unlabeled data via reinforcement rewards. Specifically, we propose to improve the sequence-to-sequence policy gradient via stepwise reward optimization, providing fine-grained learning signals and stabilizing the reinforced learning process. Experimental results show that the proposed approach achieves state-of-the-art performance on multiple datasets, and produces effective generation with as minimal as 10{\%} of training data.
null
null
10.18653/v1/2022.findings-naacl.201
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,458
inproceedings
zeng-etal-2022-ea2e
{EA}$^2${E}: Improving Consistency with Event Awareness for Document-Level Argument Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.202/
Zeng, Qi and Zhan, Qiusi and Ji, Heng
Findings of the Association for Computational Linguistics: NAACL 2022
2649--2655
Events are inter-related in documents. Motivated by the one-sense-per-discourse theory, we hypothesize that a participant tends to play consistent roles across multiple events in the same document. However, recent work on document-level event argument extraction models each individual event in isolation and therefore causes inconsistency among extracted arguments across events, which further causes discrepancies for downstream applications such as event knowledge base population, question answering, and hypothesis generation. In this work, we formulate event argument consistency as the constraints from event-event relations under the document-level setting. To improve consistency, we introduce the Event-Aware Argument Extraction (EA$^2$E) model with augmented context for training and inference. Experimental results on the WIKIEVENTS and ACE2005 datasets demonstrate the effectiveness of EA$^2$E compared to baseline methods.
null
null
10.18653/v1/2022.findings-naacl.202
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,459
inproceedings
ying-etal-2022-label
Label Refinement via Contrastive Learning for Distantly-Supervised Named Entity Recognition
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.203/
Ying, Huaiyuan and Luo, Shengxuan and Dang, Tiantian and Yu, Sheng
Findings of the Association for Computational Linguistics: NAACL 2022
2656--2666
Distantly-supervised named entity recognition (NER) locates and classifies entities using only knowledge bases and an unlabeled corpus to mitigate the reliance on human-annotated labels. The distantly annotated data suffer from the noise in labels, and previous works on DSNER have proved the importance of pre-refining distant labels with hand-crafted rules and extra existing semantic information. In this work, we explore a way to directly learn the distant label refinement knowledge by imitating annotations of different qualities and comparing these annotations in contrastive learning frameworks. The proposed distant label refinement model can give modified suggestions on distant data without additional supervised labels, and thus reduces the requirement on the quality of the knowledge bases. We perform extensive experiments and observe that recent and state-of-the-art DSNER methods gain evident benefits with our method.
null
null
10.18653/v1/2022.findings-naacl.203
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,460
inproceedings
fan-etal-2022-negative
Negative Sample is Negative in Its Own Way: Tailoring Negative Sentences for Image-Text Retrieval
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.204/
Fan, Zhihao and Wei, Zhongyu and Li, Zejun and Wang, Siyuan and Huang, Xuanjing and Fan, Jianqing
Findings of the Association for Computational Linguistics: NAACL 2022
2667--2678
The matching model is essential for the Image-Text Retrieval framework. Existing research usually trains the model with a triplet loss and explores various strategies to retrieve hard negative sentences in the dataset. We argue that the current retrieval-based negative sample construction approach is limited by the scale of the dataset and thus fails to identify negative samples of high difficulty for every image. We propose TAiloring neGative Sentences with Discrimination and Correction (TAGS-DC) to generate synthetic sentences automatically as negative samples. TAGS-DC is composed of masking and refilling to generate synthetic negative sentences with higher difficulty. To keep the difficulty during training, we mutually improve the retrieval and generation through parameter sharing. To further utilize the fine-grained semantics of mismatch in the negative sentence, we propose two auxiliary tasks, namely word discrimination and word correction, to improve the training. In experiments, we verify the effectiveness of our model on MS-COCO and Flickr30K compared with current state-of-the-art models and demonstrate its robustness and faithfulness in further analysis.
null
null
10.18653/v1/2022.findings-naacl.204
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,461
inproceedings
cao-etal-2022-explore
Explore More Guidance: A Task-aware Instruction Network for Sign Language Translation Enhanced with Data Augmentation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.205/
Cao, Yong and Li, Wei and Li, Xianzhi and Chen, Min and Chen, Guangyong and Hu, Long and Li, Zhengdao and Hwang, Kai
Findings of the Association for Computational Linguistics: NAACL 2022
2679--2690
Sign language recognition and translation first uses a recognition module to generate glosses from sign language videos and then employs a translation module to translate glosses into spoken sentences. Most existing works focus on the recognition step, while paying less attention to sign language translation. In this work, we propose a task-aware instruction network, namely TIN-SLT, for sign language translation, by introducing the instruction module and a learning-based feature fusion strategy into a Transformer network. In this way, the pre-trained model's language ability can be well explored and utilized to further boost the translation performance. Moreover, by exploring the representation space of sign language glosses and target spoken language, we propose a multi-level data augmentation scheme to adjust the data distribution of the training set. We conduct extensive experiments on two challenging benchmark datasets, PHOENIX-2014-T and ASLG-PC12, on which our method outperforms former best solutions by 1.65 and 1.42 in terms of BLEU-4. Our code and trained networks will be available upon the publication of this work.
null
null
10.18653/v1/2022.findings-naacl.205
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,462
inproceedings
wang-etal-2022-rovist
{R}o{V}i{ST}: Learning Robust Metrics for Visual Storytelling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.206/
Wang, Eileen and Han, Caren and Poon, Josiah
Findings of the Association for Computational Linguistics: NAACL 2022
2691--2702
Visual storytelling (VST) is the task of generating a story paragraph that describes a given image sequence. Most existing storytelling approaches have evaluated their models using traditional natural language generation metrics like BLEU or CIDEr. However, such metrics based on $n$-gram matching tend to have poor correlation with human evaluation scores and do not explicitly consider other criteria necessary for storytelling such as sentence structure or topic coherence. Moreover, a single score is not enough to assess a story as it does not inform us about what specific errors were made by the model. In this paper, we propose 3 evaluation metric sets that analyse which aspects we would look for in a good story: 1) visual grounding, 2) coherence, and 3) non-redundancy. We measure the reliability of our metric sets by analysing their correlation with human judgement scores on a sample of machine stories obtained from 4 state-of-the-art models trained on the Visual Storytelling Dataset (VIST). Our metric sets outperform other metrics on human correlation, and could serve as a learning-based evaluation metric set that is complementary to existing rule-based metrics.
null
null
10.18653/v1/2022.findings-naacl.206
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,463
inproceedings
bai-etal-2022-query2particles
{Q}uery2{P}articles: Knowledge Graph Reasoning with Particle Embeddings
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.207/
Bai, Jiaxin and Wang, Zihao and Zhang, Hongming and Song, Yangqiu
Findings of the Association for Computational Linguistics: NAACL 2022
2703--2714
Answering complex logical queries on incomplete knowledge graphs (KGs) with missing edges is a fundamental and important task for knowledge graph reasoning. The query embedding method is proposed to answer these queries by jointly encoding queries and entities to the same embedding space. Then the answer entities are selected according to the similarities between the entity embeddings and the query embedding. As the answers to a complex query are obtained from a combination of logical operations over sub-queries, the embeddings of the answer entities may not always follow a uni-modal distribution in the embedding space. Thus, it is challenging to simultaneously retrieve a set of diverse answers from the embedding space using a single and concentrated query representation such as a vector or a hyper-rectangle. To better cope with queries with diversified answers, we propose Query2Particles (Q2P), a complex KG query answering method. Q2P encodes each query into multiple vectors, named particle embeddings. By doing so, the candidate answers can be retrieved from different areas over the embedding space using the maximal similarities between the entity embeddings and any of the particle embeddings. Meanwhile, the corresponding neural logic operations are defined to support its reasoning over arbitrary first-order logic queries. The experiments show that Query2Particles achieves state-of-the-art performance on the complex query answering tasks on FB15k, FB15K-237, and NELL knowledge graphs.
null
null
10.18653/v1/2022.findings-naacl.207
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,464
inproceedings
tedeschi-etal-2022-id10m
{ID}10{M}: Idiom Identification in 10 Languages
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.findings-naacl.208/
Tedeschi, Simone and Martelli, Federico and Navigli, Roberto
Findings of the Association for Computational Linguistics: NAACL 2022
2715--2726
Idioms are phrases which present a figurative meaning that cannot be (completely) derived by looking at the meaning of their individual components. Identifying and understanding idioms in context is a crucial goal and a key challenge in a wide range of Natural Language Understanding tasks. Although efforts have been undertaken in this direction, the automatic identification and understanding of idioms is still a largely under-investigated area, especially when operating in a multilingual scenario. In this paper, we address such limitations and put forward several new contributions: we propose a novel multilingual Transformer-based system for the identification of idioms; we produce a high-quality automatically-created training dataset in 10 languages, along with a novel manually-curated evaluation benchmark; finally, we carry out a thorough performance analysis and release our evaluation suite at \url{https://github.com/Babelscape/ID10M}.
null
null
10.18653/v1/2022.findings-naacl.208
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,465