Dataset schema (as reported by the dataset viewer):

    column               dtype            value summary
    entry_type           stringclasses    4 values
    citation_key         stringlengths    10 to 110 chars
    title                stringlengths    6 to 276 chars
    editor               stringclasses    723 values
    month                stringclasses    69 values
    year                 stringdate       1963-01-01 to 2022-01-01
    address              stringclasses    202 values
    publisher            stringclasses    41 values
    url                  stringlengths    34 to 62 chars
    author               stringlengths    6 to 2.07k chars
    booktitle            stringclasses    861 values
    pages                stringlengths    1 to 12 chars
    abstract             stringlengths    302 to 2.4k chars
    journal              stringclasses    5 values
    volume               stringclasses    24 values
    doi                  stringlengths    20 to 39 chars
    n                    stringclasses    3 values
    wer                  stringclasses    1 value
    uas                  null             -
    language             stringclasses    3 values
    isbn                 stringclasses    34 values
    recall               null             -
    number               stringclasses    8 values
    a                    null             -
    b                    null             -
    c                    null             -
    k                    null             -
    f1                   stringclasses    4 values
    r                    stringclasses    2 values
    mci                  stringclasses    1 value
    p                    stringclasses    2 values
    sd                   stringclasses    1 value
    female               stringclasses    0 values
    m                    stringclasses    0 values
    food                 stringclasses    1 value
    f                    stringclasses    1 value
    note                 stringclasses    20 values
    __index_level_0__    int64            22k to 106k
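The schema can be inspected programmatically with the Hugging Face `datasets` library. A minimal sketch, assuming the dump comes from a Hub-hosted dataset; the ID `user/acl-bib` below is a hypothetical placeholder for the real path:

```python
from datasets import load_dataset

# Hypothetical Hub ID; substitute the dataset's real path.
ds = load_dataset("user/acl-bib", split="train")

print(ds.features)  # column -> dtype, matching the schema table above
row = ds[0]
# Metric-style columns (wer, uas, f1, recall, ...) are null on most rows;
# keep only the populated bibliographic fields.
bib = {k: v for k, v in row.items() if v is not None}
print(bib["citation_key"], "->", bib["title"])
```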
@inproceedings{chen-gao-2022-curriculum,
    title = "Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding",
    author = "Chen, Zeming and Gao, Qiyue",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.234/",
    doi = "10.18653/v1/2022.naacl-main.234",
    pages = "3204--3219",
    abstract = "In the age of large transformer language models, linguistic evaluation plays an important role in diagnosing models' abilities and limitations in natural language understanding. However, current evaluation methods show some significant shortcomings. In particular, they do not provide insight into how well a language model captures distinct linguistic skills essential for language understanding and reasoning. Thus, they fail to effectively map out the aspects of language understanding that remain challenging to existing models, which makes it hard to discover potential limitations in models and datasets. In this paper, we introduce Curriculum as a new format of NLI benchmark for evaluation of broad-coverage linguistic phenomena. Curriculum contains a collection of datasets that covers 36 types of major linguistic phenomena and an evaluation procedure for diagnosing how well a language model captures reasoning skills for distinct types of linguistic phenomena. We show that this linguistic-phenomena-driven benchmark can serve as an effective tool for diagnosing model behavior and verifying model learning quality. In addition, our experiments provide insight into the limitations of existing benchmark datasets and state-of-the-art models that may encourage future research on re-designing datasets, model architectures, and learning objectives.",
}
% __index_level_0__: 23,933
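Each row maps directly onto a BibTeX entry like the one above: `entry_type` supplies the `@...` type, `citation_key` the key, and every non-null column a field. A minimal conversion sketch; `row_to_bibtex` is an illustrative helper, not part of the dataset, and it braces every value, so `month = jul` becomes a literal string rather than the BibTeX macro, which BibTeX also accepts:

```python
def row_to_bibtex(row: dict) -> str:
    """Render one dataset row as a BibTeX entry, skipping null columns."""
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    fields = [(k, v) for k, v in row.items() if v is not None and k not in skip]
    body = ",\n".join(f"    {k} = {{{v}}}" for k, v in fields)
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

print(row_to_bibtex(ds[0]))  # ds from the loading sketch above
```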
@inproceedings{oota-etal-2022-neural,
    title = "Neural Language Taskonomy: Which {NLP} Tasks are the most Predictive of f{MRI} Brain Activity?",
    author = "Oota, Subba Reddy and Arora, Jashn and Agarwal, Veeral and Marreddy, Mounika and Gupta, Manish and Surampudi, Bapi",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.235/",
    doi = "10.18653/v1/2022.naacl-main.235",
    pages = "3220--3237",
    abstract = "Several popular Transformer-based language models have been found to be successful for text-driven brain encoding. However, existing literature leverages only pretrained text Transformer models and has not explored the efficacy of task-specific learned Transformer representations. In this work, we explore transfer learning from representations learned for ten popular natural language processing tasks (two syntactic and eight semantic) for predicting brain responses from two diverse datasets: Pereira (subjects reading sentences from paragraphs) and Narratives (subjects listening to spoken stories). Encoding models based on task features are used to predict activity in different regions across the whole brain. Features from coreference resolution, NER, and shallow syntax parsing explain greater variance for the reading activity. On the other hand, for the listening activity, tasks such as paraphrase generation, summarization, and natural language inference show better encoding performance. Experiments across all 10 task representations provide the following cognitive insights: (i) language left hemisphere has higher predictive brain activity versus language right hemisphere, (ii) posterior medial cortex, temporo-parieto-occipital junction, dorsal frontal lobe have higher correlation versus early auditory and auditory association cortex, (iii) syntactic and semantic tasks display a good predictive performance across brain regions for reading and listening stimuli, respectively.",
}
% __index_level_0__: 23,934
@inproceedings{ribeiro-etal-2022-factgraph,
    title = "{F}act{G}raph: Evaluating Factuality in Summarization with Semantic Graph Representations",
    author = "Ribeiro, Leonardo F. R. and Liu, Mengwen and Gurevych, Iryna and Dreyer, Markus and Bansal, Mohit",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.236/",
    doi = "10.18653/v1/2022.naacl-main.236",
    pages = "3238--3253",
    abstract = "Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, severely restricting their trust and usage in real-world applications. Recent works have shown promising improvements in factuality error identification using text or dependency arc entailments; however, they do not consider the entire semantic graph simultaneously. To this end, we propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MR), which are more suitable for factuality evaluation. MRs describe core semantic concepts and their relations, aggregating the main content in both document and summary in a canonical form, and reducing data sparsity. FactGraph encodes such graphs using a graph encoder augmented with structure-aware adapters to capture interactions among the concepts based on the graph connectivity, along with text representations using an adapter-based text encoder. Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15{\%}. Furthermore, FactGraph improves performance on identifying content verifiability errors and better captures subsentence-level factual inconsistencies.",
}
% __index_level_0__: 23,935
@inproceedings{lee-etal-2022-unsupervised,
    title = "Unsupervised Paraphrasability Prediction for Compound Nominalizations",
    author = "Lee, John Sie Yuen and Lim, Ho Hung and Webster, Carol",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.237/",
    doi = "10.18653/v1/2022.naacl-main.237",
    pages = "3254--3263",
    abstract = "Commonly found in academic and formal texts, a nominalization uses a deverbal noun to describe an event associated with its corresponding verb. Nominalizations can be difficult to interpret because of ambiguous semantic relations between the deverbal noun and its arguments. Automatic generation of clausal paraphrases for nominalizations can help disambiguate their meaning. However, previous work has not identified cases where it is awkward or impossible to paraphrase a compound nominalization. This paper investigates unsupervised prediction of paraphrasability, which determines whether the prenominal modifier of a nominalization can be re-written as a noun or adverb in a clausal paraphrase. We adopt the approach of overgenerating candidate paraphrases followed by candidate ranking with a neural language model. In experiments on an English dataset, we show that features from an Abstract Meaning Representation graph lead to statistically significant improvement in both paraphrasability prediction and paraphrase generation.",
}
% __index_level_0__: 23,936
@inproceedings{yamada-etal-2022-global,
    title = "Global Entity Disambiguation with {BERT}",
    author = "Yamada, Ikuya and Washio, Koki and Shindo, Hiroyuki and Matsumoto, Yuji",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.238/",
    doi = "10.18653/v1/2022.naacl-main.238",
    pages = "3264--3271",
    abstract = "We propose a global entity disambiguation (ED) model based on BERT. To capture global contextual information for ED, our model treats not only words but also entities as input tokens, and solves the task by sequentially resolving mentions to their referent entities and using resolved entities as inputs at each step. We train the model using a large entity-annotated corpus obtained from Wikipedia. We achieve new state-of-the-art results on five standard ED datasets: AIDA-CoNLL, MSNBC, AQUAINT, ACE2004, and WNED-WIKI. The source code and model checkpoint are available at \url{https://github.com/studio-ousia/luke}.",
}
% __index_level_0__: 23,937
@inproceedings{huang-etal-2022-clues,
    title = "Clues Before Answers: Generation-Enhanced Multiple-Choice {QA}",
    author = "Huang, Zixian and Wu, Ao and Zhou, Jiaying and Gu, Yu and Zhao, Yue and Cheng, Gong",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.239/",
    doi = "10.18653/v1/2022.naacl-main.239",
    pages = "3272--3287",
    abstract = "A trending paradigm for multiple-choice question answering (MCQA) is using a text-to-text framework. By unifying data in different tasks into a single text-to-text format, it trains a generative encoder-decoder model which is both powerful and universal. However, a side effect of twisting a generation target to fit the classification nature of MCQA is the under-utilization of the decoder and the knowledge that can be decoded. To exploit the generation capability and underlying knowledge of a pre-trained encoder-decoder model, in this paper, we propose a generation-enhanced MCQA model named GenMC. It generates a clue from the question and then leverages the clue to enhance a reader for MCQA. It outperforms text-to-text models on multiple MCQA datasets.",
}
% __index_level_0__: 23,938
@inproceedings{liu-etal-2022-towards-efficient,
    title = "Towards Efficient {NLP}: A Standard Evaluation and A Strong Baseline",
    author = "Liu, Xiangyang and Sun, Tianxiang and He, Junliang and Wu, Jiawen and Wu, Lingling and Zhang, Xinyu and Jiang, Hao and Cao, Zhao and Huang, Xuanjing and Qiu, Xipeng",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.240/",
    doi = "10.18653/v1/2022.naacl-main.240",
    pages = "3288--3303",
    abstract = "Supersized pre-trained language models have pushed the accuracy of various natural language processing (NLP) tasks to a new state-of-the-art (SOTA). Rather than pursuing the reachless SOTA accuracy, more and more researchers have started paying attention to model efficiency and usability. Different from accuracy, the metric for efficiency varies across different studies, making them hard to compare fairly. To that end, this work presents ELUE (Efficient Language Understanding Evaluation), a standard evaluation, and a public leaderboard for efficient NLP models. ELUE is dedicated to depicting the Pareto Frontier for various language understanding tasks, such that it can tell whether and how much a method achieves Pareto improvement. Along with the benchmark, we also release a strong baseline, ElasticBERT, which allows BERT to exit at any layer in both static and dynamic ways. We demonstrate that ElasticBERT, despite its simplicity, outperforms or performs on par with SOTA compressed and early exiting models. With ElasticBERT, the proposed ELUE has a strong Pareto Frontier and makes for a better evaluation of efficient NLP models.",
}
% __index_level_0__: 23,939
@inproceedings{sun-etal-2022-stylized,
    title = "Stylized Knowledge-Grounded Dialogue Generation via Disentangled Template Rewriting",
    author = "Sun, Qingfeng and Xu, Can and Hu, Huang and Wang, Yujing and Miao, Jian and Geng, Xiubo and Chen, Yining and Xu, Fei and Jiang, Daxin",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.241/",
    doi = "10.18653/v1/2022.naacl-main.241",
    pages = "3304--3318",
    abstract = "Current Knowledge-Grounded Dialogue Generation (KDG) models specialize in producing rational and factual responses. However, to establish long-term relationships with users, the KDG model needs the capability to generate responses in a desired style or attribute. Thus, we study a new problem: Stylized Knowledge-Grounded Dialogue Generation (SKDG). It presents two challenges: (1) How to train an SKDG model where no {\ensuremath{<}}context, knowledge, stylized response{\ensuremath{>}} triples are available. (2) How to cohere with context and preserve the knowledge when generating a stylized response. In this paper, we propose a novel disentangled template rewriting (DTR) method which generates responses via combining disentangled style templates (from a monolingual stylized corpus) and content templates (from the KDG corpus). The entire framework is end-to-end differentiable and learned without supervision. Extensive experiments on two benchmarks indicate that DTR achieves a significant improvement on all evaluation metrics compared with previous state-of-the-art stylized dialogue generation methods. Besides, DTR achieves comparable performance with the state-of-the-art KDG methods in the standard KDG evaluation setting.",
}
% __index_level_0__: 23,940
@inproceedings{wang-etal-2022-luna,
    title = "{LUNA}: Learning Slot-Turn Alignment for Dialogue State Tracking",
    author = "Wang, Yifan and Zhao, Jing and Bao, Junwei and Duan, Chaoqun and Wu, Youzheng and He, Xiaodong",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.242/",
    doi = "10.18653/v1/2022.naacl-main.242",
    pages = "3319--3328",
    abstract = "Dialogue state tracking (DST) aims to predict the current dialogue state given the dialogue history. Existing methods generally exploit the utterances of all dialogue turns to assign a value to each slot. This could lead to suboptimal results due to the information introduced from irrelevant utterances in the dialogue history, which may be useless and can even cause confusion. To address this problem, we propose LUNA, a SLot-TUrN Alignment enhanced approach. It first explicitly aligns each slot with its most relevant utterance, then further predicts the corresponding value based on this aligned utterance instead of all dialogue utterances. Furthermore, we design a slot ranking auxiliary task to learn the temporal correlation among slots which could facilitate the alignment. Comprehensive experiments are conducted on three multi-domain task-oriented dialogue datasets, MultiWOZ 2.0, MultiWOZ 2.1, and MultiWOZ 2.2. The results show that LUNA achieves new state-of-the-art results on these datasets.",
}
% __index_level_0__: 23,941
@inproceedings{chen-etal-2022-crossroads,
    title = "Crossroads, Buildings and Neighborhoods: A Dataset for Fine-grained Location Recognition",
    author = "Chen, Pei and Xu, Haotian and Zhang, Cheng and Huang, Ruihong",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.243/",
    doi = "10.18653/v1/2022.naacl-main.243",
    pages = "3329--3339",
    abstract = "General domain Named Entity Recognition (NER) datasets like CoNLL-2003 mostly annotate coarse-grained location entities such as a country or a city. But many applications require identifying fine-grained locations from texts and mapping them precisely to geographic sites, e.g., a crossroad, an apartment building, or a grocery store. In this paper, we introduce a new dataset HarveyNER with fine-grained locations annotated in tweets. This dataset presents unique challenges and characterizes many complex and long location mentions in informal descriptions. We built strong baseline models using Curriculum Learning and experimented with different heuristic curricula to better recognize difficult location mentions. Experimental results show that the simple curricula can improve the system's performance on hard cases and its overall performance, and outperform several other baseline systems. The dataset and the baseline models can be found at \url{https://github.com/brickee/HarveyNER}.",
}
% __index_level_0__: 23,942
@inproceedings{dua-etal-2022-tricks,
    title = "Tricks for Training Sparse Translation Models",
    author = "Dua, Dheeru and Bhosale, Shruti and Goswami, Vedanuj and Cross, James and Lewis, Mike and Fan, Angela",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.244/",
    doi = "10.18653/v1/2022.naacl-main.244",
    pages = "3340--3345",
    abstract = "Multi-task learning with an unbalanced data distribution skews model learning towards high resource tasks, especially when model capacity is fixed and fully shared across all tasks. Sparse scaling architectures, such as BASELayers, provide flexible mechanisms for different tasks to have a variable number of parameters, which can be useful to counterbalance skewed data distributions. We find that sparse architectures for multilingual machine translation can perform poorly out of the box and propose two straightforward techniques to mitigate this {---} a temperature heating mechanism and dense pre-training. Overall, these methods improve performance on two multilingual translation benchmarks compared to standard BASELayers and Dense scaling baselines, and, in combination, more than double model convergence speed.",
}
% __index_level_0__: 23,943
@inproceedings{zhang-etal-2022-persona,
    title = "Persona-Guided Planning for Controlling the Protagonist's Persona in Story Generation",
    author = "Zhang, Zhexin and Wen, Jiaxin and Guan, Jian and Huang, Minlie",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.245/",
    doi = "10.18653/v1/2022.naacl-main.245",
    pages = "3346--3361",
    abstract = "Endowing the protagonist with a specific personality is essential for writing an engaging story. In this paper, we aim to control the protagonist's persona in story generation, i.e., generating a story from a leading context and a persona description, where the protagonist should exhibit the specified personality through a coherent event sequence. Considering that personas are usually embodied implicitly and sparsely in stories, we propose a planning-based generation model named ConPer to explicitly model the relationship between personas and events. ConPer first plans events of the protagonist's behavior which are motivated by the specified persona through predicting one target sentence, then plans the plot as a sequence of keywords with the guidance of the predicted persona-related events and commonsense knowledge, and finally generates the whole story. Both automatic and manual evaluation results demonstrate that ConPer outperforms state-of-the-art baselines for generating more coherent and persona-controllable stories. Our code is available at \url{https://github.com/thu-coai/ConPer}.",
}
% __index_level_0__: 23,944
@inproceedings{hu-etal-2022-chef,
    title = "{CHEF}: A Pilot {C}hinese Dataset for Evidence-Based Fact-Checking",
    author = "Hu, Xuming and Guo, Zhijiang and Wu, GuanYu and Liu, Aiwei and Wen, Lijie and Yu, Philip",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.246/",
    doi = "10.18653/v1/2022.naacl-main.246",
    pages = "3362--3376",
    abstract = "The explosion of misinformation spreading in the media ecosystem calls for automated fact-checking. While misinformation spans both geographic and linguistic boundaries, most work in the field has focused on English. Datasets and tools available in other languages, such as Chinese, are limited. In order to bridge this gap, we construct CHEF, the first CHinese Evidence-based Fact-checking dataset of 10K real-world claims. The dataset covers multiple domains, ranging from politics to public health, and provides annotated evidence retrieved from the Internet. Further, we develop established baselines and a novel approach that is able to model the evidence retrieval as a latent variable, allowing joint training with the veracity prediction model in an end-to-end fashion. Extensive experiments show that CHEF will provide a challenging testbed for the development of fact-checking systems designed to retrieve and reason over non-English claims.",
}
% __index_level_0__: 23,945
@inproceedings{le-etal-2022-vgnmn,
    title = "{VGNMN}: Video-grounded Neural Module Networks for Video-Grounded Dialogue Systems",
    author = "Le, Hung and Chen, Nancy and Hoi, Steven",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.247/",
    doi = "10.18653/v1/2022.naacl-main.247",
    pages = "3377--3393",
    abstract = "Neural module networks (NMN) have achieved success in image-grounded tasks such as Visual Question Answering (VQA) on synthetic images. However, NMNs have received very limited study in video-grounded dialogue tasks. These tasks extend the complexity of traditional visual tasks with the additional visual temporal variance and language cross-turn dependencies. Motivated by recent NMN approaches on image-grounded tasks, we introduce Video-grounded Neural Module Network (VGNMN) to model the information retrieval process in video-grounded language tasks as a pipeline of neural modules. VGNMN first decomposes all language components in dialogues to explicitly resolve any entity references and detect corresponding action-based inputs from the question. The detected entities and actions are used as parameters to instantiate neural module networks and extract visual cues from the video. Our experiments show that VGNMN can achieve promising performance on a challenging video-grounded dialogue benchmark as well as a video QA benchmark.",
}
% __index_level_0__: 23,946
@inproceedings{le-etal-2022-multimodal,
    title = "Multimodal Dialogue State Tracking",
    author = "Le, Hung and Chen, Nancy and Hoi, Steven",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.248/",
    doi = "10.18653/v1/2022.naacl-main.248",
    pages = "3394--3415",
    abstract = "Designed for tracking user goals in dialogues, a dialogue state tracker is an essential component in a dialogue system. However, research on dialogue state tracking has largely been limited to unimodality, in which slots and slot values are limited by knowledge domains (e.g. restaurant domain with slots of restaurant name and price range) and are defined by specific database schema. In this paper, we propose to extend the definition of dialogue state tracking to multimodality. Specifically, we introduce a novel dialogue state tracking task to track the information of visual objects that are mentioned in video-grounded dialogues. Each new dialogue utterance may introduce a new video segment, new visual objects, or new object attributes, and a state tracker is required to update these information slots accordingly. We created a new synthetic benchmark and designed a novel baseline, Video-Dialogue Transformer Network (VDTN), for this task. VDTN combines both object-level features and segment-level features and learns contextual dependencies between videos and dialogues to generate multimodal dialogue states. We optimized VDTN for a state generation task as well as a self-supervised video understanding task which recovers video segment or object representations. Finally, we trained VDTN to use the decoded states in a response prediction task. Together with comprehensive ablation and qualitative analysis, we discovered interesting insights towards building more capable multimodal dialogue systems.",
}
% __index_level_0__: 23,947
@inproceedings{wang-etal-2022-use,
    title = "On the Use of Bert for Automated Essay Scoring: Joint Learning of Multi-Scale Essay Representation",
    author = "Wang, Yongjie and Wang, Chuang and Li, Ruobing and Lin, Hui",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.249/",
    doi = "10.18653/v1/2022.naacl-main.249",
    pages = "3416--3425",
    abstract = "In recent years, pre-trained models have become dominant in most natural language processing (NLP) tasks. However, in the area of Automated Essay Scoring (AES), pre-trained models such as BERT have not been properly used to outperform other deep learning models such as LSTM. In this paper, we introduce a novel multi-scale essay representation for BERT that can be jointly learned. We also employ multiple losses and transfer learning from out-of-domain essays to further improve the performance. Experimental results show that our approach derives much benefit from joint learning of multi-scale essay representation and obtains a near-state-of-the-art result among all deep learning models on the ASAP task. Our multi-scale essay representation also generalizes well to the CommonLit Readability Prize dataset, which suggests that the novel text representation proposed in this paper may be a new and effective choice for long-text tasks.",
}
% __index_level_0__: 23,948
@inproceedings{baumler-rudinger-2022-recognition,
    title = "Recognition of They/Them as Singular Personal Pronouns in Coreference Resolution",
    author = "Baumler, Connor and Rudinger, Rachel",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.250/",
    doi = "10.18653/v1/2022.naacl-main.250",
    pages = "3426--3432",
    abstract = "As using they/them as personal pronouns becomes increasingly common in English, it is important that coreference resolution systems work as well for individuals who use personal {\textquotedblleft}they{\textquotedblright} as they do for those who use gendered personal pronouns. We introduce a new benchmark for coreference resolution systems which evaluates singular personal {\textquotedblleft}they{\textquotedblright} recognition. Using these WinoNB schemas, we evaluate a number of publicly available coreference resolution systems and confirm their bias toward resolving {\textquotedblleft}they{\textquotedblright} pronouns as plural.",
}
% __index_level_0__: 23,949
@inproceedings{vijayaraghavan-vosoughi-2022-tweetspin,
    title = "{TWEETSPIN}: Fine-grained Propaganda Detection in Social Media Using Multi-View Representations",
    author = "Vijayaraghavan, Prashanth and Vosoughi, Soroush",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.251/",
    doi = "10.18653/v1/2022.naacl-main.251",
    pages = "3433--3448",
    abstract = "Recently, several studies on propaganda detection have involved document and fragment-level analyses of news articles. However, there are significant data and modeling challenges dealing with fine-grained detection of propaganda on social media. In this work, we present TWEETSPIN, a dataset containing tweets that are weakly annotated with different fine-grained propaganda techniques, and propose a neural approach to detect and categorize propaganda tweets across those fine-grained categories. These categories include specific rhetorical and psychological techniques, ranging from leveraging emotions to using logical fallacies. Our model relies on multi-view representations of the input tweet data to (a) extract different aspects of the input text including the context, entities, their relationships, and external knowledge; (b) model their mutual interplay; and (c) effectively speed up the learning process by requiring fewer training examples. Our method allows for representation enrichment leading to better detection and categorization of propaganda on social media. We verify the effectiveness of our proposed method on TWEETSPIN and further probe how the implicit relations between the views impact the performance. Our experiments show that our model is able to outperform several benchmark methods and transfer the knowledge to relatively low-resource news domains.",
}
% __index_level_0__: 23,950
@inproceedings{mireshghallah-etal-2022-useridentifier,
    title = "{U}ser{I}dentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis",
    author = "Mireshghallah, Fatemehsadat and Shrivastava, Vaishnavi and Shokouhi, Milad and Berg-Kirkpatrick, Taylor and Sim, Robert and Dimitriadis, Dimitrios",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.252/",
    doi = "10.18653/v1/2022.naacl-main.252",
    pages = "3449--3456",
    abstract = "Global models are typically trained to be as generalizable as possible. Invariance to the specific user is considered desirable since models are shared across multitudes of users. However, these models are often unable to produce personalized responses for individual users, based on their data. Contrary to widely-used personalization techniques based on few-shot and meta-learning, we propose UserIdentifier, a novel scheme for training a single shared model for all users. Our approach produces personalized responses by prepending a fixed, user-specific non-trainable string (called {\textquotedblleft}user identifier{\textquotedblright}) to each user's input text. Unlike prior work, this method doesn't need any additional model parameters, any extra rounds of personal few-shot learning, or any change made to the vocabulary. We empirically study different types of user identifiers (numeric, alphanumeric, and also randomly generated) and demonstrate that, surprisingly, randomly generated user identifiers outperform the prefix-tuning based state-of-the-art approach by up to 13{\%} on a suite of sentiment analysis datasets.",
}
% __index_level_0__: 23,951
@inproceedings{shi-etal-2022-improving,
    title = "Improving Neural Models for Radiology Report Retrieval with Lexicon-based Automated Annotation",
    author = "Shi, Luyao and Syeda-mahmood, Tanveer and Baldwin, Tyler",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.253/",
    doi = "10.18653/v1/2022.naacl-main.253",
    pages = "3457--3463",
    abstract = "Many clinical informatics tasks that are based on electronic health records (EHR) need relevant patient cohorts to be selected based on findings, symptoms and diseases. Frequently, these conditions are described in radiology reports which can be retrieved using information retrieval (IR) methods. The latest of these techniques utilize neural IR models such as BERT trained on clinical text. However, these methods still lack semantic understanding of the underlying clinical conditions as well as ruled out findings, resulting in poor precision during retrieval. In this paper we combine clinical finding detection with supervised query match learning. Specifically, we use lexicon-driven concept detection to detect relevant findings in sentences. These findings are used as queries to train a Sentence-BERT (SBERT) model using triplet loss on matched and unmatched query-sentence pairs. We show that the proposed supervised training task remarkably improves the retrieval performance of SBERT. The trained model generalizes well to unseen queries and reports from different collections.",
}
% __index_level_0__: 23,952
@inproceedings{kasai-etal-2022-transparent,
    title = "Transparent Human Evaluation for Image Captioning",
    author = "Kasai, Jungo and Sakaguchi, Keisuke and Dunagan, Lavinia and Morrison, Jacob and Le Bras, Ronan and Choi, Yejin and Smith, Noah A.",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.254/",
    doi = "10.18653/v1/2022.naacl-main.254",
    pages = "3464--3478",
    abstract = "We establish THumB, a rubric-based human evaluation protocol for image captioning models. Our scoring rubrics and their definitions are carefully developed based on machine- and human-generated captions on the MSCOCO dataset. Each caption is evaluated along two main dimensions in a tradeoff (precision and recall) as well as other aspects that measure the text quality (fluency, conciseness, and inclusive language). Our evaluations demonstrate several critical problems of the current evaluation practice. Human-generated captions show substantially higher quality than machine-generated ones, especially in coverage of salient information (i.e., recall), while most automatic metrics say the opposite. Our rubric-based results reveal that CLIPScore, a recent metric that uses image features, better correlates with human judgments than conventional text-only metrics because it is more sensitive to recall. We hope that this work will promote a more transparent evaluation protocol for image captioning and its automatic metrics.",
}
% __index_level_0__: 23,953
@inproceedings{pfeiffer-etal-2022-lifting,
    title = "Lifting the Curse of Multilinguality by Pre-training Modular Transformers",
    author = "Pfeiffer, Jonas and Goyal, Naman and Lin, Xi and Li, Xian and Cross, James and Riedel, Sebastian and Artetxe, Mikel",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.255/",
    doi = "10.18653/v1/2022.naacl-main.255",
    pages = "3479--3495",
    abstract = "Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-Mod) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.",
}
% __index_level_0__: 23,954
@inproceedings{naseem-etal-2022-docamr,
    title = "{D}oc{AMR}: Multi-Sentence {AMR} Representation and Evaluation",
    author = "Naseem, Tahira and Blodgett, Austin and Kumaravel, Sadhana and O{'}Gorman, Tim and Lee, Young-Suk and Flanigan, Jeffrey and Astudillo, Ram{\'o}n and Florian, Radu and Roukos, Salim and Schneider, Nathan",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.256/",
    doi = "10.18653/v1/2022.naacl-main.256",
    pages = "3496--3505",
    abstract = "Despite extensive research on parsing of English sentences into Abstract Meaning Representation (AMR) graphs, which are compared to gold graphs via the Smatch metric, full-document parsing into a unified graph representation lacks a well-defined representation and evaluation. Taking advantage of a super-sentential level of coreference annotation from previous work, we introduce a simple algorithm for deriving a unified graph representation, avoiding the pitfalls of information loss from over-merging and lack of coherence from under-merging. Next, we describe improvements to the Smatch metric to make it tractable for comparing document-level graphs and use it to re-evaluate the best published document-level AMR parser. We also present a pipeline approach combining the top-performing AMR parser and coreference resolution systems, providing a strong baseline for future research.",
}
% __index_level_0__: 23,955
@inproceedings{li-etal-2022-learning-transfer,
    title = "Learning to Transfer Prompts for Text Generation",
    author = "Li, Junyi and Tang, Tianyi and Nie, Jian-Yun and Wen, Ji-Rong and Zhao, Xin",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.257/",
    doi = "10.18653/v1/2022.naacl-main.257",
    pages = "3506--3518",
    abstract = "Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tuning. However, it is challenging to fine-tune PLMs in data-scarce situations. Therefore, it is non-trivial to develop a general and lightweight model that can adapt to various text generation tasks based on PLMs. To fulfill this purpose, the recent prompt-based learning offers a potential solution. In this paper, we improve this technique and propose a novel prompt-based method (PTG) for text generation in a transferable setting. First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks. To consider both task- and instance-level information, we design an adaptive attention mechanism to derive the target prompts. For each data instance, PTG learns a specific target prompt by attending to highly relevant source prompts. In extensive experiments, PTG yields competitive or better results than fine-tuning methods. We release our source prompts as an open resource, where users can add or reuse them to improve new text generation tasks for future research. Code and data are available at \url{https://github.com/RUCAIBox/Transfer-Prompts-for-Text-Generation}.",
}
% __index_level_0__: 23,956
@inproceedings{li-etal-2022-eliteplm,
    title = "{E}lite{PLM}: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models",
    author = "Li, Junyi and Tang, Tianyi and Gong, Zheng and Yang, Lixin and Yu, Zhuohao and Chen, Zhipeng and Wang, Jingyuan and Zhao, Xin and Wen, Ji-Rong",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.258/",
    doi = "10.18653/v1/2022.naacl-main.258",
    pages = "3519--3539",
    abstract = "Nowadays, pretrained language models (PLMs) have dominated the majority of NLP tasks. However, little research has been conducted on systematically evaluating the language abilities of PLMs. In this paper, we present a large-scale empirical study on general language ability evaluation of PLMs (ElitePLM). In our study, we design four evaluation dimensions, memory, comprehension, reasoning, and composition, to measure ten widely-used PLMs within five categories. Our empirical results demonstrate that: (1) PLMs with varying training objectives and strategies are good at different ability tests; (2) fine-tuning PLMs in downstream tasks is usually sensitive to the data size and distribution; (3) PLMs have excellent transferability between similar tasks. Moreover, the prediction results of PLMs in our experiments are released as an open resource for deeper and more detailed analysis of the language abilities of PLMs. This paper can guide future work in selecting, applying, and designing PLMs for specific tasks. We have made all the details of experiments publicly available at \url{https://github.com/RUCAIBox/ElitePLM}.",
}
% __index_level_0__: 23,957
@inproceedings{kasai-etal-2022-bidimensional,
    title = "Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand",
    author = "Kasai, Jungo and Sakaguchi, Keisuke and Le Bras, Ronan and Dunagan, Lavinia and Morrison, Jacob and Fabbri, Alexander and Choi, Yejin and Smith, Noah A.",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.259/",
    doi = "10.18653/v1/2022.naacl-main.259",
    pages = "3540--3557",
    abstract = "Natural language processing researchers have identified limitations of evaluation methodology for generation tasks, with new questions raised about the validity of automatic metrics and of crowdworker judgments. Meanwhile, efforts to improve generation models tend to depend on simple n-gram overlap metrics (e.g., BLEU, ROUGE). We argue that new advances on models and metrics should each more directly benefit and inform the other. We therefore propose a generalization of leaderboards, bidimensional leaderboards (Billboards), that simultaneously tracks progress in language generation models and metrics for their evaluation. Unlike conventional unidimensional leaderboards that sort submitted systems by predetermined metrics, a Billboard accepts both generators and evaluation metrics as competing entries. A Billboard automatically creates an ensemble metric that selects and linearly combines a few metrics based on a global analysis across generators. Further, metrics are ranked based on their correlation with human judgments. We release four Billboards for machine translation, summarization, and image captioning. We demonstrate that a linear ensemble of a few diverse metrics sometimes substantially outperforms existing metrics in isolation. Our mixed-effects model analysis shows that most automatic metrics, especially the reference-based ones, overrate machine over human generation, demonstrating the importance of updating metrics as generation models become stronger (and perhaps more similar to humans) in the future.",
}
% __index_level_0__: 23,958
@inproceedings{chen-etal-2022-improving,
    title = "Improving In-Context Few-Shot Learning via Self-Supervised Training",
    author = "Chen, Mingda and Du, Jingfei and Pasunuru, Ramakanth and Mihaylov, Todor and Iyer, Srini and Stoyanov, Veselin and Kozareva, Zornitsa",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.260/",
    doi = "10.18653/v1/2022.naacl-main.260",
    pages = "3558--3573",
    abstract = "Self-supervised pretraining has made few-shot learning possible for many NLP tasks. But the pretraining objectives are not typically adapted specifically for in-context few-shot learning. In this paper, we propose to use self-supervision in an intermediate training stage between pretraining and downstream few-shot usage with the goal of teaching the model to perform in-context few-shot learning. We propose and evaluate four self-supervised objectives on two benchmarks. We find that the intermediate self-supervision stage produces models that outperform strong baselines. An ablation study shows that several factors affect the downstream performance, such as the amount of training data and the diversity of the self-supervised objectives. Human-annotated cross-task supervision and self-supervision are complementary. Qualitative analysis suggests that the self-supervised-trained models are better at following task requirements.",
}
% __index_level_0__: 23,959
@inproceedings{park-etal-2022-exposing,
    title = "Exposing the Limits of Video-Text Models through Contrast Sets",
    author = "Park, Jae Sung and Shen, Sheng and Farhadi, Ali and Darrell, Trevor and Choi, Yejin and Rohrbach, Anna",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.261/",
    doi = "10.18653/v1/2022.naacl-main.261",
    pages = "3574--3586",
    abstract = "Recent video-text models can retrieve relevant videos based on text with high accuracy, but to what extent do they comprehend the semantics of the text? Can they discriminate between similar entities and actions? To answer this, we propose an evaluation framework that probes video-text models with hard negatives. We automatically build contrast sets, where true textual descriptions are manipulated in ways that change their semantics while maintaining plausibility. Specifically, we leverage a pre-trained language model and a set of heuristics to create verb and person entity focused contrast sets. We apply these in the multiple-choice video-to-text classification setting. We test the robustness of recent methods on the proposed automatic contrast sets, and compare them to additionally collected human-generated counterparts, to assess their effectiveness. We see that model performance suffers across all methods, erasing the gap between recent CLIP-based methods vs. the earlier methods.",
}
% __index_level_0__: 23,960
@inproceedings{tian-peng-2022-zero,
    title = "Zero-shot Sonnet Generation with Discourse-level Planning and Aesthetics Features",
    author = "Tian, Yufei and Peng, Nanyun",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.262/",
    doi = "10.18653/v1/2022.naacl-main.262",
    pages = "3587--3597",
    abstract = "Poetry generation, and creative language generation in general, usually suffers from the lack of large training data. In this paper, we present a novel framework to generate sonnets that does not require training on poems. We design a hierarchical framework which plans the poem sketch before decoding. Specifically, a content planning module is trained on non-poetic texts to obtain discourse-level coherence; then a rhyme module generates rhyme words and a polishing module introduces imagery and similes for aesthetic purposes. Finally, we design a constrained decoding algorithm to impose the meter-and-rhyme constraint of the generated sonnets. Automatic and human evaluation show that our multi-stage approach without training on poem corpora generates more coherent, poetic, and creative sonnets than several strong baselines.",
}
% __index_level_0__: 23,961
@inproceedings{lalor-etal-2022-benchmarking,
    title = "Benchmarking Intersectional Biases in {NLP}",
    author = "Lalor, John and Yang, Yi and Smith, Kendall and Forsgren, Nicole and Abbasi, Ahmed",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.263/",
    doi = "10.18653/v1/2022.naacl-main.263",
    pages = "3598--3609",
    abstract = "There has been a recent wave of work assessing the fairness of machine learning models in general, and more specifically, on natural language processing (NLP) models built using machine learning techniques. While much work has highlighted biases embedded in state-of-the-art language models, and more recent efforts have focused on how to debias, research assessing the fairness and performance of biased/debiased models on downstream prediction tasks has been limited. Moreover, most prior work has emphasized bias along a single dimension such as gender or race. In this work, we benchmark multiple NLP models with regard to their fairness and predictive performance across a variety of NLP tasks. In particular, we assess intersectional bias - fairness across multiple demographic dimensions. The results show that while current debiasing strategies fare well in terms of the fairness-accuracy trade-off (generally preserving predictive power in debiased models), they are unable to effectively alleviate bias in downstream tasks. Furthermore, this bias is often amplified across dimensions (i.e., intersections). We conclude by highlighting possible causes and making recommendations for future NLP debiasing research.",
}
% __index_level_0__: 23,962
@inproceedings{deshpande-etal-2022-bert,
    title = "When is {BERT} Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer",
    author = "Deshpande, Ameet and Talukdar, Partha and Narasimhan, Karthik",
    editor = "Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.264/",
    doi = "10.18653/v1/2022.naacl-main.264",
    pages = "3610--3623",
    abstract = "While recent work on multilingual language models has demonstrated their capacity for cross-lingual zero-shot transfer on downstream tasks, there is a lack of consensus in the community as to what shared properties between languages enable such transfer. Analyses involving pairs of natural languages are often inconclusive and contradictory since languages simultaneously differ in many linguistic aspects. In this paper, we perform a large-scale empirical study to isolate the effects of various linguistic properties by measuring zero-shot transfer between four diverse natural languages and their counterparts constructed by modifying aspects such as the script, word order, and syntax. Among other things, our experiments show that the absence of sub-word overlap significantly affects zero-shot transfer when languages differ in their word order, and there is a strong correlation between transfer performance and word embedding alignment between languages (e.g., $\rho_s=0.94$ on the task of NLI). Our results call for focus in multilingual models on explicitly improving word embedding alignment between languages rather than relying on its implicit emergence.",
}
% __index_level_0__: 23,963
inproceedings
brandl-etal-2022-conservative
How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.265/
Brandl, Stephanie and Cui, Ruixiang and S{\o}gaard, Anders
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3624--3630
Gender-neutral pronouns have recently been introduced in many languages a) to include non-binary people and b) to serve as a generic singular. Recent results from psycholinguistics suggest that gender-neutral pronouns (in Swedish) are not associated with human processing difficulties. This, we show, is in sharp contrast with automated processing. We show that gender-neutral pronouns in Danish, English, and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance. We argue that such conservativity in language models may limit the widespread adoption of gender-neutral pronouns and must therefore be resolved.
null
null
10.18653/v1/2022.naacl-main.265
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,964
inproceedings
khashabi-etal-2022-prompt
Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.266/
Khashabi, Daniel and Lyu, Xinxi and Min, Sewon and Qin, Lianhui and Richardson, Kyle and Welleck, Sean and Hajishirzi, Hannaneh and Khot, Tushar and Sabharwal, Ashish and Singh, Sameer and Choi, Yejin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3631--3643
Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve. In practice, we observe a {\textquotedblleft}wayward{\textquotedblright} behavior between the task solved by continuous prompts and their nearest neighbor discrete projections: we can find continuous prompts that solve a task while being projected to an arbitrary text (e.g., the definition of a different or even a contradictory task), yet remaining within a very small (2{\%}) margin of the best continuous prompt of the same size for the task. We provide intuitions behind this odd and surprising behavior, as well as extensive empirical analyses quantifying the effect of various parameters. For instance, for larger model sizes we observe higher waywardness, i.e., we can find prompts that more closely map to any arbitrary text with a smaller drop in accuracy. These findings have important implications for the difficulty of faithfully interpreting continuous prompts and for their generalization across models and tasks, providing guidance for future progress in prompting language models.
null
null
10.18653/v1/2022.naacl-main.266
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,965
inproceedings
hsu-horwood-2022-contrastive
Contrastive Representation Learning for Cross-Document Coreference Resolution of Events and Entities
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.267/
Hsu, Benjamin and Horwood, Graham
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3644--3655
Identifying related entities and events within and across documents is fundamental to natural language understanding. We present an approach to entity and event coreference resolution utilizing contrastive representation learning. Earlier state-of-the-art methods formulated this problem as binary classification and leveraged large transformers in a cross-encoder architecture to achieve their results. For a large collection of documents with a corresponding set of $n$ mentions, these earlier approaches require $n^{2}$ transformer computations, which can be computationally intensive. We show that it is possible to reduce this burden by applying contrastive learning techniques that require only $n$ transformer computations at inference time. Our method achieves state-of-the-art results on a number of key metrics on the ECB+ corpus and is competitive on others.
null
null
10.18653/v1/2022.naacl-main.267
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,966
inproceedings
cui-etal-2022-learning
Learning the Ordering of Coordinate Compounds and Elaborate Expressions in {H}mong, {L}ahu, and {C}hinese
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.268/
Cui, Chenxuan and Zhang, Katherine J. and Mortensen, David
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3656--3669
Coordinate compounds (CCs) and elaborate expressions (EEs) are coordinate constructions common in languages of East and Southeast Asia. Mortensen (2006) claims that (1) the linear ordering of EEs and CCs in Hmong, Lahu, and Chinese can be predicted via phonological hierarchies and (2) that these phonological hierarchies lack a clear phonetic rationale. These claims are significant because morphosyntax has often been seen as in a feed-forward relationship with phonology, and phonological generalizations have often been assumed to be phonetically {\textquotedblleft}natural{\textquotedblright}. We investigate whether the ordering of CCs and EEs can be learned empirically and whether computational models (classifiers and sequence-labeling models) learn unnatural hierarchies similar to those posited by Mortensen (2006). We find that decision trees and SVMs learn to predict the order of CCs/EEs on the basis of phonology, beating strong baselines for all three languages, with DTs learning hierarchies strikingly similar to those proposed by Mortensen. However, we also find that a neural sequence labeling model is able to learn the ordering of elaborate expressions in Hmong very effectively without using any phonological information. We argue that EE ordering can be learned through two independent routes: phonology and lexical distribution, presenting a more nuanced picture than previous work.
null
null
10.18653/v1/2022.naacl-main.268
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,967
inproceedings
iv-etal-2022-fruit
{FRUIT}: Faithfully Reflecting Updated Information in Text
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.269/
Logan IV, Robert L. and Passos, Alexandre and Singh, Sameer and Chang, Ming-Wei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3670--3686
Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent. While automated writing assistants could potentially ease this burden, the problem of suggesting edits grounded in external knowledge has been under-explored. In this paper, we introduce the novel generation task of *faithfully reflecting updated information in text* (FRUIT), where the goal is to update an existing article given new evidence. We release the FRUIT-WIKI dataset, a collection of over 170K distantly supervised examples produced from pairs of Wikipedia snapshots, along with our data generation pipeline and a gold evaluation set of 914 instances whose edits are guaranteed to be supported by the evidence. We provide benchmark results for popular generation systems as well as for EDIT5 {--} a T5-based approach tailored to editing that we introduce {--} which establishes the state of the art. Our analysis shows that developing models that can update articles faithfully requires new capabilities for neural generation models, and opens doors to many new applications.
null
null
10.18653/v1/2022.naacl-main.269
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,968
inproceedings
hung-etal-2022-multi2woz
{M}ulti2{WOZ}: A Robust Multilingual Dataset and Conversational Pretraining for Task-Oriented Dialog
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.270/
Hung, Chia-Chien and Lauscher, Anne and Vuli{\'c}, Ivan and Ponzetto, Simone and Glava{\v{s}}, Goran
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3687--3703
Research on (multi-domain) task-oriented dialog (TOD) has predominantly focused on the English language, primarily due to the shortage of robust TOD datasets in other languages, preventing the systematic investigation of cross-lingual transfer for this crucial NLP application area. In this work, we introduce Multi2WOZ, a new multilingual multi-domain TOD dataset, derived from the well-established English dataset MultiWOZ, that spans four typologically diverse languages: Chinese, German, Arabic, and Russian. In contrast to concurrent efforts, Multi2WOZ contains gold-standard dialogs in target languages that are directly comparable with development and test portions of the English dataset, enabling reliable and comparative estimates of cross-lingual transfer performance for TOD. We then introduce a new framework for multilingual conversational specialization of pretrained language models (PrLMs) that aims to facilitate cross-lingual transfer for arbitrary downstream TOD tasks. Using such conversational PrLMs specialized for concrete target languages, we systematically benchmark a number of zero-shot and few-shot cross-lingual transfer approaches on two standard TOD tasks: Dialog State Tracking and Response Retrieval. Our experiments show that, in most setups, the best performance entails the combination of (i) conversational specialization in the target language and (ii) few-shot transfer for the concrete TOD task. Most importantly, we show that our conversational specialization in the target language allows for an exceptionally sample-efficient few-shot transfer for downstream TOD tasks.
null
null
10.18653/v1/2022.naacl-main.270
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,969
inproceedings
sun-etal-2022-chapterbreak
{C}hapter{B}reak: A Challenge Dataset for Long-Range Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.271/
Sun, Simeng and Thai, Katherine and Iyyer, Mohit
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3704--3714
While numerous architectures for long-range language models (LRLMs) have recently been proposed, a meaningful evaluation of their discourse-level language understanding capabilities has not yet followed. To this end, we introduce ChapterBreak, a challenge dataset that provides an LRLM with a long segment from a narrative that ends at a chapter boundary and asks it to distinguish the beginning of the ground-truth next chapter from a set of negative segments sampled from the same narrative. A fine-grained human annotation reveals that our dataset contains many complex types of chapter transitions (e.g., parallel narratives, cliffhanger endings) that require processing global context to comprehend. Experiments on ChapterBreak show that existing LRLMs fail to effectively leverage long-range context, substantially underperforming a segment-level model trained directly for this task. We publicly release our ChapterBreak dataset to spur more principled future research into LRLMs.
null
null
10.18653/v1/2022.naacl-main.271
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,970
inproceedings
santhanam-etal-2022-colbertv2
{C}ol{BERT}v2: Effective and Efficient Retrieval via Lightweight Late Interaction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.272/
Santhanam, Keshav and Khattab, Omar and Saad-Falcon, Jon and Potts, Christopher and Zaharia, Matei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3715--3734
Neural information retrieval (IR) has greatly advanced search and other knowledge-intensive language tasks. While many neural IR methods encode queries and documents into single-vector representations, late interaction models produce multi-vector representations at the granularity of each token and decompose relevance modeling into scalable token-level computations. This decomposition has been shown to make late interaction more effective, but it inflates the space footprint of these models by an order of magnitude. In this work, we introduce ColBERTv2, a retriever that couples an aggressive residual compression mechanism with a denoised supervision strategy to simultaneously improve the quality and space footprint of late interaction. We evaluate ColBERTv2 across a wide range of benchmarks, establishing state-of-the-art quality within and outside the training domain while reducing the space footprint of late interaction models by 6{--}10x.
null
null
10.18653/v1/2022.naacl-main.272
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,971
inproceedings
bartelds-wieling-2022-quantifying
Quantifying Language Variation Acoustically with Few Resources
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.273/
Bartelds, Martijn and Wieling, Martijn
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3735--3741
Deep acoustic models represent linguistic information based on massive amounts of data. Unfortunately, for regional languages and dialects such resources are mostly not available. However, deep acoustic models might have learned linguistic information that transfers to low-resource languages. In this study, we evaluate whether this is the case through the task of distinguishing low-resource (Dutch) regional varieties. By extracting embeddings from the hidden layers of various wav2vec 2.0 models (including new models which are pre-trained and/or fine-tuned on Dutch) and using dynamic time warping, we compute pairwise pronunciation differences averaged over 10 words for over 100 individual dialects from four (regional) languages. We then cluster the resulting difference matrix into four groups and compare these to a gold standard and to a partitioning based on comparing phonetic transcriptions. Our results show that acoustic models outperform the (traditional) transcription-based approach without requiring phonetic transcriptions, with the best performance achieved by the multilingual XLSR-53 model fine-tuned on Dutch. On the basis of only six seconds of speech, the resulting clustering closely matches the gold standard.
null
null
10.18653/v1/2022.naacl-main.273
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,972
inproceedings
moosavi-etal-2022-adaptable
Adaptable Adapters
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.274/
Moosavi, Nafise and Delfosse, Quentin and Kersting, Kristian and Gurevych, Iryna
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3742--3753
State-of-the-art pretrained NLP models contain from hundreds of millions to trillions of parameters. Adapters provide a parameter-efficient alternative to full finetuning, in which only lightweight neural network layers on top of the pretrained weights are finetuned. Adapter layers are initialized randomly. However, existing work uses the same adapter architecture{---}i.e., the same adapter layer on top of each layer of the pretrained model{---}for every dataset, regardless of the properties of the dataset or the amount of available training data. In this work, we introduce adaptable adapters, which contain (1) activation functions that are learned separately for different layers and different input data, and (2) a learnable switch that selects and uses only the beneficial adapter layers. We show that adaptable adapters achieve on-par performance with the standard adapter architecture while using a considerably smaller number of adapter layers. In addition, we show that the adapter architecture selected by adaptable adapters transfers well across different data settings and similar tasks. We propose using adaptable adapters to design efficient and effective adapter architectures. The resulting adapters (a) contain about 50{\%} of the learnable parameters of the standard adapter, and are therefore more efficient at training and inference and require less storage space, and (b) achieve considerably higher performance in low-data settings.
null
null
10.18653/v1/2022.naacl-main.274
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,973
inproceedings
bartolo-etal-2022-models
Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.275/
Bartolo, Max and Thrush, Tristan and Riedel, Sebastian and Stenetorp, Pontus and Jia, Robin and Kiela, Douwe
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3754--3767
In Dynamic Adversarial Data Collection (DADC), human annotators are tasked with finding examples that models struggle to predict correctly. Models trained on DADC-collected training data have been shown to be more robust in adversarial and out-of-domain settings, and are considerably harder for humans to fool. However, DADC is more time-consuming than traditional data collection and thus more costly per annotated example. In this work, we examine whether we can maintain the advantages of DADC, without incurring the additional cost. To that end, we introduce Generative Annotation Assistants (GAAs), generator-in-the-loop models that provide real-time suggestions that annotators can either approve, modify, or reject entirely. We collect training datasets in twenty experimental settings and perform a detailed analysis of this approach for the task of extractive question answering (QA) for both standard and adversarial data collection. We demonstrate that GAAs provide significant efficiency benefits with over a 30{\%} annotation speed-up, while leading to over a 5x improvement in model fooling rates. In addition, we find that using GAA-assisted training data leads to higher downstream model performance on a variety of question answering tasks over adversarial data collection.
null
null
10.18653/v1/2022.naacl-main.275
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,974
inproceedings
cao-etal-2022-gmn
{GMN}: Generative Multi-modal Network for Practical Document Information Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.276/
Cao, Haoyu and Ma, Jiefeng and Guo, Antai and Hu, Yiqing and Liu, Hao and Jiang, Deqiang and Liu, Yinsong and Ren, Bo
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3768--3778
Document Information Extraction (DIE) has attracted increasing attention due to its various advanced applications in the real world. Although recent literature has already achieved competitive results, these approaches usually fail when dealing with complex documents with noisy OCR results or mutative layouts. To address these problems, this paper proposes the Generative Multi-modal Network (GMN), a robust multi-modal generation method for real-world scenarios that requires no predefined label categories. With its carefully designed spatial encoder and modal-aware mask module, GMN can deal with complex documents that are hard to serialize into sequential order. Moreover, GMN tolerates errors in OCR results and requires no character-level annotation, which is vital because fine-grained annotation of numerous documents is laborious and even requires annotators with specialized domain knowledge. Extensive experiments show that GMN achieves new state-of-the-art performance on several public DIE datasets and surpasses other methods by a large margin, especially in realistic scenes.
null
null
10.18653/v1/2022.naacl-main.276
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,975
inproceedings
shao-etal-2022-one
One Reference Is Not Enough: Diverse Distillation with Reference Selection for Non-Autoregressive Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.277/
Shao, Chenze and Wu, Xuanfu and Feng, Yang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3779--3791
Non-autoregressive neural machine translation (NAT) suffers from the multi-modality problem: the source sentence may have multiple correct translations, but the loss function is calculated only according to the reference sentence. Sequence-level knowledge distillation makes the target more deterministic by replacing the target with the output from an autoregressive model. However, the multi-modality problem in the distilled dataset is still nonnegligible. Furthermore, learning from a specific teacher limits the upper bound of the model capability, restricting the potential of NAT models. In this paper, we argue that one reference is not enough and propose diverse distillation with reference selection (DDRS) for NAT. Specifically, we first propose a method called SeedDiv for diverse machine translation, which enables us to generate a dataset containing multiple high-quality reference translations for each source sentence. During the training, we compare the NAT output with all references and select the one that best fits the NAT output to train the model. Experiments on widely-used machine translation benchmarks demonstrate the effectiveness of DDRS, which achieves 29.82 BLEU with only one decoding pass on WMT14 En-De, improving the state-of-the-art performance for NAT by over 1 BLEU.
null
null
10.18653/v1/2022.naacl-main.277
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,976
inproceedings
chen-etal-2022-rationalization
Can Rationalization Improve Robustness?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.278/
Chen, Howard and He, Jacqueline and Narasimhan, Karthik and Chen, Danqi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3792--3805
A growing line of work has investigated the development of neural NLP models that can produce rationales{--}subsets of the input that can explain their predictions. In this paper, we ask whether such rationale models can provide robustness to adversarial attacks in addition to their interpretable nature. Since these models need to first generate rationales ({\textquotedblleft}rationalizer{\textquotedblright}) before making predictions ({\textquotedblleft}predictor{\textquotedblright}), they have the potential to ignore noise or adversarially added text by simply masking it out of the generated rationale. To this end, we systematically generate various types of {\textquoteleft}AddText' attacks for both token- and sentence-level rationalization tasks and perform an extensive empirical evaluation of state-of-the-art rationale models across five different tasks. Our experiments reveal that rationale models show promise in improving robustness to AddText attacks, but struggle in certain scenarios{--}for example, when the rationalizer is sensitive to position bias or to the lexical choices of the attack text. Further, leveraging human rationales as supervision does not always translate to better performance. Our study is a first step towards exploring the interplay between interpretability and robustness in the rationalize-then-predict framework.
null
null
10.18653/v1/2022.naacl-main.278
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,977
inproceedings
ma-etal-2022-effectiveness
On the Effectiveness of Sentence Encoding for Intent Detection Meta-Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.279/
Ma, Tingting and Wu, Qianhui and Yu, Zhiwei and Zhao, Tiejun and Lin, Chin-Yew
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3806--3818
Recent studies on few-shot intent detection have attempted to formulate the task as a meta-learning problem, where a meta-learning model is trained with a certain capability to quickly adapt to newly specified few-shot tasks with potentially unseen intent categories. Prototypical networks have been commonly used in this setting, with the hope that good prototypical representations could be learned to capture the semantic similarity between the query and a few labeled instances. This intuition naturally raises the question of whether a good sentence representation scheme could suffice for the task without further domain-specific adaptation. In this paper, we conduct empirical studies on a number of general-purpose sentence embedding schemes, showing that good sentence embeddings without any fine-tuning on intent detection data can produce non-trivially strong performance. Inspired by the results of our qualitative analysis, we propose a frustratingly easy modification, which leads to consistent improvements over all sentence encoding schemes, including those from the state-of-the-art prototypical network variants with task-specific fine-tuning.
null
null
10.18653/v1/2022.naacl-main.279
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,978
inproceedings
berger-etal-2022-computational
A Computational Acquisition Model for Multimodal Word Categorization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.280/
Berger, Uri and Stanovsky, Gabriel and Abend, Omri and Frermann, Lea
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3819--3835
Recent advances in self-supervised modeling of text and images open new opportunities for computational models of child language acquisition, which is believed to rely heavily on cross-modal signals. However, prior studies have been limited by their reliance on vision models trained on large image datasets annotated with a pre-defined set of depicted object categories. This (a) is not faithful to the information children receive and (b) prohibits the evaluation of such models with respect to category learning tasks, due to the pre-imposed category structure. We address this gap and present a cognitively-inspired, multimodal acquisition model, trained from image-caption pairs on naturalistic data using cross-modal self-supervision. We show that the model learns word categories and object recognition abilities, and presents trends reminiscent of those reported in the developmental literature.
null
null
10.18653/v1/2022.naacl-main.280
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,979
inproceedings
raina-gales-2022-residue
Residue-Based Natural Language Adversarial Attack Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.281/
Raina, Vyas and Gales, Mark
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3836--3848
Deep learning based systems are susceptible to adversarial attacks, where a small, imperceptible change at the input alters the model prediction. However, to date the majority of approaches to detect these attacks have been designed for image processing systems. Many popular image adversarial detection approaches are able to identify adversarial examples from embedding feature spaces, whilst in the NLP domain existing state-of-the-art detection approaches solely focus on input text features, without consideration of model embedding spaces. This work examines what differences result when porting these image-designed strategies to Natural Language Processing (NLP) tasks {--} these detectors are found not to port over well. This is expected, as NLP systems have a very different form of input: discrete and sequential in nature, rather than the continuous, fixed-size inputs of images. As an equivalent model-focused NLP detection approach, this work proposes a simple sentence-embedding {\textquotedblleft}residue{\textquotedblright}-based detector to identify adversarial examples. On many tasks, it outperforms ported image-domain detectors and recent state-of-the-art NLP-specific detectors.
null
null
10.18653/v1/2022.naacl-main.281
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,980
inproceedings
lee-etal-2022-really
Does it Really Generalize Well on Unseen Data? Systematic Evaluation of Relational Triple Extraction Methods
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.282/
Lee, Juhyuk and Lee, Min-Joong and Yang, June Yong and Yang, Eunho
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3849--3858
The ability to extract entities and their relations from unstructured text is essential for the automated maintenance of large-scale knowledge graphs. To keep a knowledge graph up-to-date, an extractor needs not only the ability to recall the triples it encountered during training, but also the ability to extract new triples from contexts it has never seen before. In this paper, we show that although existing extraction models are able to easily memorize and recall already seen triples, they cannot generalize effectively to unseen triples. This alarming observation was previously unknown due to the composition of the test sets of the go-to benchmark datasets, which turn out to contain only 2{\%} unseen data, rendering them incapable of measuring generalization performance. To measure the generalization performance separately from the memorization performance, we emphasize unseen data by rearranging datasets, sifting out training instances, or augmenting test sets. In addition, we present a simple yet effective augmentation technique to promote the generalization of existing extraction models, and experimentally confirm that the proposed method can significantly increase the generalization performance of existing models.
null
null
10.18653/v1/2022.naacl-main.282
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,981
inproceedings
fang-etal-2022-spoken
From spoken dialogue to formal summary: An utterance rewriting for dialogue summarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.283/
Fang, Yue and Zhang, Hainan and Chen, Hongshen and Ding, Zhuoye and Long, Bo and Lan, Yanyan and Zhou, Yanquan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3859--3869
Due to dialogue characteristics such as unstructured contexts and multiple parties with first-person perspectives, many successful text summarization approaches fail when dealing with dialogue summarization. In the dialogue summarization task, the input dialogue is usually in spoken style with ellipsis and co-references, while the output summaries are more formal and complete. Therefore, a dialogue summarization model should be able to restore the elided content and co-reference information and then produce a suitable summary accordingly. However, current state-of-the-art models pay more attention to the topic or structure of the summary than to the consistency of the dialogue summary with its input dialogue context, and may therefore suffer from personal and logical inconsistency problems. In this paper, we propose a new model, named ReWriteSum, to tackle this problem. First, an utterance rewriter is applied to restore the elided content of the dialogue, producing rewritten utterances. Then, a co-reference data augmentation mechanism replaces each referential person name with its specific name to enhance the personal information. Finally, the rewritten utterances and the co-reference replacement data are used in a standard BART model. Experimental results on both the SAMSum and DialSum datasets show that our ReWriteSum significantly outperforms baseline models in terms of both metric-based and human evaluations. Further analysis on multi-speaker settings also shows that ReWriteSum obtains relatively larger improvements with more speakers, validating the correctness and properties of ReWriteSum.
null
null
10.18653/v1/2022.naacl-main.283
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,982
inproceedings
nishikawa-etal-2022-ease
{EASE}: Entity-Aware Contrastive Learning of Sentence Embedding
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.284/
Nishikawa, Sosuke and Ri, Ryokan and Yamada, Ikuya and Tsuruoka, Yoshimasa and Echizen, Isao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3870--3885
We present EASE, a novel method for learning sentence embeddings via contrastive learning between sentences and their related entities. The advantage of using entity supervision is twofold: (1) entities have been shown to be a strong indicator of text semantics and thus should provide rich training signals for sentence embeddings; (2) entities are defined independently of languages and thus offer useful cross-lingual alignment supervision. We evaluate EASE against other unsupervised models both in monolingual and multilingual settings. We show that EASE exhibits competitive or better performance in English semantic textual similarity (STS) and short text clustering (STC) tasks and it significantly outperforms baseline methods in multilingual settings on a variety of tasks. Our source code, pre-trained models, and newly constructed multilingual STC dataset are available at \url{https://github.com/studio-ousia/ease}.
null
null
10.18653/v1/2022.naacl-main.284
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,983
inproceedings
zhang-etal-2022-neural
Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.285/
Zhang, Zihan and Fang, Meng and Chen, Ling and Namazi Rad, Mohammad Reza
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3886--3893
Recent work incorporates pre-trained word embeddings such as BERT embeddings into Neural Topic Models (NTMs), generating highly coherent topics. However, with high-quality contextualized document representations, do we really need sophisticated neural models to obtain coherent and interpretable topics? In this paper, we conduct thorough experiments showing that directly clustering high-quality sentence embeddings with an appropriate word selection method can generate more coherent and diverse topics than NTMs, while also achieving higher efficiency and simplicity.
null
null
10.18653/v1/2022.naacl-main.285
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,984
inproceedings
mao-etal-2022-dynamic
Dynamic Multistep Reasoning based on Video Scene Graph for Video Question Answering
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.286/
Mao, Jianguo and Jiang, Wenbin and Wang, Xiangdong and Feng, Zhifan and Lyu, Yajuan and Liu, Hong and Zhu, Yong
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3894--3904
Existing video question answering (video QA) models lack the capacity for deep video understanding and flexible multistep reasoning. We propose for video QA a novel model that performs dynamic multistep reasoning between questions and videos. It creates a video semantic representation based on the video scene graph, which is composed of the semantic elements of the video and the semantic relations among these elements. It then performs multistep reasoning between the representations of the question and the video for better answer decisions, and dynamically integrates the reasoning results. Experiments show the significant advantage of the proposed model over previous methods in accuracy and interpretability. Against the existing state-of-the-art model, the proposed model improves by more than $4\%/3.1\%/2\%$ on the three widely used video QA datasets, MSRVTT-QA, MSRVTT multi-choice, and TGIF-QA, and displays better interpretability by backtracing along the attention mechanisms to the video scene graphs.
null
null
10.18653/v1/2022.naacl-main.286
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,985
inproceedings
honovich-etal-2022-true-evaluating
{TRUE}: Re-evaluating Factual Consistency Evaluation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.287/
Honovich, Or and Aharoni, Roee and Herzig, Jonathan and Taitelbaum, Hagai and Kukliansy, Doron and Cohen, Vered and Scialom, Thomas and Szpektor, Idan and Hassidim, Avinatan and Matias, Yossi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3905--3920
Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in silos for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leaves the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive survey and assessment of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend those methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better evaluation methods.
null
null
10.18653/v1/2022.naacl-main.287
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,986
inproceedings
qin-etal-2022-knowledge
Knowledge Inheritance for Pre-trained Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.288/
Qin, Yujia and Lin, Yankai and Yi, Jing and Zhang, Jiajie and Han, Xu and Zhang, Zhengyan and Su, Yusheng and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3921--3937
Recent explorations of large-scale pre-trained language models (PLMs) have revealed the power of PLMs with huge amounts of parameters, setting off a wave of training ever-larger PLMs. However, training a large-scale PLM requires tremendous computational resources, which may be practically unaffordable. In addition, existing large-scale PLMs are mainly trained from scratch individually, ignoring the fact that many well-trained PLMs are already available. To this end, we explore the question of how existing PLMs can benefit the training of future large-scale PLMs. Specifically, we introduce a pre-training framework named {\textquotedblleft}knowledge inheritance{\textquotedblright} (KI) and explore how knowledge distillation can serve as auxiliary supervision during pre-training to efficiently learn larger PLMs. Experimental results demonstrate the superiority of KI in training efficiency. We also conduct empirical analyses to explore the effects of teacher PLMs' pre-training settings, including model architecture, pre-training data, etc. Finally, we show that KI can be applied to domain adaptation and knowledge transfer.
null
null
10.18653/v1/2022.naacl-main.288
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,987
inproceedings
gao-etal-2022-bi
{B}i-{S}im{C}ut: A Simple Strategy for Boosting Neural Machine Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.289/
Gao, Pengzhi and He, Zhongjun and Wu, Hua and Wang, Haifeng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3938--3948
We introduce Bi-SimCut: a simple but effective training strategy to boost neural machine translation (NMT) performance. It consists of two procedures: bidirectional pretraining and unidirectional finetuning. Both procedures utilize SimCut, a simple regularization method that forces consistency between the output distributions of the original and the cutoff sentence pairs. Without leveraging extra datasets via back-translation or integrating large-scale pretrained models, Bi-SimCut achieves strong translation performance across five translation benchmarks (data sizes range from 160K to 20.2M): BLEU scores of 31.16 for $\texttt{en}\rightarrow\texttt{de}$ and 38.37 for $\texttt{de}\rightarrow\texttt{en}$ on the IWSLT14 dataset, 30.78 for $\texttt{en}\rightarrow\texttt{de}$ and 35.15 for $\texttt{de}\rightarrow\texttt{en}$ on the WMT14 dataset, and 27.17 for $\texttt{zh}\rightarrow\texttt{en}$ on the WMT17 dataset. SimCut is not a new method, but a version of Cutoff (Shen et al., 2020) simplified and adapted for NMT, and it can be considered a perturbation-based method. Given the universality and simplicity of Bi-SimCut and SimCut, we believe they can serve as strong baselines for future NMT research.
null
null
10.18653/v1/2022.naacl-main.289
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,988
inproceedings
su-etal-2022-transferability
On Transferability of Prompt Tuning for Natural Language Processing
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.290/
Su, Yusheng and Wang, Xiaozhi and Qin, Yujia and Chan, Chi-Min and Lin, Yankai and Wang, Huadong and Wen, Kaiyue and Liu, Zhiyuan and Li, Peng and Li, Juanzi and Hou, Lei and Sun, Maosong and Zhou, Jie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3949--3969
Prompt tuning (PT) is a promising parameter-efficient method for utilizing extremely large pre-trained language models (PLMs), as it can achieve performance comparable to full-parameter fine-tuning by tuning only a few soft prompts. However, PT requires much more training time than fine-tuning. Intuitively, knowledge transfer can help to improve efficiency. To explore whether we can improve PT via prompt transfer, we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs in this work. We find that (1) in the zero-shot setting, trained soft prompts can effectively transfer to similar tasks on the same PLM and also to other PLMs with a cross-model projector trained on similar tasks; (2) when used as initialization, trained soft prompts of similar tasks and projected prompts of other PLMs can significantly accelerate training and also improve the performance of PT. Moreover, to explore what determines prompt transferability, we investigate various transferability indicators and find that the overlapping rate of activated neurons strongly reflects transferability, which suggests that how the prompts stimulate PLMs is essential. Our findings show that prompt transfer is promising for improving PT, and that further research should focus more on how prompts stimulate PLMs. The source code can be obtained from \url{https://github.com/thunlp/Prompt-Transferability}.
null
null
10.18653/v1/2022.naacl-main.290
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,989
inproceedings
tong-etal-2022-docee
{D}oc{EE}: A Large-Scale and Fine-grained Benchmark for Document-level Event Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.291/
Tong, MeiHan and Xu, Bin and Wang, Shuai and Han, Meihuan and Cao, Yixin and Zhu, Jiangqi and Chen, Siyu and Hou, Lei and Li, Juanzi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3970--3982
Event extraction aims to identify an event and then extract the arguments participating in the event. Despite the great success in sentence-level event extraction, events are more naturally presented in the form of documents, with event arguments scattered across multiple sentences. However, a major barrier to advancing document-level event extraction has been the lack of large-scale, practical training and evaluation datasets. In this paper, we present DocEE, a new document-level event extraction dataset including 27,000+ events and 180,000+ arguments. We highlight three features: large-scale manual annotations, fine-grained argument types, and application-oriented settings. Experiments show that there is still a big gap between state-of-the-art models and human beings (41{\%} vs. 85{\%} in F1 score), indicating that document-level event extraction remains an open problem. DocEE is now available at \url{https://github.com/tongmeihan1995/DocEE.git}.
null
null
10.18653/v1/2022.naacl-main.291
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,990
inproceedings
dutta-chowdhury-etal-2022-towards
Towards Debiasing Translation Artifacts
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.292/
Dutta Chowdhury, Koel and Jalota, Rricha and Espa{\~n}a-Bonet, Cristina and Genabith, Josef
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3983--3991
Cross-lingual natural language processing relies on translation, either by humans or machines, at different levels, from translating training data to translating test sets. However, compared to original texts in the same language, translations possess distinct qualities referred to as translationese. Previous research has shown that these translation artifacts influence the performance of a variety of cross-lingual tasks. In this work, we propose a novel approach to reducing translationese by extending an established bias-removal technique. We use the Iterative Null-space Projection (INLP) algorithm, and show, by measuring classification accuracy before and after debiasing, that translationese is reduced at both the sentence and the word level. We evaluate the utility of debiasing translationese on a natural language inference (NLI) task, and show that by reducing this bias, NLI accuracy improves. To the best of our knowledge, this is the first study to debias translationese as represented in latent embedding space.
null
null
10.18653/v1/2022.naacl-main.292
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,991
inproceedings
minixhofer-etal-2022-wechsel
{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.293/
Minixhofer, Benjamin and Paischer, Fabian and Rekabsaz, Navid
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3992--4006
Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.
null
null
10.18653/v1/2022.naacl-main.293
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,992
inproceedings
wang-etal-2022-new
A New Concept of Knowledge based Question Answering ({KBQA}) System for Multi-hop Reasoning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.294/
Wang, Yu and Srinivasan, Vijay and Jin, Hongxia
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4007--4017
Knowledge based question answering (KBQA) is a complex task for natural language understanding. Many KBQA approaches have been proposed in recent years, and most of them are trained on labeled reasoning paths. This hinders the system's performance, as many correct reasoning paths are not labeled as ground truth and thus cannot be learned. In this paper, we introduce a new concept of KBQA system which can leverage the information of multiple reasoning paths and only requires labeled answers as supervision. We name it the \textbf{M}ultiple \textbf{R}easoning \textbf{P}aths KB\textbf{QA} System (MRP-QA). We conduct experiments on several benchmark datasets containing both single-hop simple questions and multi-hop complex questions, including WebQuestionSP (WQSP), ComplexWebQuestion-1.1 (CWQ), and PathQuestion-Large (PQL), and demonstrate strong performance.
null
null
10.18653/v1/2022.naacl-main.294
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,993
inproceedings
agarwal-etal-2022-bilingual
Bilingual Tabular Inference: A Case Study on {I}ndic Languages
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.295/
Agarwal, Chaitanya and Gupta, Vivek and Kunchukuttan, Anoop and Shrivastava, Manish
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4018--4037
Existing research on Tabular Natural Language Inference (TNLI) exclusively examines the task in a monolingual setting where the tabular premise and hypothesis are in the same language. However, due to the uneven distribution of text resources on the web across languages, it is common to have the tabular premise in a high resource language and the hypothesis in a low resource language. As a result, we present the challenging task of bilingual Tabular Natural Language Inference (bTNLI), in which the tabular premise and a hypothesis over it are in two separate languages. We construct EI-InfoTabS: an English-Indic bTNLI dataset by translating the textual hypotheses of the English TNLI dataset InfoTabS into eleven major Indian languages. We thoroughly investigate how pre-trained multilingual models learn and perform on EI-InfoTabS. Our study shows that the performance on bTNLI can be close to its monolingual counterpart, with translate-train, translate-test and unified-train being strongly competitive baselines.
null
null
10.18653/v1/2022.naacl-main.295
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,994
inproceedings
yuan-etal-2022-generative
Generative Biomedical Entity Linking via Knowledge Base-Guided Pre-training and Synonyms-Aware Fine-tuning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.296/
Yuan, Hongyi and Yuan, Zheng and Yu, Sheng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4038--4048
Entities lie at the heart of biomedical natural language understanding, and the biomedical entity linking (EL) task remains challenging due to fine-grained and diversiform concept names. Generative methods achieve remarkable performance in general-domain EL with less memory usage while requiring expensive pre-training. Previous biomedical EL methods leverage synonyms from knowledge bases (KBs), which are not trivial to inject into a generative method. In this work, we use a generative approach to model biomedical EL and propose to inject synonym knowledge into it. We propose KB-guided pre-training by constructing synthetic samples with synonyms and definitions from the KB and requiring the model to recover concept names. We also propose synonyms-aware fine-tuning to select concept names for training, and propose a decoder prompt and a multi-synonym constrained prefix tree for inference. Our method achieves state-of-the-art results on several biomedical EL tasks without candidate selection, which demonstrates the effectiveness of the proposed pre-training and fine-tuning strategies. The source code is available at \url{https://github.com/Yuanhy1997/GenBioEL}.
null
null
10.18653/v1/2022.naacl-main.296
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,995
inproceedings
wu-etal-2022-robust
Robust Self-Augmentation for Named Entity Recognition with Meta Reweighting
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.297/
Wu, Linzhi and Xie, Pengjun and Zhou, Jie and Zhang, Meishan and Chunping, Ma and Xu, Guangwei and Zhang, Min
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4049--4060
Self-augmentation has received increasing research interest recently as a way to improve named entity recognition (NER) performance in low-resource scenarios. Token substitution and mixup are two feasible heterogeneous self-augmentation techniques for NER that can achieve effective performance with certain specialized efforts. Notably, self-augmentation may introduce potentially noisy augmented data. Prior research has mainly resorted to heuristic rule-based constraints to reduce the noise for specific self-augmentation methods individually. In this paper, we revisit these two typical self-augmentation methods for NER, and propose a unified meta-reweighting strategy for them to achieve a natural integration. Our method is easily extensible, imposing little effort on a specific self-augmentation method. Experiments on different Chinese and English NER benchmarks show that our token substitution and mixup methods, as well as their integration, achieve effective performance improvements. Based on the meta-reweighting mechanism, we can enhance the advantages of the self-augmentation techniques without much extra effort.
null
null
10.18653/v1/2022.naacl-main.297
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,996
inproceedings
eskander-etal-2022-unsupervised
Unsupervised Stem-based Cross-lingual Part-of-Speech Tagging for Morphologically Rich Low-Resource Languages
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.298/
Eskander, Ramy and Lowry, Cass and Khandagale, Sujay and Klavans, Judith and Polinsky, Maria and Muresan, Smaranda
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4061--4072
Unsupervised cross-lingual projection for part-of-speech (POS) tagging relies on the use of parallel data to project POS tags from a source language for which a POS tagger is available onto a target language across word-level alignments. The projected tags then form the basis for learning a POS model for the target language. However, languages with rich morphology often yield sparse word alignments because words corresponding to the same citation form do not align well. We hypothesize that for morphologically complex languages, it is more efficient to use the stem rather than the word as the core unit of abstraction. Our contributions are: 1) we propose an unsupervised stem-based cross-lingual approach for POS tagging for low-resource languages of rich morphology; 2) we further investigate morpheme-level alignment and projection; and 3) we examine whether the use of linguistic priors for morphological segmentation improves POS tagging. We conduct experiments using six source languages and eight morphologically complex target languages of diverse typologies. Our results show that the stem-based approach improves the POS models for all the target languages, with an average relative error reduction of 10.3{\%} in accuracy per target language, and outperforms the word-based approach, which operates on three times more data, for about two-thirds of the language pairs we consider. Moreover, we show that morpheme-level alignment and projection and the use of linguistic priors for morphological segmentation further improve POS tagging.
null
null
10.18653/v1/2022.naacl-main.298
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,997
inproceedings
shen-etal-2022-optimising
Optimising Equal Opportunity Fairness in Model Training
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.299/
Shen, Aili and Han, Xudong and Cohn, Trevor and Baldwin, Timothy and Frermann, Lea
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4073--4084
Real-world datasets often encode stereotypes and societal biases. Such biases can be implicitly captured by trained models, leading to biased predictions and exacerbating existing societal preconceptions. Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias. However, a disconnect between fairness criteria and training objectives makes it difficult to reason theoretically about the effectiveness of different techniques. In this work, we propose two novel training objectives which directly optimise for the widely-used criterion of \textit{equal opportunity}, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
null
null
10.18653/v1/2022.naacl-main.299
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,998
inproceedings
ren-zhu-2022-leaner
Leaner and Faster: Two-Stage Model Compression for Lightweight Text-Image Retrieval
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.300/
Ren, Siyu and Zhu, Kenny
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4085--4090
Current text-image retrieval approaches (e.g., CLIP) typically adopt a dual-encoder architecture using pre-trained vision-language representations. However, these models still pose non-trivial memory requirements and substantial incremental indexing time, which makes them less practical on mobile devices. In this paper, we present an effective two-stage framework to compress a large pre-trained dual-encoder for lightweight text-image retrieval. The resulting model is smaller (39{\%} of the original), faster (1.6x/2.9x for processing image/text respectively), yet performs on par with or better than the original full model on the Flickr30K and MSCOCO benchmarks. We also open-source an accompanying realistic mobile image search application.
null
null
10.18653/v1/2022.naacl-main.300
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,999
inproceedings
you-etal-2022-joint
Joint Learning-based Heterogeneous Graph Attention Network for Timeline Summarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.301/
You, Jingyi and Li, Dongyuan and Kamigaito, Hidetaka and Funakoshi, Kotaro and Okumura, Manabu
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4091--4104
Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates, and adopted pre-defined unlearnable representations for them. They also considered date selection and event detection as two independent tasks, which makes it impossible to integrate their advantages and obtain a globally optimal summary. In this paper, we present a \textit{joint learning-based heterogeneous graph attention network for TLS} (HeterTls), in which date selection and event detection are combined into a unified framework to improve the extraction accuracy and remove redundant sentences simultaneously. Our heterogeneous graph involves multiple types of nodes, the representations of which are iteratively learned across the heterogeneous graph attention layer. We evaluated our model on four datasets, and found that it significantly outperformed the current state-of-the-art baselines with regard to ROUGE scores and date selection metrics.
null
null
10.18653/v1/2022.naacl-main.301
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,000
inproceedings
zeng-gao-2022-early
{E}arly Rumor Detection Using Neural {H}awkes Process with a New Benchmark Dataset
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.302/
Zeng, Fengzhu and Gao, Wei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4105--4117
Little attention has been paid to EArly Rumor Detection (EARD), and EARD performance has been evaluated inappropriately on a few datasets where the actual early-stage information is largely missing. To reverse this situation, we construct BEARD, a new Benchmark dataset for EARD, based on claims from fact-checking websites, trying to gather as many early relevant posts as possible. We also propose HEARD, a novel model based on the neural Hawkes process for EARD, which can guide a generic rumor detection model to make timely, accurate and stable predictions. Experiments show that HEARD achieves effective EARD performance on two commonly used general rumor detection datasets and our BEARD dataset.
null
null
10.18653/v1/2022.naacl-main.302
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,001
inproceedings
kim-etal-2022-emp
Emp-{RFT}: Empathetic Response Generation via Recognizing Feature Transitions between Utterances
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.303/
Kim, Wongyu and Ahn, Youbin and Kim, Donghyun and Lee, Kyong-Ho
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4118--4128
Each utterance in multi-turn empathetic dialogues has features such as emotion, keywords, and utterance-level meaning. Feature transitions between utterances occur naturally. However, existing approaches fail to perceive these transitions because they extract features for the context at a coarse-grained level. To solve the above issue, we propose a novel approach that recognizes feature transitions between utterances, which helps understand the dialogue flow and better grasp the features of the utterances that need attention. Also, we introduce a response generation strategy to help focus on emotion and keywords related to appropriate features when generating responses. Experimental results show that our approach outperforms baselines and, especially, achieves significant improvements on multi-turn dialogues.
null
null
10.18653/v1/2022.naacl-main.303
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,002
inproceedings
zhang-etal-2022-kcd
{KCD}: Knowledge Walks and Textual Cues Enhanced Political Perspective Detection in News Media
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.304/
Zhang, Wenqian and Feng, Shangbin and Chen, Zilong and Lei, Zhenyu and Li, Jundong and Luo, Minnan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4129--4140
Political perspective detection has become an increasingly important task that can help combat echo chambers and political polarization. Previous approaches generally focus on leveraging textual content to identify stances, while they fail to reason with background knowledge or leverage the rich semantic and syntactic textual labels in news articles. In light of these limitations, we propose KCD, a political perspective detection approach that enables multi-hop knowledge reasoning and incorporates textual cues as paragraph-level labels. Specifically, we first generate random walks on external knowledge graphs and infuse them with news text representations. We then construct a heterogeneous information network to jointly model news content as well as semantic, syntactic and entity cues in news articles. Finally, we adopt relational graph neural networks for graph-level representation learning and conduct political perspective detection. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on two benchmark datasets. We further examine the effect of knowledge walks and textual cues and how they contribute to our approach's data efficiency.
null
null
10.18653/v1/2022.naacl-main.304
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,003
inproceedings
kim-etal-2022-collective
Collective Relevance Labeling for Passage Retrieval
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.305/
Kim, Jihyuk and Kim, Minsoo and Hwang, Seung-won
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4141--4147
Deep learning for Information Retrieval (IR) requires a large amount of high-quality query-document relevance labels, but such labels are inherently sparse. Label smoothing redistributes some observed probability mass over unobserved instances, often uniformly, uninformed of the true distribution. In contrast, we propose knowledge distillation for informed labeling, without incurring high computation overheads at evaluation time. Our contribution is designing a simple but efficient teacher model which utilizes collective knowledge to outperform state-of-the-art models distilled from a more complex teacher model. Specifically, we train up to $8\times$ faster than the state-of-the-art teacher, while distilling the rankings better. Our code is publicly available at \url{https://github.com/jihyukkim-nlp/CollectiveKD}.
null
null
10.18653/v1/2022.naacl-main.305
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,004
inproceedings
joshi-etal-2022-cogmen
{COGMEN}: {CO}ntextualized {GNN} based Multimodal Emotion recognitio{N}
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.306/
Joshi, Abhinav and Bhat, Ashwani and Jain, Ayush and Singh, Atin and Modi, Ashutosh
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4148--4164
Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person's emotions are influenced by the other speaker's utterances and their own emotional state over the utterances. In this paper, we propose the COntextualized Graph Neural Network based Multi-modal Emotion recognitioN (COGMEN) system, which leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses a Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on the IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels.
null
null
10.18653/v1/2022.naacl-main.306
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,005
inproceedings
wu-etal-2022-revisit
Revisit Overconfidence for {OOD} Detection: Reassigned Contrastive Learning with Adaptive Class-dependent Threshold
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.307/
Wu, Yanan and He, Keqing and Yan, Yuanmeng and Gao, QiXiang and Zeng, Zhiyuan and Zheng, Fujia and Zhao, Lulu and Jiang, Huixing and Wu, Wei and Xu, Weiran
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4165--4179
Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system. A key challenge of OOD detection is the overconfidence of neural models. In this paper, we comprehensively analyze overconfidence and classify it into two perspectives: over-confident OOD and over-confident in-domain (IND). Then, according to their intrinsic causes, we respectively propose a novel reassigned contrastive learning (RCL) method to discriminate IND intents for over-confident OOD, and an adaptive class-dependent local threshold mechanism to separate similar IND and OOD intents for over-confident IND. Experiments and analyses show the effectiveness of our proposed method for both aspects of the overconfidence issue.
null
null
10.18653/v1/2022.naacl-main.307
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,006
inproceedings
yan-etal-2022-aisfg
{AISFG}: Abundant Information Slot Filling Generator
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.308/
Yan, Yang and Ye, Junda and Zhang, Zhongbao and Wang, Liwen
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4180--4187
As an essential component of task-oriented dialogue systems, slot filling requires enormous amounts of labeled training data in a given domain. However, in most cases little or no target-domain training data is available at the training stage. Thus, cross-domain slot filling has to cope with this data scarcity problem through zero/few-shot learning. Previous research on zero/few-shot cross-domain slot filling focuses on slot descriptions and examples while ignoring the slot type ambiguity and example ambiguity issues. To address these problems, we propose the Abundant Information Slot Filling Generator (AISFG), a generative model with a novel query template that incorporates domain descriptions, slot descriptions, and examples with context. Experimental results show that our model outperforms state-of-the-art approaches on the zero/few-shot slot filling task.
null
null
10.18653/v1/2022.naacl-main.308
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,007
inproceedings
truong-etal-2022-improving
Improving negation detection with negation-focused pre-training
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.309/
Truong, Thinh and Baldwin, Timothy and Cohn, Trevor and Verspoor, Karin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4188--4193
Negation is a common linguistic feature that is crucial in many language understanding tasks, yet it remains a hard problem due to diversity in its expression in different types of text. Recent works show that state-of-the-art NLP models underperform on samples containing negation in various tasks, and that negation detection models do not transfer well across domains. We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking, to better incorporate negation information into language models. Extensive experiments on common benchmarks show that our proposed approach improves negation detection performance and generalizability over the strong baseline NegBERT (Khandelwal and Sawant, 2020).
null
null
10.18653/v1/2022.naacl-main.309
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,008
inproceedings
kumar-etal-2022-practice
Practice Makes a Solver Perfect: Data Augmentation for Math Word Problem Solvers
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.310/
Kumar, Vivek and Maheshwary, Rishabh and Pudi, Vikram
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4194--4206
Existing Math Word Problem (MWP) solvers have achieved high accuracy on benchmark datasets. However, prior work has shown that such solvers do not generalize well and rely on superficial cues to achieve high performance. In this paper, we first conduct experiments showing that this behaviour is mainly associated with the limited size and diversity of existing MWP datasets. Next, we propose several data augmentation techniques, broadly categorized into Substitution and Paraphrasing based methods. By deploying these methods we increase the size of existing datasets fivefold. Extensive experiments on two benchmark datasets across three state-of-the-art MWP solvers show that the proposed methods increase the generalization and robustness of existing solvers. On average, the proposed methods significantly improve the state-of-the-art results by over five percentage points on benchmark datasets. Further, solvers trained on the augmented datasets perform comparatively better on the challenge test set. We also show the effectiveness of the proposed techniques through ablation studies and verify the quality of augmented samples through human evaluation.
null
null
10.18653/v1/2022.naacl-main.310
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,009
inproceedings
chuang-etal-2022-diffcse
{D}iff{CSE}: Difference-based Contrastive Learning for Sentence Embeddings
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.311/
Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Scott and Kim, Yoon and Glass, James
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4207--4218
We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning, which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other {\textquotedblleft}harmful{\textquotedblright} types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
null
null
10.18653/v1/2022.naacl-main.311
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,010
inproceedings
li-etal-2022-generative
Generative Cross-Domain Data Augmentation for Aspect and Opinion Co-Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.312/
Li, Junjie and Yu, Jianfei and Xia, Rui
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4219--4229
As a fundamental task in opinion mining, aspect and opinion co-extraction aims to identify the aspect terms and opinion terms in reviews. However, due to the lack of fine-grained annotated resources, it is hard to train a robust model for many domains. To alleviate this issue, unsupervised domain adaptation has been proposed to transfer knowledge from a labeled source domain to an unlabeled target domain. In this paper, we propose a new Generative Cross-Domain Data Augmentation framework for unsupervised domain adaptation. The proposed framework aims to generate target-domain data with fine-grained annotation by exploiting the labeled data in the source domain. Specifically, we remove the domain-specific segments in a source-domain labeled sentence, and then use this as input to the pre-trained sequence-to-sequence model BART to simultaneously generate a target-domain sentence and predict the corresponding label for each word. Experimental results on three datasets demonstrate that our approach is more effective than previous domain adaptation methods.
null
null
10.18653/v1/2022.naacl-main.312
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,011
inproceedings
zhong-etal-2022-proqa
{P}ro{QA}: Structural Prompt-based Pre-training for Unified Question Answering
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.313/
Zhong, Wanjun and Gao, Yifan and Ding, Ning and Qin, Yujia and Liu, Zhiyuan and Zhou, Ming and Wang, Jiahai and Yin, Jian and Duan, Nan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4230--4243
Question Answering (QA) is a longstanding challenge in natural language processing. Existing QA works mostly focus on specific question types, knowledge domains, or reasoning skills. This specialization in QA research hinders systems from modeling commonalities between tasks and generalizing to wider applications. To address this issue, we present ProQA, a unified QA paradigm that solves various tasks through a single model. ProQA takes a unified structural prompt as the bridge and improves the QA-centric ability through structural prompt-based pre-training. Through a structurally designed prompt-based input schema, ProQA concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task. Furthermore, ProQA is pre-trained on a large-scale synthesized corpus formatted with structural prompts, which empowers the model with the commonly required QA ability. Experimental results on 11 QA benchmarks demonstrate that ProQA consistently boosts performance in full-data fine-tuning, few-shot learning, and zero-shot testing scenarios. Furthermore, ProQA exhibits strong ability in both continual learning and transfer learning by taking advantage of the structural prompt.
null
null
10.18653/v1/2022.naacl-main.313
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,012
inproceedings
park-caragea-2022-data
A Data Cartography based {M}ix{U}p for Pre-trained Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.314/
Park, Seo Yeon and Caragea, Cornelia
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4244--4250
MixUp is a data augmentation strategy in which additional samples are generated during training by combining random pairs of training samples and their labels. However, selecting pairs at random is not necessarily an optimal choice. In this work, we propose TDMixUp, a novel MixUp strategy that leverages Training Dynamics and allows more informative samples to be combined for generating new data samples. Our proposed TDMixUp first measures confidence and variability (Swayamdipta et al., 2020) and Area Under the Margin (AUM) (Pleiss et al., 2020) to identify the characteristics of training samples (e.g., easy-to-learn or ambiguous samples), and then interpolates these characterized samples. We empirically validate that our method not only achieves competitive performance using a smaller subset of the training data compared with strong baselines, but also yields lower expected calibration error on the pre-trained language model BERT, in both in-domain and out-of-domain settings across a wide range of NLP tasks. We publicly release our code.
null
null
10.18653/v1/2022.naacl-main.314
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,013
inproceedings
yamasaki-2022-grapheme
Grapheme-to-Phoneme Conversion for {T}hai using Neural Regression Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.315/
Yamasaki, Tomohiro
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4251--4255
We propose a novel Thai grapheme-to-phoneme conversion method based on a neural regression model that is trained using neural networks to predict the similarity between a candidate and the correct pronunciation. After generating a set of candidates for an input word or phrase using the orthography rules, this model selects the best-similarity pronunciation from the candidates. This method can be applied to languages other than Thai simply by preparing enough orthography rules, and can reduce the mistakes that neural network models often make. We show that the accuracy of the proposed method is .931, which is comparable to that of encoder-decoder sequence models. We also demonstrate that the proposed method is superior in terms of the difference between correct and predicted pronunciations because incorrect, strange output sometimes occurs when using encoder-decoder sequence models but the error is within the expected range when using the proposed method.
null
null
10.18653/v1/2022.naacl-main.315
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,014
inproceedings
lai-etal-2022-generating
Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.316/
Lai, Siyu and Yang, Zhen and Meng, Fandong and Zhang, Xue and Chen, Yufeng and Xu, Jinan and Zhou, Jie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4256--4266
Generating adversarial examples for Neural Machine Translation (NMT) with single Round-Trip Translation (RTT) has achieved promising results by releasing the meaning-preserving restriction. However, a potential pitfall for this approach is that we cannot decide whether the generated examples are adversarial to the target NMT model or the auxiliary backward one, as the reconstruction error through the RTT can be related to either. To remedy this problem, we propose a new definition for NMT adversarial examples based on the Doubly Round-Trip Translation (DRTT). Specifically, apart from the source-target-source RTT, we also consider the target-source-target one, which is utilized to pick out the authentic adversarial examples for the target NMT model. Additionally, to enhance the robustness of the NMT model, we introduce the masked language models to construct bilingual adversarial pairs based on DRTT, which are used to train the NMT model directly. Extensive experiments on both the clean and noisy test sets (including the artificial and natural noise) show that our approach substantially improves the robustness of NMT models.
null
null
10.18653/v1/2022.naacl-main.316
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,015
inproceedings
sang-etal-2022-tvshowguess
{TVS}how{G}uess: Character Comprehension in Stories as Speaker Guessing
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.317/
Sang, Yisi and Mou, Xiangyang and Yu, Mo and Yao, Shunyu and Li, Jing and Stanton, Jeffrey
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4267--4287
We propose a new task for assessing machines' skills of understanding fictional characters in narrative stories. The task, TVShowGuess, builds on the scripts of TV series and takes the form of guessing the anonymous main characters based on the backgrounds of the scenes and the dialogues. Our human study supports that this form of task covers comprehension of multiple types of character persona, including understanding characters' personalities, facts and memories of personal experience, which are well aligned with the psychological and literary theories about the theory of mind (ToM) of human beings on understanding fictional characters during reading. We further propose new model architectures to support the contextualized encoding of long scene texts. Experiments show that our proposed approaches significantly outperform baselines, yet still largely lag behind the (nearly perfect) human performance. Our work serves as a first step toward the goal of narrative character comprehension.
null
null
10.18653/v1/2022.naacl-main.317
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,016
inproceedings
wu-etal-2022-causal
Causal Distillation for Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.318/
Wu, Zhengxuan and Geiger, Atticus and Rozner, Joshua and Kreiss, Elisa and Lu, Hanson and Icard, Thomas and Potts, Christopher and Goodman, Noah
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4288--4295
Distillation efforts have led to language models that are more compact and efficient without serious drops in performance. The standard approach to distillation trains a student model against two objectives: a task-specific objective (e.g., language modeling) and an imitation objective that encourages the hidden states of the student model to be similar to those of the larger teacher model. In this paper, we show that it is beneficial to augment distillation with a third objective that encourages the student to imitate the \textit{causal} dynamics of the teacher through a distillation interchange intervention training objective (DIITO). DIITO pushes the student model to become a \textit{causal abstraction} of the teacher model {--} a faithful model with simpler causal structure. DIITO is fully differentiable, easily implemented, and combines flexibly with other objectives. Compared against standard distillation with the same setting, DIITO results in lower perplexity on the WikiText-103M corpus (masked language modeling) and marked improvements on the GLUE benchmark (natural language understanding), SQuAD (question answering), and CoNLL-2003 (named entity recognition).
null
null
10.18653/v1/2022.naacl-main.318
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,017
inproceedings
lee-thorp-etal-2022-fnet
{FN}et: Mixing Tokens with {F}ourier Transforms
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.319/
Lee-Thorp, James and Ainslie, Joshua and Eckstein, Ilya and Ontanon, Santiago
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4296--4313
We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that {\textquotedblleft}mix{\textquotedblright} input tokens. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97{\%} of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80{\%} faster on GPUs and 70{\%} faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the {\textquotedblleft}efficient Transformers{\textquotedblright} on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
null
null
10.18653/v1/2022.naacl-main.319
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,018
inproceedings
zhou-etal-2022-answer
Answer Consolidation: Formulation and Benchmarking
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.320/
Zhou, Wenxuan and Ning, Qiang and Elfardy, Heba and Small, Kevin and Chen, Muhao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4314--4325
Current question answering (QA) systems primarily consider the single-answer scenario, where each question is assumed to be paired with one correct answer. However, in many real-world QA applications, multiple answer scenarios arise where consolidating answers into a comprehensive and non-redundant set of answers is a more efficient user interface. In this paper, we formulate the problem of answer consolidation, where answers are partitioned into multiple groups, each representing different aspects of the answer set. Then, given this partitioning, a comprehensive and non-redundant set of answers can be constructed by picking one answer from each group. To initiate research on answer consolidation, we construct a dataset consisting of 4,699 questions and 24,006 sentences and evaluate multiple models. Despite a promising performance achieved by the best-performing supervised models, we still believe this task has room for further improvements.
null
null
10.18653/v1/2022.naacl-main.320
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,019
inproceedings
eisenstein-2022-informativeness
Informativeness and Invariance: Two Perspectives on Spurious Correlations in Natural Language
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.321/
Eisenstein, Jacob
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4326--4331
Spurious correlations are a threat to the trustworthiness of natural language processing systems, motivating research into methods for identifying and eliminating them. However, addressing the problem of spurious correlations requires more clarity on what they are and how they arise in language data. Gardner et al. (2021) argue that the compositional nature of language implies that \textit{all} correlations between labels and individual {\textquotedblleft}input features{\textquotedblright} are spurious. This paper analyzes this proposal in the context of a toy example, demonstrating three distinct conditions that can give rise to feature-label correlations in a simple PCFG. Linking the toy example to a structured causal model shows that (1) feature-label correlations can arise even when the label is invariant to interventions on the feature, and (2) feature-label correlations may be absent even when the label is sensitive to interventions on the feature. Because input features will be individually correlated with labels in all but very rare circumstances, domain knowledge must be applied to identify spurious correlations that pose genuine robustness threats.
null
null
10.18653/v1/2022.naacl-main.321
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,020
inproceedings
dou-peng-2022-foam
{FOAM}: A Follower-aware Speaker Model For Vision-and-Language Navigation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.322/
Dou, Zi-Yi and Peng, Nanyun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4332--4340
The speaker-follower models have proven to be effective in vision-and-language navigation, where a speaker model is used to synthesize new instructions to augment the training data for a follower navigation model. However, in previous work, the speaker model is follower-agnostic and fails to take the state of the follower into consideration. In this paper, we present FOAM, a FOllower-Aware speaker Model that is constantly updated given the follower feedback, so that the generated instructions can be more suitable to the current learning state of the follower. Specifically, we optimize the speaker using a bi-level optimization framework and obtain its training signals by evaluating the follower on labeled data. Experimental results on the Room-to-Room and Room-across-Room datasets demonstrate that our methods can outperform strong baseline models across settings. Analyses also reveal that our generated instructions are of higher quality than the baselines.
null
null
10.18653/v1/2022.naacl-main.322
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,021
inproceedings
qiu-etal-2022-improving
Improving Compositional Generalization with Latent Structure and Data Augmentation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.323/
Qiu, Linlu and Shaw, Peter and Pasupat, Panupong and Nowak, Pawel and Linzen, Tal and Sha, Fei and Toutanova, Kristina
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4341--4362
Generic unstructured neural networks have been shown to struggle on out-of-distribution compositional generalization. Compositional data augmentation via example recombination has transferred some prior knowledge about compositionality to such black-box neural models for several semantic parsing tasks, but this often required task-specific engineering or provided limited gains. We present a more powerful data recombination method using a model called Compositional Structure Learner (CSL). CSL is a generative model with a quasi-synchronous context-free grammar backbone, which we induce from the training data. We sample recombined examples from CSL and add them to the fine-tuning data of a pre-trained sequence-to-sequence model (T5). This procedure effectively transfers most of CSL's compositional bias to T5 for diagnostic tasks, and results in a model even stronger than a T5-CSL ensemble on two real-world compositional generalization tasks. This results in new state-of-the-art performance for these challenging semantic parsing tasks requiring generalization to both natural language variation and novel compositions of elements.
null
null
10.18653/v1/2022.naacl-main.323
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,022
inproceedings
nguyen-etal-2022-joint
Joint Extraction of Entities, Relations, and Events via Modeling Inter-Instance and Inter-Label Dependencies
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.324/
Nguyen, Minh Van and Min, Bonan and Dernoncourt, Franck and Nguyen, Thien
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4363--4374
Event trigger detection, entity mention recognition, event argument extraction, and relation extraction are the four important tasks in information extraction that have been performed jointly (Joint Information Extraction - JointIE) to avoid error propagation and leverage dependencies between the task instances (i.e., event triggers, entity mentions, relations, and event arguments). However, previous JointIE models often assume heuristic manually-designed dependency between the task instances and mean-field factorization for the joint distribution of instance labels, thus unable to capture optimal dependencies among instances and labels to improve representation learning and IE performance. To overcome these limitations, we propose to induce a dependency graph among task instances from data to boost representation learning. To better capture dependencies between instance labels, we propose to directly estimate their joint distribution via Conditional Random Fields. Noise Contrastive Estimation is introduced to address the maximization of the intractable joint likelihood for model training. Finally, to improve the decoding with greedy or beam search in prior work, we present Simulated Annealing to better find the globally optimal assignment for instance labels at decoding time. Experimental results show that our proposed model outperforms previous models on multiple IE tasks across 5 datasets and 2 languages.
null
null
10.18653/v1/2022.naacl-main.324
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,023
inproceedings
prange-etal-2022-linguistic
Linguistic Frameworks Go Toe-to-Toe at Neuro-Symbolic Language Modeling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.325/
Prange, Jakob and Schneider, Nathan and Kong, Lingpeng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4375--4391
We examine the extent to which, in principle, different syntactic and semantic graph representations can complement and improve neural language modeling. Specifically, by conditioning on a subgraph encapsulating the locally relevant sentence history, can a model make better next-word predictions than a pretrained sequential language model alone? With an ensemble setup consisting of GPT-2 and ground-truth graphs from one of 7 different formalisms, we find that the graph information indeed improves perplexity and other metrics. Moreover, this architecture provides a new way to compare different frameworks of linguistic representation. In our oracle graph setup, training and evaluating on English WSJ, semantic constituency structures prove most useful to language modeling performance{---}outpacing syntactic constituency structures as well as syntactic and semantic dependency structures.
null
null
10.18653/v1/2022.naacl-main.325
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,024
inproceedings
lu-etal-2022-imagination
Imagination-Augmented Natural Language Understanding
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.326/
Lu, Yujie and Zhu, Wanrong and Wang, Xin and Eckstein, Miguel and Wang, William Yang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4392--4402
Human brains integrate linguistic and perceptual information simultaneously to understand natural language, and hold the critical ability to render imaginations. Such abilities enable us to construct new abstract concepts or concrete objects, and are essential in involving practical knowledge to solve problems in low-resource scenarios. However, most existing methods for Natural Language Understanding (NLU) are mainly focused on textual signals. They do not simulate human visual imagination ability, which hinders models from inferring and learning efficiently from limited data samples. Therefore, we introduce an Imagination-Augmented Cross-modal Encoder (iACE) to solve natural language understanding tasks from a novel learning perspective{---}imagination-augmented cross-modal understanding. iACE enables visual imagination with external knowledge transferred from the powerful generative and pre-trained vision-and-language models. Extensive experiments on GLUE and SWAG show that iACE achieves consistent improvement over visually-supervised pre-trained models. More importantly, results in extreme and normal few-shot settings validate the effectiveness of iACE in low-resource natural language understanding circumstances.
null
null
10.18653/v1/2022.naacl-main.326
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,025
inproceedings
brunila-laviolette-2022-company
What company do words keep? Revisiting the distributional semantics of {J}.{R}. Firth {\&} Zellig {H}arris
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.327/
Brunila, Mikael and LaViolette, Jack
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4403--4417
The power of word embeddings is attributed to the linguistic theory that similar words will appear in similar contexts. This idea is specifically invoked by noting that {\textquotedblleft}you shall know a word by the company it keeps,{\textquotedblright} a quote from British linguist J.R. Firth who, along with his American colleague Zellig Harris, is often credited with the invention of {\textquotedblleft}distributional semantics.{\textquotedblright} While both Firth and Harris are cited in all major NLP textbooks and many foundational papers, the content and differences between their theories is seldom discussed. Engaging in a close reading of their work, we discover two distinct and in many ways divergent theories of meaning. One focuses exclusively on the internal workings of linguistic forms, while the other invites us to consider words in new company{---}not just with other linguistic elements, but also in a broader cultural and situational context. Contrasting these theories from the perspective of current debates in NLP, we discover in Firth a figure who could guide the field towards a more culturally grounded notion of semantics. We consider how an expanded notion of {\textquotedblleft}context{\textquotedblright} might be modeled in practice through two different strategies: comparative stratification and syntagmatic extension.
null
null
10.18653/v1/2022.naacl-main.327
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,026
inproceedings
zhao-etal-2022-compositional
Compositional Task-Oriented Parsing as Abstractive Question Answering
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.328/
Zhao, Wenting and Arkoudas, Konstantine and Sun, Weiqi and Cardie, Claire
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4418--4427
Task-oriented parsing (TOP) aims to convert natural language into machine-readable representations of specific tasks, such as setting an alarm. A popular approach to TOP is to apply seq2seq models to generate linearized parse trees. A more recent line of work argues that pretrained seq2seq models are better at generating outputs that are themselves natural language, so they replace linearized parse trees with canonical natural-language paraphrases that can then be easily translated into parse trees, resulting in so-called naturalized parsers. In this work we continue to explore naturalized semantic parsing by presenting a general reduction of TOP to abstractive question answering that overcomes some limitations of canonical paraphrasing. Experimental results show that our QA-based technique outperforms state-of-the-art methods in full-data settings while achieving dramatic improvements in few-shot settings.
null
null
10.18653/v1/2022.naacl-main.328
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,027
inproceedings
li-etal-2022-learning-cross
Learning Cross-Lingual {IR} from an {E}nglish Retriever
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.329/
Li, Yulong and Franz, Martin and Sultan, Md Arafat and Iyer, Bhavani and Lee, Young-Suk and Sil, Avirup
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4428--4436
We present DR.DECR (Dense Retrieval with Distillation-Enhanced Cross-Lingual Representation), a new cross-lingual information retrieval (CLIR) system trained using multi-stage knowledge distillation (KD). The teacher of DR.DECR relies on a highly effective but computationally expensive two-stage inference process consisting of query translation and monolingual IR, while the student, DR.DECR, executes a single CLIR step. We teach DR.DECR powerful multilingual representations as well as CLIR by optimizing two corresponding KD objectives. Learning useful representations of non-English text from an English-only retriever is accomplished through a cross-lingual token alignment algorithm that relies on the representation capabilities of the underlying multilingual encoders. In both in-domain and zero-shot out-of-domain evaluation, DR.DECR demonstrates far superior accuracy over direct fine-tuning with labeled CLIR data. It is also the best single-model retriever on the XOR-TyDi benchmark at the time of this writing.
null
null
10.18653/v1/2022.naacl-main.329
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,028
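The two KD objectives described in the abstract above lend themselves to a compact illustration. The following is a minimal sketch, not the paper's code: representation_kd distills aligned token embeddings (the alignment is assumed to come from an external cross-lingual token aligner), and score_kd distills the teacher's query-passage relevance distribution. The tensor shapes, the MSE/KL loss choices, and the combined single-step objective are illustrative assumptions.

```python
# Sketch of two knowledge-distillation objectives for a CLIR student
# (assumptions throughout; not the DR.DECR implementation).
import torch
import torch.nn.functional as F

def representation_kd(student_tok, teacher_tok, alignment):
    """MSE between aligned student/teacher token embeddings.
    student_tok: (Ls, d); teacher_tok: (Lt, d); alignment: list of (i, j)
    pairs mapping student (non-English) positions to teacher (English
    translation) positions, assumed to come from an external aligner."""
    s_idx = torch.tensor([i for i, _ in alignment])
    t_idx = torch.tensor([j for _, j in alignment])
    return F.mse_loss(student_tok[s_idx], teacher_tok[t_idx].detach())

def score_kd(student_scores, teacher_scores, temperature=1.0):
    """KL divergence pushing the student's relevance distribution over
    candidate passages toward the teacher's."""
    s = F.log_softmax(student_scores / temperature, dim=-1)
    t = F.softmax(teacher_scores.detach() / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

# Toy usage with random tensors standing in for encoder outputs.
student_tok, teacher_tok = torch.randn(10, 128), torch.randn(12, 128)
loss = representation_kd(student_tok, teacher_tok, [(0, 0), (1, 2), (3, 3)]) \
       + score_kd(torch.randn(2, 8), torch.randn(2, 8))
print(loss.item())  # stages are interleaved here only for brevity
```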
inproceedings
liu-etal-2022-testing
Testing the Ability of Language Models to Interpret Figurative Language
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.330/
Liu, Emmy and Cui, Chenxuan and Zheng, Kenneth and Neubig, Graham
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4437--4452
Figurative and metaphorical language are commonplace in discourse, and figurative expressions play an important role in communication and cognition. However, figurative language has been a relatively under-studied area in NLP, and it remains an open question to what extent modern language models can interpret nonliteral phrases. To address this question, we introduce Fig-QA, a Winograd-style nonliteral language understanding task consisting of correctly interpreting paired figurative phrases with divergent meanings. We evaluate the performance of several state-of-the-art language models on this task, and find that although language models perform significantly above chance, they still fall short of human performance, particularly in zero- or few-shot settings. This suggests that further work is needed to improve the nonliteral reasoning capabilities of language models.
null
null
10.18653/v1/2022.naacl-main.330
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,029
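One common way to realize the zero-shot evaluation the abstract describes is likelihood comparison: score each of the paired interpretations under the language model and pick the higher-probability one. The sketch below assumes this setup with GPT-2 via Hugging Face transformers; the example phrase and candidate interpretations are invented stand-ins, not actual Fig-QA items, and the scoring recipe is not claimed to be the benchmark's official protocol.

```python
# Likelihood-based zero-shot scoring of paired interpretations
# (an assumed evaluation recipe, illustrated with GPT-2).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_logprob(text):
    """Total log-probability the model assigns to `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean NLL over the seq_len - 1 predicted tokens;
    # negate and rescale to recover the summed log-probability.
    return -out.loss.item() * (ids.shape[1] - 1)

phrase = "Her commute was a marathon, that is,"   # invented example item
choices = [" it was very long.", " it was very short."]
scores = [sequence_logprob(phrase + c) for c in choices]
print(choices[int(torch.tensor(scores).argmax())])
```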
inproceedings
mysore-etal-2022-multi
Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.331/
Mysore, Sheshera and Cohan, Arman and Hope, Tom
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4453--4470
We present a new scientific document similarity model based on matching fine-grained aspects of texts. To train our model, we exploit a naturally-occurring source of supervision: sentences in the full-text of papers that cite multiple papers together (co-citations). Such co-citations not only reflect close paper relatedness, but also provide textual descriptions of how the co-cited papers are related. This novel form of textual supervision is used for learning to match aspects across papers. We develop multi-vector representations where vectors correspond to sentence-level aspects of documents, and present two methods for aspect matching: (1) A fast method that only matches single aspects, and (2) a method that makes sparse multiple matches with an Optimal Transport mechanism that computes an Earth Mover's Distance between aspects. Our approach improves performance on document similarity tasks in four datasets. Further, our fast single-match method achieves competitive results, paving the way for applying fine-grained similarity to large scientific corpora.
null
null
10.18653/v1/2022.naacl-main.331
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,030
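The Optimal Transport matching in the abstract above can be illustrated with a small entropically regularized (Sinkhorn) solver over sentence-level aspect embeddings. This is a toy sketch under stated assumptions, not the authors' implementation: uniform mass over aspects, a cosine cost matrix, and the regularization strength are all illustrative choices.

```python
# Sparse aspect matching between two papers via entropic optimal
# transport (a toy sketch, not the paper's code).
import numpy as np

def sinkhorn_plan(a, b, cost, reg=0.1, n_iters=200):
    """Entropic-regularized OT: transport plan between aspect-mass
    distributions a (paper A) and b (paper B) under `cost`."""
    K = np.exp(-cost / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):                # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]      # transport plan P

def aspect_emd(emb_a, emb_b):
    """Earth Mover's-style distance between two papers' sentence-level
    aspect embeddings; rows are assumed L2-normalized."""
    cost = 1.0 - emb_a @ emb_b.T            # cosine distance matrix
    a = np.full(emb_a.shape[0], 1.0 / emb_a.shape[0])  # uniform mass
    b = np.full(emb_b.shape[0], 1.0 / emb_b.shape[0])
    P = sinkhorn_plan(a, b, cost)
    return float((P * cost).sum())          # total transport cost

# Toy usage: two papers with 3 and 4 aspect vectors of dimension 8.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8)); A /= np.linalg.norm(A, axis=1, keepdims=True)
B = rng.normal(size=(4, 8)); B /= np.linalg.norm(B, axis=1, keepdims=True)
print(aspect_emd(A, B))
```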
inproceedings
verma-etal-2022-chai
{CHAI}: A {CH}atbot {AI} for Task-Oriented Dialogue with Offline Reinforcement Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.332/
Verma, Siddharth and Fu, Justin and Yang, Sherry and Levine, Sergey
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4471--4491
Conventionally, generation of natural language for dialogue agents may be viewed as a statistical learning problem: determine the patterns in human-provided data and generate appropriate responses with similar statistical properties. However, dialogue can also be regarded as a goal-directed process, where speakers attempt to accomplish a specific task. Reinforcement learning (RL) algorithms are designed specifically for solving such goal-directed problems, but the most direct way to apply RL, through trial-and-error learning in human conversations, is costly. In this paper, we study how offline reinforcement learning can instead be used to train dialogue agents entirely using static datasets collected from human speakers. Our experiments show that recently developed offline RL methods can be combined with language models to yield realistic dialogue agents that better accomplish task goals.
null
null
10.18653/v1/2022.naacl-main.332
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,031
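The general recipe in the abstract, learning a value function offline from logged conversations and using it to steer a language model, can be sketched in a few lines. The toy below is an assumption-laden illustration rather than CHAI itself: random embeddings stand in for encoded dialogue contexts and candidate responses, and the TD target's max over sampled candidates approximates the max over next actions.

```python
# Offline Q-learning over (context, response) embeddings, then
# Q-based selection among LM-sampled candidates (toy illustration).
import torch
import torch.nn as nn

class QFunction(nn.Module):
    """Scores a (dialogue-context, candidate-response) embedding pair."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, ctx, resp):
        return self.net(torch.cat([ctx, resp], dim=-1)).squeeze(-1)

q = QFunction()
opt = torch.optim.Adam(q.parameters(), lr=1e-3)

def offline_td_step(ctx, resp, reward, next_ctx, next_cands, gamma=0.99):
    """One temporal-difference update on a logged transition; the max
    over next_cands approximates the max over next responses."""
    with torch.no_grad():
        target = reward + gamma * q(
            next_ctx.expand(len(next_cands), -1), next_cands).max()
    loss = (q(ctx, resp) - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def select_response(ctx, candidate_embs):
    """Inference: index of the highest-Q candidate response."""
    with torch.no_grad():
        return int(q(ctx.expand(len(candidate_embs), -1),
                     candidate_embs).argmax())

# Toy usage with random stand-ins for encoder outputs.
ctx, resp, next_ctx = torch.randn(64), torch.randn(64), torch.randn(64)
cands = torch.randn(4, 64)   # embeddings of LM-sampled candidates
offline_td_step(ctx, resp, reward=1.0, next_ctx=next_ctx, next_cands=cands)
print(select_response(next_ctx, cands))
```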
inproceedings
zhao-etal-2022-connecting
Connecting the Dots between Audio and Text without Parallel Data through Visual Knowledge Transfer
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.333/
Zhao, Yanpeng and Hessel, Jack and Yu, Youngjae and Lu, Ximing and Zellers, Rowan and Choi, Yejin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
4492--4507
Machines that can represent and describe environmental soundscapes have practical potential, e.g., for audio tagging and captioning. Prevailing paradigms for learning audio-text connections rely on parallel audio-text data, which is, however, scarce on the web. We propose VIP-ANT, which induces Audio-Text alignment without using any parallel audio-text data. Our key idea is to share the image modality between bi-modal image-text representations and bi-modal image-audio representations; the image modality functions as a pivot and implicitly connects audio and text in a tri-modal embedding space. In a difficult zero-shot setting with no paired audio-text data, our model demonstrates state-of-the-art zero-shot performance on the ESC50 and US8K audio classification tasks, and even surpasses the supervised state of the art for Clotho caption retrieval (with audio queries) by 2.2{\%} R@1. We further investigate cases of minimal audio-text supervision, finding that, e.g., just a few hundred supervised audio-text pairs increase the zero-shot audio classification accuracy by 8{\%} on US8K. However, to match human parity on some zero-shot tasks, our empirical scaling experiments suggest that we would need about $2^{21} \approx 2\text{M}$ supervised audio-caption pairs. Our work opens up new avenues for learning audio-text connections with little to no parallel audio-text data.
null
null
10.18653/v1/2022.naacl-main.333
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,032
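Because audio and text are each aligned to the shared image pivot, the zero-shot audio classification described above reduces to nearest-neighbor search between an audio embedding and embedded label prompts. The sketch below assumes pre-computed encoder outputs; the label set and embedding dimension are invented for illustration, and the random vectors stand in for real audio and text encoders.

```python
# Zero-shot audio tagging in a shared (image-pivoted) embedding space
# (a small sketch with assumed pre-computed embeddings).
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def zero_shot_tag(audio_emb, label_embs, labels):
    """Return the label whose text embedding is most cosine-similar to
    the audio clip's embedding in the shared space."""
    sims = l2norm(label_embs) @ l2norm(audio_emb)
    return labels[int(np.argmax(sims))]

# Toy usage: stand-ins for outputs of the audio and text encoders.
rng = np.random.default_rng(1)
labels = ["dog bark", "siren", "rain"]
print(zero_shot_tag(rng.normal(size=256),
                    rng.normal(size=(3, 256)), labels))
```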