Dataset schema (field: type, value statistics):

entry_type: stringclasses, 4 values
citation_key: stringlengths, 10 to 110 characters
title: stringlengths, 6 to 276 characters
editor: stringclasses, 723 values
month: stringclasses, 69 values
year: stringdate, 1963-01-01 00:00:00 to 2022-01-01 00:00:00
address: stringclasses, 202 values
publisher: stringclasses, 41 values
url: stringlengths, 34 to 62 characters
author: stringlengths, 6 to 2.07k characters
booktitle: stringclasses, 861 values
pages: stringlengths, 1 to 12 characters
abstract: stringlengths, 302 to 2.4k characters
journal: stringclasses, 5 values
volume: stringclasses, 24 values
doi: stringlengths, 20 to 39 characters
n: stringclasses, 3 values
wer: stringclasses, 1 value
uas: null
language: stringclasses, 3 values
isbn: stringclasses, 34 values
recall: null
number: stringclasses, 8 values
a: null
b: null
c: null
k: null
f1: stringclasses, 4 values
r: stringclasses, 2 values
mci: stringclasses, 1 value
p: stringclasses, 2 values
sd: stringclasses, 1 value
female: stringclasses, 0 values
m: stringclasses, 0 values
food: stringclasses, 1 value
f: stringclasses, 1 value
note: stringclasses, 20 values
__index_level_0__: int64, 22k to 106k
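The listing above reads like the column summary of a Hugging Face `datasets` table (string columns with class/length statistics plus an integer index column). Assuming that is the case, a minimal sketch of loading and inspecting such a table might look like the following; the repository id `user/acl-bib-dataset` is a placeholder, not the actual name of this dataset.

```python
# Minimal sketch, assuming the table is hosted as a Hugging Face dataset.
# "user/acl-bib-dataset" is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/acl-bib-dataset", split="train")

# Column names and feature types, matching the schema listed above.
print(ds.features)

# Keep only conference papers and the bibliographic columns that are
# populated in the example rows shown below.
keep = ["entry_type", "citation_key", "title", "author", "booktitle",
        "year", "pages", "url", "doi", "abstract"]
papers = ds.filter(lambda row: row["entry_type"] == "inproceedings")
papers = papers.remove_columns([c for c in papers.column_names if c not in keep])
print(papers[0]["citation_key"], "-", papers[0]["title"])
```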
Example rows:

entry_type: inproceedings
citation_key: kumar-etal-2022-intent
title: Intent Detection and Discovery from User Logs via Deep Semi-Supervised Contrastive Clustering
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.134/
author: Kumar, Rajat and Patidar, Mayur and Varshney, Vaibhav and Vig, Lovekesh and Shroff, Gautam
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1836--1853
abstract:
Intent Detection is a crucial component of Dialogue Systems wherein the objective is to classify a user utterance into one of multiple pre-defined intents. A pre-requisite for developing an effective intent identifier is a training dataset labeled with all possible user intents. However, even skilled domain experts are often unable to foresee all possible user intents at design time and for practical applications, novel intents may have to be inferred incrementally on-the-fly from user utterances. Therefore, for any real-world dialogue system, the number of intents increases over time and new intents have to be discovered by analyzing the utterances outside the existing set of intents. In this paper, our objective is to i) detect known intent utterances from a large number of unlabeled utterance samples given a few labeled samples and ii) discover new unknown intents from the remaining unlabeled samples. Existing SOTA approaches address this problem via alternate representation learning and clustering wherein pseudo labels are used for updating the representations and clustering is used for generating the pseudo labels. Unlike existing approaches that rely on epoch wise cluster alignment, we propose an end-to-end deep contrastive clustering algorithm that jointly updates model parameters and cluster centers via supervised and self-supervised learning and optimally utilizes both labeled and unlabeled data. Our proposed approach outperforms competitive baselines on five public datasets for both settings: (i) where the number of undiscovered intents are known in advance, and (ii) where the number of intents are estimated by an algorithm. We also propose a human-in-the-loop variant of our approach for practical deployment which does not require an estimate of new intents and outperforms the end-to-end approach.
doi: 10.18653/v1/2022.naacl-main.134
__index_level_0__: 23,833
(all other fields: null)
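Since each row is essentially a flattened BibTeX record (entry_type and citation_key plus the usual bibliographic fields), one natural use of a row like the one above is to turn it back into a BibTeX entry. The sketch below is a hypothetical helper, not part of the dataset; it simply skips fields whose value is missing (null/None).

```python
# Hypothetical helper: rebuild a BibTeX entry from one row of the table.
# Field names follow the schema above; missing fields are None in a row dict.
def row_to_bibtex(row: dict) -> str:
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    body = ",\n".join(
        f"    {field} = {{{value}}}"
        for field, value in row.items()
        if field not in skip and value is not None
    )
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

# Abbreviated example row (values taken from the record above).
row = {
    "entry_type": "inproceedings",
    "citation_key": "kumar-etal-2022-intent",
    "title": "Intent Detection and Discovery from User Logs via Deep Semi-Supervised Contrastive Clustering",
    "year": "2022",
    "pages": "1836--1853",
    "doi": "10.18653/v1/2022.naacl-main.134",
    "journal": None,
}
print(row_to_bibtex(row))
```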
entry_type: inproceedings
citation_key: brook-weiss-etal-2022-extending
title: Extending Multi-Text Sentence Fusion Resources via Pyramid Annotations
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.135/
author: Brook Weiss, Daniela and Roit, Paul and Ernst, Ori and Dagan, Ido
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1854--1860
abstract:
NLP models that process multiple texts often struggle in recognizing corresponding and salient information that is often differently phrased, and consolidating the redundancies across texts. To facilitate research of such challenges, the sentence fusion task was proposed, yet previous datasets for this task were very limited in their size and scope. In this paper, we revisit and substantially extend previous dataset creation efforts. With careful modifications, relabeling, and employing complementing data sources, we were able to more than triple the size of a notable earlier dataset. Moreover, we show that our extended version uses more representative texts for multi-document tasks and provides a more diverse training set, which substantially improves model performance.
doi: 10.18653/v1/2022.naacl-main.135
__index_level_0__: 23,834
(all other fields: null)
entry_type: inproceedings
citation_key: domhan-etal-2022-devil
title: The Devil is in the Details: On the Pitfalls of Vocabulary Selection in Neural Machine Translation
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.136/
author: Domhan, Tobias and Hasler, Eva and Tran, Ke and Trenous, Sony and Byrne, Bill and Hieber, Felix
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1861--1874
abstract:
Vocabulary selection, or lexical shortlisting, is a well-known technique to improve latency of Neural Machine Translation models by constraining the set of allowed output words during inference. The chosen set is typically determined by separately trained alignment model parameters, independent of the source-sentence context at inference time. While vocabulary selection appears competitive with respect to automatic quality metrics in prior work, we show that it can fail to select the right set of output words, particularly for semantically non-compositional linguistic phenomena such as idiomatic expressions, leading to reduced translation quality as perceived by humans. Trading off latency for quality by increasing the size of the allowed set is often not an option in real-world scenarios. We propose a model of vocabulary selection, integrated into the neural translation model, that predicts the set of allowed output words from contextualized encoder representations. This restores translation quality of an unconstrained system, as measured by human evaluations on WMT newstest2020 and idiomatic expressions, at an inference latency competitive with alignment-based selection using aggressive thresholds, thereby removing the dependency on separately trained alignment models.
doi: 10.18653/v1/2022.naacl-main.136
__index_level_0__: 23,835
(all other fields: null)
entry_type: inproceedings
citation_key: lauscher-etal-2022-multicite
title: {M}ulti{C}ite: Modeling realistic citations requires moving beyond the single-sentence single-label setting
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.137/
author: Lauscher, Anne and Ko, Brandon and Kuehl, Bailey and Johnson, Sophie and Cohan, Arman and Jurgens, David and Lo, Kyle
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1875--1889
abstract:
Citation context analysis (CCA) is an important task in natural language processing that studies how and why scholars discuss each others' work. Despite decades of study, computational methods for CCA have largely relied on overly-simplistic assumptions of how authors cite, which ignore several important phenomena. For instance, scholarly papers often contain rich discussions of cited work that span multiple sentences and express multiple intents concurrently. Yet, recent work in CCA is often approached as a single-sentence, single-label classification task, and thus many datasets used to develop modern computational approaches fail to capture this interesting discourse. To address this research gap, we highlight three understudied phenomena for CCA and release MULTICITE, a new dataset of 12.6K citation contexts from 1.2K computational linguistics papers that fully models these phenomena. Not only is it the largest collection of expert-annotated citation contexts to-date, MULTICITE contains multi-sentence, multi-label citation contexts annotated through-out entire full paper texts. We demonstrate how MULTICITE can enable the development of new computational methods on three important CCA tasks. We release our code and dataset at \url{https://github.com/allenai/multicite}.
doi: 10.18653/v1/2022.naacl-main.137
__index_level_0__: 23,836
(all other fields: null)
entry_type: inproceedings
citation_key: hsu-etal-2022-degree
title: {DEGREE}: A Data-Efficient Generation-Based Event Extraction Model
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.138/
author: Hsu, I-Hung and Huang, Kuan-Hao and Boschee, Elizabeth and Miller, Scott and Natarajan, Prem and Chang, Kai-Wei and Peng, Nanyun
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1890--1908
abstract:
Event extraction requires high-quality expert human annotations, which are usually expensive. Therefore, learning a data-efficient event extraction model that can be trained with only a few labeled examples has become a crucial challenge. In this paper, we focus on low-resource end-to-end event extraction and propose DEGREE, a data-efficient model that formulates event extraction as a conditional generation problem. Given a passage and a manually designed prompt, DEGREE learns to summarize the events mentioned in the passage into a natural sentence that follows a predefined pattern. The final event predictions are then extracted from the generated sentence with a deterministic algorithm. DEGREE has three advantages to learn well with less training data. First, our designed prompts provide semantic guidance for DEGREE to leverage DEGREE and thus better capture the event arguments. Moreover, DEGREE is capable of using additional weakly-supervised information, such as the description of events encoded in the prompts. Finally, DEGREE learns triggers and arguments jointly in an end-to-end manner, which encourages the model to better utilize the shared knowledge and dependencies among them. Our experimental results demonstrate the strong performance of DEGREE for low-resource event extraction.
doi: 10.18653/v1/2022.naacl-main.138
__index_level_0__: 23,837
(all other fields: null)
entry_type: inproceedings
citation_key: chen-etal-2022-bridging
title: Bridging the Gap between Language Models and Cross-Lingual Sequence Labeling
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.139/
author: Chen, Nuo and Shou, Linjun and Gong, Ming and Pei, Jian and Jiang, Daxin
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1909--1923
abstract:
Large-scale cross-lingual pre-trained language models (xPLMs) have shown effective in cross-lingual sequence labeling tasks (xSL), such as machine reading comprehension (xMRC) by transferring knowledge from a high-resource language to low-resource languages. Despite the great success, we draw an empirical observation that there is an training objective gap between pre-training and fine-tuning stages: e.g., mask language modeling objective requires \textit{local} understanding of the masked token and the span-extraction objective requires understanding and reasoning of the \textit{global} input passage/paragraph and question, leading to the discrepancy between pre-training and xMRC. In this paper, we first design a pre-training task tailored for xSL named Cross-lingual Language Informative Span Masking (CLISM) to eliminate the objective gap in a self-supervised manner. Second, we present ContrAstive-Consistency Regularization (CACR), which utilizes contrastive learning to encourage the consistency between representations of input parallel sequences via unsupervised cross-lingual instance-wise training signals during pre-training. By these means, our methods not only bridge the gap between pretrain-finetune, but also enhance PLMs to better capture the alignment between different languages. Extensive experiments prove that our method achieves clearly superior results on multiple xSL benchmarks with limited pre-training data. Our methods also surpass the previous state-of-the-art methods by a large margin in few-shot data setting, where only a few hundred training examples are available.
doi: 10.18653/v1/2022.naacl-main.139
__index_level_0__: 23,838
(all other fields: null)
entry_type: inproceedings
citation_key: hu-etal-2022-hero
title: Hero-Gang Neural Model For Named Entity Recognition
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.140/
author: Hu, Jinpeng and Shen, Yaling and Liu, Yang and Wan, Xiang and Chang, Tsung-Hui
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1924--1936
abstract:
Named entity recognition (NER) is a fundamental and important task in NLP, aiming at identifying named entities (NEs) from free text. Recently, since the multi-head attention mechanism applied in the Transformer model can effectively capture longer contextual information, Transformer-based models have become the mainstream methods and have achieved significant performance in this task. Unfortunately, although these models can capture effective global context information, they are still limited in the local feature and position information extraction, which is critical in NER. In this paper, to address this limitation, we propose a novel Hero-Gang Neural structure (HGN), including the Hero and Gang module, to leverage both global and local information to promote NER. Specifically, the Hero module is composed of a Transformer-based encoder to maintain the advantage of the self-attention mechanism, and the Gang module utilizes a multi-window recurrent module to extract local features and position information under the guidance of the Hero module. Afterward, the proposed multi-window attention effectively combines global information and multiple local features for predicting entity labels. Experimental results on several benchmark datasets demonstrate the effectiveness of our proposed model.
doi: 10.18653/v1/2022.naacl-main.140
__index_level_0__: 23,839
(all other fields: null)
entry_type: inproceedings
citation_key: zhang-etal-2022-mgimn
title: {MGIMN}: Multi-Grained Interactive Matching Network for Few-shot Text Classification
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.141/
author: Zhang, Jianhai and Maimaiti, Mieradilijiang and Xing, Gao and Zheng, Yuanhang and Zhang, Ji
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1937--1946
abstract:
Text classification struggles to generalize to unseen classes with very few labeled text instances per class. In such a few-shot learning (FSL) setting, metric-based meta-learning approaches have shown promising results. Previous studies mainly aim to derive a prototype representation for each class. However, they neglect that it is challenging-yet-unnecessary to construct a compact representation which expresses the entire meaning for each class. They also ignore the importance to capture the inter-dependency between query and the support set for few-shot text classification. To deal with these issues, we propose a meta-learning based method MGIMN which performs instance-wise comparison followed by aggregation to generate class-wise matching vectors instead of prototype learning. The key of instance-wise comparison is the interactive matching within the class-specific context and episode-specific context. Extensive experiments demonstrate that the proposed method significantly outperforms the existing SOTA approaches, under both the standard FSL and generalized FSL settings.
doi: 10.18653/v1/2022.naacl-main.141
__index_level_0__: 23,840
(all other fields: null)
entry_type: inproceedings
citation_key: changpinyo-etal-2022-may
title: All You May Need for {VQA} are Image Captions
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.142/
author: Changpinyo, Soravit and Kukliansy, Doron and Szpektor, Idan and Chen, Xi and Ding, Nan and Soricut, Radu
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1947--1963
abstract:
Visual Question Answering (VQA) has benefited from increasingly sophisticated models, but has not enjoyed the same level of engagement in terms of data creation. In this paper, we propose a method that automatically derives VQA examples at volume, by leveraging the abundance of existing image-caption annotations combined with neural models for textual question generation. We show that the resulting data is of high-quality. VQA models trained on our data improve state-of-the-art zero-shot accuracy by double digits and achieve a level of robustness that lacks in the same model trained on human-annotated VQA data.
doi: 10.18653/v1/2022.naacl-main.142
__index_level_0__: 23,841
(all other fields: null)
entry_type: inproceedings
citation_key: qorib-etal-2022-frustratingly
title: Frustratingly Easy System Combination for Grammatical Error Correction
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.143/
author: Qorib, Muhammad Reza and Na, Seung-Hoon and Ng, Hwee Tou
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1964--1974
abstract:
In this paper, we formulate system combination for grammatical error correction (GEC) as a simple machine learning task: binary classification. We demonstrate that with the right problem formulation, a simple logistic regression algorithm can be highly effective for combining GEC models. Our method successfully increases the F0.5 score from the highest base GEC system by 4.2 points on the CoNLL-2014 test set and 7.2 points on the BEA-2019 test set. Furthermore, our method outperforms the state of the art by 4.0 points on the BEA-2019 test set, 1.2 points on the CoNLL-2014 test set with original annotation, and 3.4 points on the CoNLL-2014 test set with alternative annotation. We also show that our system combination generates better corrections with higher F0.5 scores than the conventional ensemble.
doi: 10.18653/v1/2022.naacl-main.143
__index_level_0__: 23,842
(all other fields: null)
entry_type: inproceedings
citation_key: xiong-etal-2022-simple
title: Simple Local Attentions Remain Competitive for Long-Context Tasks
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.144/
author: Xiong, Wenhan and Oguz, Barlas and Gupta, Anchit and Chen, Xilun and Liskovich, Diana and Levy, Omer and Yih, Scott and Mehdad, Yashar
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1975--1986
abstract:
Many NLP tasks require processing long contexts beyond the length limit of pretrained models. In order to scale these models to longer text sequences, many efficient long-range attention variants have been proposed. Despite the abundance of research along this direction, it is still difficult to gauge the relative effectiveness of these models in practical use cases, e.g., if we apply these models following the pretrain-and-finetune paradigm. In this work, we aim to conduct a thorough analysis of these emerging models with large-scale and controlled experiments. For each attention variant, we pretrain large-size models using the same long-doc corpus and then finetune these models for real-world long-context tasks. Our findings reveal pitfalls of an existing widely-used long-range benchmark and show none of the tested efficient attentions can beat a simple local window attention under standard pretraining paradigms. Further analysis on local attention variants suggests that even the commonly used attention-window overlap is not necessary to achieve good downstream results {---} using disjoint local attentions, we are able to build a simpler and more efficient long-doc QA model that matches the performance of Longformer with half of its pretraining compute.
doi: 10.18653/v1/2022.naacl-main.144
__index_level_0__: 23,843
(all other fields: null)
entry_type: inproceedings
citation_key: chen-etal-2022-even
title: {E}ven the Simplest Baseline Needs Careful Re-investigation: A Case Study on {XML}-{CNN}
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.145/
author: Chen, Si-An and Liu, Jie-jyun and Yang, Tsung-Han and Lin, Hsuan-Tien and Lin, Chih-Jen
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 1987--2000
abstract:
The power and the potential of deep learning models attract many researchers to design advanced and sophisticated architectures. Nevertheless, the progress is sometimes unreal due to various possible reasons. In this work, through an astonishing example we argue that more efforts should be paid to ensure the progress in developing a new deep learning method. For a highly influential multi-label text classification method XML-CNN, we show that the superior performance claimed in the original paper was mainly due to some unbelievable coincidences. We re-examine XML-CNN and make a re-implementation which reveals some contradictory findings to the claims in the original paper. Our study suggests suitable baselines for multi-label text classification tasks and confirms that the progress on a new architecture cannot be confidently justified without a cautious investigation.
doi: 10.18653/v1/2022.naacl-main.145
__index_level_0__: 23,844
(all other fields: null)
entry_type: inproceedings
citation_key: agarwal-etal-2022-multi
title: Multi-Relational Graph Transformer for Automatic Short Answer Grading
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.146/
author: Agarwal, Rajat and Khurana, Varun and Grover, Karish and Mohania, Mukesh and Goyal, Vikram
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2001--2012
abstract:
The recent transition to the online educational domain has increased the need for Automatic Short Answer Grading (ASAG). ASAG automatically evaluates a student`s response against a (given) correct response and thus has been a prevalent semantic matching task. Most existing methods utilize sequential context to compare two sentences and ignore the structural context of the sentence; therefore, these methods may not result in the desired performance. In this paper, we overcome this problem by proposing a Multi-Relational Graph Transformer, MitiGaTe, to prepare token representations considering the structural context. Abstract Meaning Representation (AMR) graph is created by parsing the text response and then segregated into multiple subgraphs, each corresponding to a particular relationship in AMR. A Graph Transformer is used to prepare relation-specific token embeddings within each subgraph, then aggregated to obtain a subgraph representation. Finally, we compare the correct answer and the student response subgraph representations to yield a final score. Experimental results on Mohler`s dataset show that our system outperforms the existing state-of-the-art methods. We have released our implementation \url{https://github.com/kvarun07/asag-gt}, as we believe that our model can be useful for many future applications.
doi: 10.18653/v1/2022.naacl-main.146
__index_level_0__: 23,845
(all other fields: null)
entry_type: inproceedings
citation_key: jin-etal-2022-event
title: Event Schema Induction with Double Graph Autoencoders
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.147/
author: Jin, Xiaomeng and Li, Manling and Ji, Heng
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2013--2025
abstract:
Event schema depicts the typical structure of complex events, serving as a scaffolding to effectively analyze, predict, and possibly intervene in the ongoing events. To induce event schemas from historical events, previous work uses an event-by-event scheme, ignoring the global structure of the entire schema graph. We propose a new event schema induction framework using double graph autoencoders, which captures the global dependencies among nodes in event graphs. Specifically, we first extract the event skeleton from an event graph and design a variational directed acyclic graph (DAG) autoencoder to learn its global structure. Then we further fill in the event arguments for the skeleton, and use another Graph Convolutional Network (GCN) based autoencoder to reconstruct entity-entity relations as well as to detect coreferential entities. By performing this two-stage induction decomposition, the model can avoid reconstructing the entire graph in one step, allowing it to focus on learning global structures between events. Experimental results on three event graph datasets demonstrate that our method achieves state-of-the-art performance and induces high-quality event schemas with global consistency.
doi: 10.18653/v1/2022.naacl-main.147
__index_level_0__: 23,846
(all other fields: null)
entry_type: inproceedings
citation_key: lee-etal-2022-cs1qa
title: {CS}1{QA}: A Dataset for Assisting Code-based Question Answering in an Introductory Programming Course
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.148/
author: Lee, Changyoon and Seonwoo, Yeon and Oh, Alice
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2026--2040
abstract:
We introduce CS1QA, a dataset for code-based question answering in the programming education domain. CS1QA consists of 9,237 question-answer pairs gathered from chat logs in an introductory programming class using Python, and 17,698 unannotated chat data with code. Each question is accompanied with the student`s code, and the portion of the code relevant to answering the question. We carefully design the annotation process to construct CS1QA, and analyze the collected dataset in detail. The tasks for CS1QA are to predict the question type, the relevant code snippet given the question and the code and retrieving an answer from the annotated corpus. Results for the experiments on several baseline models are reported and thoroughly analyzed. The tasks for CS1QA challenge models to understand both the code and natural language. This unique dataset can be used as a benchmark for source code comprehension and question answering in the educational setting.
doi: 10.18653/v1/2022.naacl-main.148
__index_level_0__: 23,847
(all other fields: null)
entry_type: inproceedings
citation_key: kurniawan-etal-2022-unsupervised
title: Unsupervised Cross-Lingual Transfer of Structured Predictors without Source Data
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.149/
author: Kurniawan, Kemal and Frermann, Lea and Schulz, Philip and Cohn, Trevor
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2041--2054
abstract:
Providing technologies to communities or domains where training data is scarce or protected e.g., for privacy reasons, is becoming increasingly important. To that end, we generalise methods for unsupervised transfer from multiple input models for structured prediction. We show that the means of aggregating over the input models is critical, and that multiplying marginal probabilities of substructures to obtain high-probability structures for distant supervision is substantially better than taking the union of such structures over the input models, as done in prior work. Testing on 18 languages, we demonstrate that the method works in a cross-lingual setting, considering both dependency parsing and part-of-speech structured prediction problems. Our analyses show that the proposed method produces less noisy labels for the distant supervision.
doi: 10.18653/v1/2022.naacl-main.149
__index_level_0__: 23,848
(all other fields: null)
entry_type: inproceedings
citation_key: liu-etal-2022-dont
title: Don`t Take It Literally: An Edit-Invariant Sequence Loss for Text Generation
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.150/
author: Liu, Guangyi and Yang, Zichao and Tao, Tianhua and Liang, Xiaodan and Bao, Junwei and Li, Zhen and He, Xiaodong and Cui, Shuguang and Hu, Zhiting
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2055--2078
abstract:
Neural text generation models are typically trained by maximizing log-likelihood with the sequence cross entropy (CE) loss, which encourages an exact token-by-token match between a target sequence with a generated sequence. Such training objective is sub-optimal when the target sequence is not perfect, e.g., when the target sequence is corrupted with noises, or when only weak sequence supervision is available. To address the challenge, we propose a novel Edit-Invariant Sequence Loss (EISL), which computes the matching loss of a target $n$-gram with all $n$-grams in the generated sequence. EISL is designed to be robust to various noises and edits in the target sequences. Moreover, the EISL computation is essentially an approximate convolution operation with target $n$-grams as kernels, which is easy to implement and efficient to compute with existing libraries. To demonstrate the effectiveness of EISL, we conduct experiments on a wide range of tasks, including machine translation with noisy target sequences, unsupervised text style transfer with only weak training signals, and non-autoregressive generation with non-predefined generation order. Experimental results show our method significantly outperforms the common CE loss and other strong baselines on all the tasks. EISL has a simple API that can be used as a drop-in replacement of the CE loss: \url{https://github.com/guangyliu/EISL}.
doi: 10.18653/v1/2022.naacl-main.150
__index_level_0__: 23,849
(all other fields: null)
entry_type: inproceedings
citation_key: wang-etal-2022-modeling
title: Modeling Exemplification in Long-form Question Answering via Retrieval
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.151/
author: Wang, Shufan and Xu, Fangyuan and Thompson, Laure and Choi, Eunsol and Iyyer, Mohit
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2079--2092
abstract:
Exemplification is a process by which writers explain or clarify a concept by providing an example. While common in all forms of writing, exemplification is particularly useful in the task of long-form question answering (LFQA), where a complicated answer can be made more understandable through simple examples. In this paper, we provide the first computational study of exemplification in QA, performing a fine-grained annotation of different types of examples (e.g., hypotheticals, anecdotes) in three corpora. We show that not only do state-of-the-art LFQA models struggle to generate relevant examples, but also that standard evaluation metrics such as ROUGE are insufficient to judge exemplification quality. We propose to treat exemplification as a \textit{retrieval} problem in which a partially-written answer is used to query a large set of human-written examples extracted from a corpus. Our approach allows a reliable ranking-type automatic metrics that correlates well with human evaluation. A human evaluation shows that our model`s retrieved examples are more relevant than examples generated from a state-of-the-art LFQA model.
doi: 10.18653/v1/2022.naacl-main.151
__index_level_0__: 23,850
(all other fields: null)
entry_type: inproceedings
citation_key: yilmaz-toraman-2022-d2u
title: {D}2{U}: Distance-to-Uniform Learning for Out-of-Scope Detection
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.152/
author: Yilmaz, Eyup and Toraman, Cagri
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2093--2108
abstract:
Supervised training with cross-entropy loss implicitly forces models to produce probability distributions that follow a discrete delta distribution. Model predictions in test time are expected to be similar to delta distributions if the classifier determines the class of an input correctly. However, the shape of the predicted probability distribution can become similar to the uniform distribution when the model cannot infer properly. We exploit this observation for detecting out-of-scope (OOS) utterances in conversational systems. Specifically, we propose a zero-shot post-processing step, called Distance-to-Uniform (D2U), exploiting not only the classification confidence score, but the shape of the entire output distribution. We later combine it with a learning procedure that uses D2U for loss calculation in the supervised setup. We conduct experiments using six publicly available datasets. Experimental results show that the performance of OOS detection is improved with our post-processing when there is no OOS training data, as well as with D2U learning procedure when OOS training data is available.
doi: 10.18653/v1/2022.naacl-main.152
__index_level_0__: 23,851
(all other fields: null)
entry_type: inproceedings
citation_key: liu-etal-2022-reference
title: Reference-free Summarization Evaluation via Semantic Correlation and Compression Ratio
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.153/
author: Liu, Yizhu and Jia, Qi and Zhu, Kenny
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2109--2115
abstract:
A document can be summarized in a number of ways. Reference-based evaluation of summarization has been criticized for its inflexibility. The more sufficient the number of abstracts, the more accurate the evaluation results. However, it is difficult to collect sufficient reference summaries. In this paper, we propose a new automatic reference-free evaluation metric that compares semantic distribution between source document and summary by pretrained language models and considers summary compression ratio. The experiments show that this metric is more consistent with human evaluation in terms of coherence, consistency, relevance and fluency.
doi: 10.18653/v1/2022.naacl-main.153
__index_level_0__: 23,852
(all other fields: null)
entry_type: inproceedings
citation_key: tahaei-etal-2022-kroneckerbert
title: {K}ronecker{BERT}: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.154/
author: Tahaei, Marzieh and Charlaix, Ella and Nia, Vahid and Ghodsi, Ali and Rezagholizadeh, Mehdi
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2116--2127
abstract:
The development of over-parameterized pre-trained language models has made a significant contribution toward the success of natural language processing. While over-parameterization of these models is the key to their generalization power, it makes them unsuitable for deployment on low-capacity devices. We push the limits of state-of-the-art Transformer-based pre-trained language model compression using Kronecker decomposition. We present our KroneckerBERT, a compressed version of the BERT{\_}BASE model obtained by compressing the embedding layer and the linear mappings in the multi-head attention, and the feed-forward network modules in the Transformer layers. Our KroneckerBERT is trained via a very efficient two-stage knowledge distillation scheme using far fewer data samples than state-of-the-art models like MobileBERT and TinyBERT. We evaluate the performance of KroneckerBERT on well-known NLP benchmarks. We show that our KroneckerBERT with compression factors of 7.7x and 21x outperforms state-of-the-art compression methods on the GLUE and SQuAD benchmarks. In particular, using only 13{\%} of the teacher model parameters, it retain more than 99{\%} of the accuracy on the majority of GLUE tasks.
doi: 10.18653/v1/2022.naacl-main.154
__index_level_0__: 23,853
(all other fields: null)
entry_type: inproceedings
citation_key: bae-etal-2022-building
title: Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.155/
author: Bae, Sanghwan and Kwak, Donghyun and Kim, Sungdong and Ham, Donghoon and Kang, Soyoung and Lee, Sang-Woo and Park, Woomyoung
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2128--2150
abstract:
Recent open-domain dialogue models have brought numerous breakthroughs. However, building a chat system is not scalable since it often requires a considerable volume of human-human dialogue data, especially when enforcing features such as persona, style, or safety. In this work, we study the challenge of imposing roles on open-domain dialogue systems, with the goal of making the systems maintain consistent roles while conversing naturally with humans. To accomplish this, the system must satisfy a role specification that includes certain conditions on the stated features as well as a system policy on whether or not certain types of utterances are allowed. For this, we propose an efficient data collection framework leveraging in-context few-shot learning of large-scale language models for building role-satisfying dialogue dataset from scratch. We then compare various architectures for open-domain dialogue systems in terms of meeting role specifications while maintaining conversational abilities. Automatic and human evaluations show that our models return few out-of-bounds utterances, keeping competitive performance on general metrics. We release a Korean dialogue dataset we built for further research.
doi: 10.18653/v1/2022.naacl-main.155
__index_level_0__: 23,854
(all other fields: null)
entry_type: inproceedings
citation_key: wang-wang-2022-sentence
title: Sentence-Level Resampling for Named Entity Recognition
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.156/
author: Wang, Xiaochen and Wang, Yue
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2151--2165
abstract:
As a fundamental task in natural language processing, named entity recognition (NER) aims to locate and classify named entities in unstructured text. However, named entities are always the minority among all tokens in the text. This data imbalance problem presents a challenge to machine learning models as their learning objective is usually dominated by the majority of non-entity tokens. To alleviate data imbalance, we propose a set of sentence-level resampling methods where the importance of each training sentence is computed based on its tokens and entities. We study the generalizability of these resampling methods on a wide variety of NER models (CRF, Bi-LSTM, and BERT) across corpora from diverse domains (general, social, and medical texts). Extensive experiments show that the proposed methods improve span-level macro F1-scores of the evaluated NER models on multiple corpora, frequently outperforming sub-sentence-level resampling, data augmentation, and special loss functions such as focal and Dice loss.
doi: 10.18653/v1/2022.naacl-main.156
__index_level_0__: 23,855
(all other fields: null)
entry_type: inproceedings
citation_key: sato-2022-word
title: Word Tour: One-dimensional Word Embeddings via the Traveling Salesman Problem
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.157/
author: Sato, Ryoma
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2166--2172
abstract:
Word embeddings are one of the most fundamental technologies used in natural language processing. Existing word embeddings are high-dimensional and consume considerable computational resources. In this study, we propose WordTour, unsupervised one-dimensional word embeddings. To achieve the challenging goal, we propose a decomposition of the desiderata of word embeddings into two parts, completeness and soundness, and focus on soundness in this paper. Owing to the single dimensionality, WordTour is extremely efficient and provides a minimal means to handle word embeddings. We experimentally confirmed the effectiveness of the proposed method via user study and document classification.
doi: 10.18653/v1/2022.naacl-main.157
__index_level_0__: 23,856
(all other fields: null)
entry_type: inproceedings
citation_key: tan-2022-diversity
title: On the Diversity and Limits of Human Explanations
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.158/
author: Tan, Chenhao
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2173--2188
abstract:
A growing effort in NLP aims to build datasets of human explanations. However, it remains unclear whether these datasets serve their intended goals. This problem is exacerbated by the fact that the term explanation is overloaded and refers to a broad range of notions with different properties and ramifications. Our goal is to provide an overview of the diversity of explanations, discuss human limitations in providing explanations, and ultimately provide implications for collecting and using human explanations in NLP.Inspired by prior work in psychology and cognitive sciences, we group existing human explanations in NLP into three categories: proximal mechanism, evidence, and procedure. These three types differ in nature and have implications for the resultant explanations. For instance, procedure is not considered explanation in psychology and connects with a rich body of work on learning from instructions. The diversity of explanations is further evidenced by proxy questions that are needed for annotators to interpret and answer {\textquotedblleft}why is [input] assigned [label]{\textquotedblright}. Finally, giving explanations may require different, often deeper, understandings than predictions, which casts doubt on whether humans can provide valid explanations in some tasks.
doi: 10.18653/v1/2022.naacl-main.158
__index_level_0__: 23,857
(all other fields: null)
entry_type: inproceedings
citation_key: zhang-etal-2022-locally
title: Locally Aggregated Feature Attribution on Natural Language Model Understanding
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.159/
author: Zhang, Sheng and Wang, Jin and Jiang, Haitao and Song, Rui
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2189--2201
abstract:
With the growing popularity of deep-learning models, model understanding becomes more important. Much effort has been devoted to demystify deep neural networks for better explainability. Some feature attribution methods have shown promising results in computer vision, especially the gradient-based methods where effectively smoothing the gradients with reference data is the key to a robust and faithful result. However, direct application of these gradient-based methods to NLP tasks is not trivial due to the fact that the input consists of discrete tokens and the {\textquotedblleft}reference{\textquotedblright} tokens are not explicitly defined. In this work, we propose Locally Aggregated Feature Attribution (LAFA), a novel gradient-based feature attribution method for NLP models. Instead of relying on obscure reference tokens, it smooths gradients by aggregating similar reference texts derived from language model embeddings. For evaluation purpose, we also design experiments on different NLP tasks including Entity Recognition and Sentiment Analysis on public datasets and key words detection on constructed Amazon catalogue dataset. The superior performance of the proposed method is demonstrated through experiments.
doi: 10.18653/v1/2022.naacl-main.159
__index_level_0__: 23,858
(all other fields: null)
entry_type: inproceedings
citation_key: vakil-amiri-2022-generic
title: Generic and Trend-aware Curriculum Learning for Relation Extraction
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.160/
author: Vakil, Nidhi and Amiri, Hadi
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2202--2213
abstract:
We present a generic and trend-aware curriculum learning approach that effectively integrates textual and structural information in text graphs for relation extraction between entities, which we consider as node pairs in graphs. The proposed model extends existing curriculum learning approaches by incorporating sample-level loss trends to better discriminate easier from harder samples and schedule them for training. The model results in a robust estimation of sample difficulty and shows sizable improvement over the state-of-the-art approaches across several datasets.
doi: 10.18653/v1/2022.naacl-main.160
__index_level_0__: 23,859
(all other fields: null)
entry_type: inproceedings
citation_key: marchisio-etal-2022-systematic
title: On Systematic Style Differences between Unsupervised and Supervised {MT} and an Application for High-Resource Machine Translation
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.161/
author: Marchisio, Kelly and Freitag, Markus and Grangier, David
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2214--2225
abstract:
Modern unsupervised machine translation (MT) systems reach reasonable translation quality under clean and controlled data conditions. As the performance gap between supervised and unsupervised MT narrows, it is interesting to ask whether the different training methods result in systematically different output beyond what is visible via quality metrics like adequacy or BLEU. We compare translations from supervised and unsupervised MT systems of similar quality, finding that unsupervised output is more fluent and more structurally different in comparison to human translation than is supervised MT. We then demonstrate a way to combine the benefits of both methods into a single system which results in improved adequacy and fluency as rated by human evaluators. Our results open the door to interesting discussions about how supervised and unsupervised MT might be different yet mutually-beneficial.
doi: 10.18653/v1/2022.naacl-main.161
__index_level_0__: 23,860
(all other fields: null)
entry_type: inproceedings
citation_key: asai-etal-2022-evidentiality
title: Evidentiality-guided Generation for Knowledge-Intensive {NLP} Tasks
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.162/
author: Asai, Akari and Gardner, Matt and Hajishirzi, Hannaneh
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2226--2243
abstract:
Retrieval-augmented generation models have shown state-of-the-art performance across many knowledge-intensive NLP tasks such as open-domain question answering and fact verification. These models are trained to generate a final output given retrieved passages that can be irrelevant to an input query, leading to learning spurious cues or memorization. This work introduces a method to incorporate \textit{evidentiality} of passages{---}whether a passage contains correct evidence to support the output{---}into training the generator. We introduce a multi-task learning framework to jointly generate the final output and predict the \textit{evidentiality} of each passage. Furthermore, we introduce a new task-agnostic method for obtaining high-quality \textit{silver} evidentiality labels, addressing the issues of gold evidentiality labels being unavailable in most domains. Our experiments on five datasets across three knowledge-intensive tasks show that our new evidentiality-guided generator significantly outperforms its direct counterpart on all of them, and advances the state of the art on three of them. Our analysis shows that multi-task learning and silver evidentiality mining play key roles. Our code is available at \url{https://github.com/AkariAsai/evidentiality_qa}
doi: 10.18653/v1/2022.naacl-main.162
__index_level_0__: 23,861
(all other fields: null)
entry_type: inproceedings
citation_key: kim-etal-2022-modularized
title: Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.163/
author: Kim, Yu Jin and Kwak, Beong-woo and Kim, Youngwook and Amplayo, Reinald Kim and Hwang, Seung-won and Yeo, Jinyoung
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2244--2257
abstract:
Commonsense reasoning systems should be able to generalize to diverse reasoning cases. However, most state-of-the-art approaches depend on expensive data annotations and overfit to a specific benchmark without learning how to perform general semantic reasoning. To overcome these drawbacks, zero-shot QA systems have shown promise as a robust learning scheme by transforming a commonsense knowledge graph (KG) into synthetic QA-form samples for model training. Considering the increasing type of different commonsense KGs, this paper aims to extend the zero-shot transfer learning scenario into multiple-source settings, where different KGs can be utilized synergetically. Towards this goal, we propose to mitigate the loss of knowledge from the interference among the different knowledge sources, by developing a modular variant of the knowledge aggregation as a new zero-shot commonsense reasoning framework. Results on five commonsense reasoning benchmarks demonstrate the efficacy of our framework, improving the performance with multiple KGs.
doi: 10.18653/v1/2022.naacl-main.163
__index_level_0__: 23,862
(all other fields: null)
entry_type: inproceedings
citation_key: zhao-etal-2022-learning-express
title: Learning to Express in Knowledge-Grounded Conversation
editor: Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
month: jul
year: 2022
address: Seattle, United States
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.naacl-main.164/
author: Zhao, Xueliang and Fu, Tingchen and Tao, Chongyang and Wu, Wei and Zhao, Dongyan and Yan, Rui
booktitle: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
pages: 2258--2273
abstract:
Grounding dialogue generation by extra knowledge has shown great potentials towards building a system capable of replying with knowledgeable and engaging responses. Existing studies focus on how to synthesize a response with proper knowledge, yet neglect that the same knowledge could be expressed differently by speakers even under the same context. In this work, we mainly consider two aspects of knowledge expression, namely the structure of the response and style of the content in each part. We therefore introduce two sequential latent variables to represent the structure and the content style respectively. We propose a segmentation-based generation model and optimize the model by a variational approach to discover the underlying pattern of knowledge expression in a response. Evaluation results on two benchmarks indicate that our model can learn the structure style defined by a few examples and generate responses in desired content style.
doi: 10.18653/v1/2022.naacl-main.164
__index_level_0__: 23,863
(all other fields: null)
inproceedings
yu-etal-2022-end
End-to-End {C}hinese Speaker Identification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.165/
Yu, Dian and Zhou, Ben and Yu, Dong
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2274--2285
Speaker identification (SI) in texts aims to identify the speaker(s) for each utterance in texts. Previous studies divide SI into several sub-tasks (e.g., quote extraction, named entity recognition, gender identification, and coreference resolution). However, we are still far from solving these sub-tasks, making SI systems that rely on them seriously suffer from error propagation. End-to-end SI systems, on the other hand, are not limited by individual modules, but suffer from insufficient training data from the existing small-scale datasets. To make large end-to-end models possible, we design a new annotation guideline that regards SI as span extraction from the local context, and we annotate by far the largest SI dataset for Chinese named CSI based on eighteen novels. Viewing SI as a span selection task also introduces the possibility of applying existing strong extractive machine reading comprehension (MRC) baselines. Surprisingly, simply using such a baseline without human-annotated character names and carefully designed rules, we can already achieve performance comparable to or better than those of previous state-of-the-art SI methods on all public SI datasets for Chinese. Furthermore, we show that our dataset can serve as additional training data for existing benchmarks, which leads to further gains (up to 6.5{\%} in accuracy). Finally, using CSI as a clean source, we design an effective self-training paradigm to continuously leverage hundreds of unlabeled novels.
null
null
10.18653/v1/2022.naacl-main.165
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,864
inproceedings
pouran-ben-veyseh-etal-2022-minion
{MINION}: a Large-Scale and Diverse Dataset for Multilingual Event Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.166/
Pouran Ben Veyseh, Amir and Nguyen, Minh Van and Dernoncourt, Franck and Nguyen, Thien
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2286--2299
Event Detection (ED) is the task of identifying and classifying trigger words of event mentions in text. Despite considerable research efforts in recent years for English text, the task of ED in other languages has been significantly less explored. Switching to non-English languages, important research questions for ED include how well existing ED models perform on different languages, how challenging ED is in other languages, and how well ED knowledge and annotation can be transferred across languages. To answer those questions, it is crucial to obtain multilingual ED datasets that provide consistent event annotation for multiple languages. There exist some multilingual ED datasets; however, they tend to cover a handful of languages and mainly focus on popular ones. Many languages are not covered in existing multilingual ED datasets. In addition, the current datasets are often small and not accessible to the public. To overcome those shortcomings, we introduce a new large-scale multilingual dataset for ED (called MINION) that consistently annotates events for 8 different languages; 5 of them have not been supported by existing multilingual datasets. We also perform extensive experiments and analysis to demonstrate the challenges and transferability of ED across languages in MINION, which altogether call for more research effort in this area. We will release the dataset to promote future research on multilingual ED.
null
null
10.18653/v1/2022.naacl-main.166
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,865
inproceedings
webson-pavlick-2022-prompt
Do Prompt-Based Models Really Understand the Meaning of Their Prompts?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.167/
Webson, Albert and Pavlick, Ellie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2300--2344
Recently, a boom of papers has shown extraordinary progress in zero-shot and few-shot learning with various prompt-based models. It is commonly argued that prompts help models to learn faster in the same way that humans learn faster when provided with task instructions expressed in natural language. In this study, we experiment with over 30 prompts manually written for natural language inference (NLI). We find that models can learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively {\textquotedblleft}good{\textquotedblright} prompts. Further, such patterns hold even for models as large as 175 billion parameters (Brown et al., 2020) as well as the recently proposed instruction-tuned models which are trained on hundreds of prompts (Sanh et al., 2021). That is, instruction-tuned models often produce good predictions with irrelevant and misleading prompts even at zero shots. In sum, notwithstanding prompt-based models' impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans' use of task instructions.
null
null
10.18653/v1/2022.naacl-main.167
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,866
inproceedings
wang-etal-2022-gpl
{GPL}: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.168/
Wang, Kexin and Thakur, Nandan and Reimers, Nils and Gurevych, Iryna
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2345--2360
Dense retrieval approaches can overcome the lexical gap and lead to significantly improved search results. However, they require large amounts of training data which is not available for most domains. As shown in previous work (Thakur et al., 2021b), the performance of dense retrievers severely degrades under a domain shift. This limits the usage of dense retrieval approaches to only a few domains with large training datasets. In this paper, we propose the novel unsupervised domain adaptation method \textit{Generative Pseudo Labeling} (GPL), which combines a query generator with pseudo labeling from a cross-encoder. On six representative domain-specialized datasets, we find the proposed GPL can outperform an out-of-the-box state-of-the-art dense retrieval approach by up to 9.3 points nDCG@10. GPL requires less (unlabeled) data from the target domain and is more robust in its training than previous methods. We further investigate the role of six recent pre-training methods in the scenario of domain adaptation for retrieval tasks, where only three could yield improved results. The best approach, TSDAE (Wang et al., 2021) can be combined with GPL, yielding another average improvement of 1.4 points nDCG@10 across the six tasks. The code and the models are available at \url{https://github.com/UKPLab/gpl}.
null
null
10.18653/v1/2022.naacl-main.168
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,867
inproceedings
ye-etal-2022-sparse
Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.169/
Ye, Qinyuan and Khabsa, Madian and Lewis, Mike and Wang, Sinong and Ren, Xiang and Jaech, Aaron
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2361--2375
Distilling state-of-the-art transformer models into lightweight student models is an effective way to reduce computation cost at inference time. The student models are typically compact transformers with fewer parameters, while expensive operations such as self-attention persist. Therefore, the improved inference speed may still be unsatisfactory for real-time or high-volume use cases. In this paper, we aim to further push the limit of inference speed by distilling teacher models into bigger, sparser student models {--} bigger in that they scale up to billions of parameters; sparser in that most of the model parameters are n-gram embeddings. Our experiments on six single-sentence text classification tasks show that these student models retain 97{\%} of the RoBERTa-Large teacher performance on average, and meanwhile achieve up to 600x speed-up on both GPUs and CPUs at inference time. Further investigation reveals that our pipeline is also helpful for sentence-pair classification tasks, and in domain generalization settings.
null
null
10.18653/v1/2022.naacl-main.169
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,868
inproceedings
huber-carenini-2022-towards
Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.170/
Huber, Patrick and Carenini, Giuseppe
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2376--2394
In this paper, we extend the line of BERTology work by focusing on the important, yet less explored, alignment of pre-trained and fine-tuned PLMs with large-scale discourse structures. We propose a novel approach to infer discourse information for arbitrarily long documents. In our experiments, we find that the captured discourse information is local and general, even across a collection of fine-tuning tasks. We compare the inferred discourse trees with supervised, distantly supervised and simple baselines to explore the structural overlap, finding that constituency discourse trees align well with supervised models but contain complementary discourse information. Lastly, we individually explore self-attention matrices to analyze the information redundancy. We find that similar discourse information is consistently captured in the same heads.
null
null
10.18653/v1/2022.naacl-main.170
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,869
inproceedings
xiao-etal-2022-sais
{SAIS}: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.171/
Xiao, Yuxin and Zhang, Zecheng and Mao, Yuning and Yang, Carl and Han, Jiawei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2395--2409
Stepping from sentence-level to document-level, the research on relation extraction (RE) confronts increasing text length and more complicated entity interactions. Consequently, it is more challenging to encode the key information sources{---}relevant contexts and entity types. However, existing methods only implicitly learn to model these critical information sources while being trained for RE. As a result, they suffer the problems of ineffective supervision and uninterpretable model predictions. In contrast, we propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for RE. Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately so as to enhance interpretability. By assessing model uncertainty, SAIS further boosts the performance via evidence-based data augmentation and ensemble inference while reducing the computational cost. Eventually, SAIS delivers state-of-the-art RE results on three benchmarks (DocRED, CDR, and GDA) and outperforms the runner-up by 5.04{\%} relatively in F1 score in evidence retrieval on DocRED.
null
null
10.18653/v1/2022.naacl-main.171
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,870
inproceedings
otani-etal-2022-lite
{LITE}: {I}ntent-based Task Representation Learning Using Weak Supervision
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.172/
Otani, Naoki and Gamon, Michael and Jauhar, Sujay Kumar and Yang, Mei and Malireddi, Sri Raghu and Riva, Oriana
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2410--2424
Users write to-dos as personal notes to themselves, about things they need to complete, remember or organize. To-do texts are usually short and under-specified, which poses a challenge for current text representation models. Yet, understanding and representing their meaning is the first step towards providing intelligent assistance for to-do management. We address this problem by proposing a neural multi-task learning framework, LITE, which extracts representations of English to-do tasks with a multi-head attention mechanism on top of a pre-trained text encoder. To adapt representation models to to-do texts, we collect weak-supervision labels from semantically rich external resources (e.g., dynamic commonsense knowledge bases), following the principle that to-do tasks with similar intents have similar labels. We then train the model on multiple generative/predictive training objectives jointly. We evaluate our representation model on four downstream tasks and show that our approach consistently improves performance over baseline models, achieving error reduction of up to 38.7{\%}.
null
null
10.18653/v1/2022.naacl-main.172
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,871
inproceedings
braun-etal-2022-summary
Does Summary Evaluation Survive Translation to Other Languages?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.173/
Braun, Spencer and Vasilyev, Oleg and Iskender, Neslihan and Bohannon, John
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2425--2435
The creation of a quality summarization dataset is an expensive, time-consuming effort, requiring the production and evaluation of summaries by both trained humans and machines. The returns to such an effort would increase significantly if the dataset could be used in additional languages without repeating human annotations. To investigate how much we can trust machine translation of summarization datasets, we translate the English SummEval dataset to seven languages and compare performances across automatic evaluation measures. We explore equivalence testing as the appropriate statistical paradigm for evaluating correlations between human and automated scoring of summaries. We also consider the effect of translation on the relative performance between measures. We find some potential for dataset reuse in languages similar to the source and along particular dimensions of summary quality. Our code and data can be found at \url{https://github.com/PrimerAI/primer-research/}.
null
null
10.18653/v1/2022.naacl-main.173
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,872
inproceedings
saha-etal-2022-shoulder
A Shoulder to Cry on: Towards A Motivational Virtual Assistant for Assuaging Mental Agony
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.174/
Saha, Tulika and Reddy, Saichethan and Das, Anindya and Saha, Sriparna and Bhattacharyya, Pushpak
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2436--2449
Mental Health Disorders continue plaguing humans worldwide. Aggravating this situation is the severe shortage of qualified and competent mental health professionals (MHPs), which underlines the need for developing Virtual Assistants (VAs) that can \textit{assist} MHPs. The data+ML for automation can come from platforms that allow visiting and posting messages in a peer-to-peer, anonymous manner for sharing their experiences (frequently stigmatized) and seeking support. In this paper, we propose a VA that can act as the first point of contact and comfort for mental health patients. We curate a dataset, Motivational VA: MotiVAte, comprising 7k dyadic conversations collected from a peer-to-peer support platform. The system employs two mechanisms: (i) Mental Illness Classification: an attention based BERT classifier that outputs the mental disorder category out of the 4 categories, viz., Major Depressive Disorder (MDD), Anxiety, Obsessive Compulsive Disorder (OCD) and Post-traumatic Stress Disorder (PTSD), based on the input ongoing dialog between the support seeker and the VA; and (ii) Mental Illness Conditioned Motivational Dialogue Generation (MI-MDG): a sentiment driven Reinforcement Learning (RL) based motivational response generator. The empirical evaluation demonstrates the system capability by way of outperforming several baselines.
null
null
10.18653/v1/2022.naacl-main.174
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,873
inproceedings
bao-etal-2022-suenes
{S}ue{N}es: A Weakly Supervised Approach to Evaluating Single-Document Summarization via Negative Sampling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.175/
Bao, Forrest and Luo, Ge and Li, Hebi and Qiu, Minghui and Yang, Yinfei and He, Youbiao and Chen, Cen
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2450--2458
Canonical automatic summary evaluation metrics, such as ROUGE, focus on lexical similarity which cannot well capture semantics nor linguistic quality and require a reference summary which is costly to obtain. Recently, there have been a growing number of efforts to alleviate either or both of the two drawbacks. In this paper, we present a proof-of-concept study of a weakly supervised summary evaluation approach without the presence of reference summaries. Massive data in existing summarization datasets are transformed for training by pairing documents with corrupted reference summaries. In cross-domain tests, our strategy outperforms baselines with promising improvements, and shows a great advantage in gauging linguistic qualities over all metrics.
null
null
10.18653/v1/2022.naacl-main.175
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,874
inproceedings
berend-2022-combating
Combating the Curse of Multilinguality in Cross-Lingual {WSD} by Aligning Sparse Contextualized Word Representations
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.176/
Berend, G{\'a}bor
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2459--2471
In this paper, we advocate for using large pre-trained monolingual language models in cross-lingual zero-shot word sense disambiguation (WSD) coupled with a contextualized mapping mechanism. We also report rigorous experiments that illustrate the effectiveness of employing sparse contextualized word representations obtained via a dictionary learning procedure. Our experimental results demonstrate that the above modifications yield a significant improvement of nearly 6.5 points in the average F-score (from 62.0 to 68.5) over a typologically diverse set of 17 target languages. We release our source code for replicating our experiments at \url{https://github.com/begab/sparsity_makes_sense}.
null
null
10.18653/v1/2022.naacl-main.176
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,875
inproceedings
pal-heafield-2022-cheat
Cheat Codes to Quantify Missing Source Information in Neural Machine Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.177/
Pal, Proyag and Heafield, Kenneth
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2472--2477
This paper describes a method to quantify the amount of information $H(t|s)$ added by the target sentence $t$ that is not present in the source $s$ in a neural machine translation system. We do this by providing the model the target sentence in a highly compressed form (a {\textquotedblleft}cheat code{\textquotedblright}), and exploring the effect of the size of the cheat code. We find that the model is able to capture extra information from just a single float representation of the target and nearly reproduces the target with two 32-bit floats per target token.
null
null
10.18653/v1/2022.naacl-main.177
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,876
inproceedings
hauer-kondrak-2022-wic
{W}i{C} = {TSV} = {WSD}: On the Equivalence of Three Semantic Tasks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.178/
Hauer, Bradley and Kondrak, Grzegorz
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2478--2486
The Word-in-Context (WiC) task has attracted considerable attention in the NLP community, as demonstrated by the popularity of the recent MCL-WiC SemEval shared task. Systems and lexical resources from word sense disambiguation (WSD) are often used for the WiC task and WiC dataset construction. In this paper, we establish the exact relationship between WiC and WSD, as well as the related task of target sense verification (TSV). Building upon a novel hypothesis on the equivalence of sense and meaning distinctions, we demonstrate through the application of tools from theoretical computer science that these three semantic classification problems can be pairwise reduced to each other, and therefore are equivalent. The results of experiments that involve systems and datasets for both WiC and WSD provide strong empirical evidence that our problem reductions work in practice.
null
null
10.18653/v1/2022.naacl-main.178
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,877
inproceedings
kaushal-mahowald-2022-tokens
What do tokens know about their characters and how do they know it?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.179/
Kaushal, Ayush and Mahowald, Kyle
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2487--2507
Pre-trained language models (PLMs) that use subword tokenization schemes can succeed at a variety of language tasks that require character-level information, despite lacking explicit access to the character composition of tokens. Here, studying a range of models (e.g., GPT- J, BERT, RoBERTa, GloVe), we probe what word pieces encode about character-level information by training classifiers to predict the presence or absence of a particular alphabetical character in a token, based on its embedding (e.g., probing whether the model embedding for {\textquotedblleft}cat{\textquotedblright} encodes that it contains the character {\textquotedblleft}a{\textquotedblright}). We find that these models robustly encode character-level information and, in general, larger models perform better at the task. We show that these results generalize to characters from non-Latin alphabets (Arabic, Devanagari, and Cyrillic). Then, through a series of experiments and analyses, we investigate the mechanisms through which PLMs acquire English-language character information during training and argue that this knowledge is acquired through multiple phenomena, including a systematic relationship between particular characters and particular parts of speech, as well as natural variability in the tokenization of related strings.
null
null
10.18653/v1/2022.naacl-main.179
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,878
inproceedings
fabbri-etal-2022-answersumm
{A}nswer{S}umm: A Manually-Curated Dataset and Pipeline for Answer Summarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.180/
Fabbri, Alexander and Wu, Xiaojian and Iyer, Srini and Li, Haoran and Diab, Mona
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2508--2520
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions. Each question thread can receive a large number of answers with different perspectives. One goal of answer summarization is to produce a summary that reflects the range of answer perspectives. A major obstacle for this task is the absence of a dataset to provide supervision for producing such summaries. Recent works propose heuristics to create such data, but these are often noisy and do not cover all answer perspectives present. This work introduces a novel dataset of 4,631 CQA threads for answer summarization curated by professional linguists. Our pipeline gathers annotations for all subtasks of answer summarization, including relevant answer sentence selection, grouping these sentences based on perspectives, summarizing each perspective, and producing an overall summary. We analyze and benchmark state-of-the-art models on these subtasks and introduce a novel unsupervised approach for multi-perspective data augmentation that boosts summarization performance according to automatic evaluation. Finally, we propose reinforcement learning rewards to improve factual consistency and answer coverage and analyze areas for improvement.
null
null
10.18653/v1/2022.naacl-main.180
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,879
inproceedings
di-liello-etal-2022-paragraph
Paragraph-based Transformer Pre-training for Multi-Sentence Inference
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.181/
Di Liello, Luca and Garg, Siddhant and Soldaini, Luca and Moschitti, Alessandro
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2521--2531
Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling dependencies across multiple candidate sentences jointly. In this paper, we first show that popular pre-trained transformers perform poorly when used for fine-tuning on multi-candidate inference tasks. We then propose a new pre-training objective that models the paragraph-level semantics across multiple input sentences. Our evaluation on three AS2 and one fact verification datasets demonstrates the superiority of our pre-training technique over the traditional ones for transformers used as joint models for multi-candidate inference tasks, as well as when used as cross-encoders for sentence-pair formulations of these tasks.
null
null
10.18653/v1/2022.naacl-main.181
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,880
inproceedings
nouri-2022-text
Text Style Transfer via Optimal Transport
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.182/
Nouri, Nasim
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2532--2541
Text style transfer (TST) is a well-known task whose goal is to convert the style of the text (e.g., from formal to informal) while preserving its content. Recently, it has been shown that both syntactic and semantic similarities between the source and the converted text are important for TST. However, the interaction between these two concepts has not been modeled. In this work, we propose a novel method based on Optimal Transport for TST to simultaneously incorporate syntactic and semantic information into similarity computation between the source and the converted text. We evaluate the proposed method in both supervised and unsupervised settings. Our analysis reveals the superiority of the proposed model in both settings.
null
null
10.18653/v1/2022.naacl-main.182
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,881
inproceedings
padmakumar-etal-2022-exploring
Exploring the Role of Task Transferability in Large-Scale Multi-Task Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.183/
Padmakumar, Vishakh and Lausen, Leonard and Ballesteros, Miguel and Zha, Sheng and He, He and Karypis, George
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2542--2550
Recent work has found that multi-task training with a large number of diverse tasks can uniformly improve downstream performance on unseen target tasks. In contrast, literature on task transferability has established that the choice of intermediate tasks can heavily affect downstream task performance. In this work, we aim to disentangle the effect of scale and relatedness of tasks in multi-task representation learning. We find that, on average, increasing the scale of multi-task learning, in terms of the number of tasks, indeed results in better learned representations than smaller multi-task setups. However, if the target tasks are known ahead of time, then training on a smaller set of related tasks is competitive to the large-scale multi-task training at a reduced computational cost.
null
null
10.18653/v1/2022.naacl-main.183
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,882
inproceedings
shapira-etal-2022-interactive
Interactive Query-Assisted Summarization via Deep Reinforcement Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.184/
Shapira, Ori and Pasunuru, Ramakanth and Bansal, Mohit and Dagan, Ido and Amsterdamer, Yael
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2551--2568
Interactive summarization is a task that facilitates user-guided exploration of information within a document set. While one would like to employ state of the art neural models to improve the quality of interactive summarization, many such technologies cannot ingest the full document set or cannot operate at sufficient speed for interactivity. To that end, we propose two novel deep reinforcement learning models for the task that address, respectively, the subtask of summarizing salient information that adheres to user queries, and the subtask of listing suggested queries to assist users throughout their exploration. In particular, our models allow encoding the interactive session state and history to refrain from redundancy. Together, these models compose a state of the art solution that addresses all of the task requirements. We compare our solution to a recent interactive summarization system, and show through an experimental study involving real users that our models are able to improve informativeness while preserving positive user experience.
null
null
10.18653/v1/2022.naacl-main.184
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,883
inproceedings
nouri-2022-data
Data Augmentation with Dual Training for Offensive Span Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.185/
Nouri, Nasim
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2569--2575
Recognizing offensive text is an important requirement for every content management system, especially for social networks. While the majority of the prior work formulate this problem as text classification, i.e., if a text excerpt is offensive or not, in this work we propose a novel model for offensive span detection (OSD), whose goal is to identify the spans responsible for the offensive tone of the text. One of the challenges to train a model for this novel setting is the lack of enough training data. To address this limitation, in this work we propose a novel method in which the large-scale pre-trained language model GPT-2 is employed to generate synthetic training data for OSD. In particular, we propose to train the GPT-2 model in a dual-training setting using the REINFORCE algorithm to generate in-domain, natural and diverse training samples. Extensive experiments on the benchmark dataset for OSD reveal the effectiveness of the proposed method.
null
null
10.18653/v1/2022.naacl-main.185
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,884
inproceedings
passban-etal-2022-training
Training Mixed-Domain Translation Models via Federated Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.186/
Passban, Peyman and Roosta, Tanya and Gupta, Rahul and Chadha, Ankit and Chung, Clement
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2576--2586
Training mixed-domain translation models is a complex task that demands tailored architectures and costly data preparation techniques. In this work, we leverage federated learning (FL) in order to tackle the problem. Our investigation demonstrates that with slight modifications in the training process, neural machine translation (NMT) engines can be easily adapted when an FL-based aggregation is applied to fuse different domains. Experimental results also show that engines built via FL are able to perform on par with state-of-the-art baselines that rely on centralized training techniques. We evaluate our hypothesis in the presence of five datasets with different sizes, from different domains, to translate from German into English and discuss how FL and NMT can mutually benefit from each other. In addition to providing benchmarking results on the union of FL and NMT, we also propose a novel technique to dynamically control the communication bandwidth by selecting impactful parameters during FL updates. This is a significant achievement considering the large size of NMT engines that need to be exchanged between FL parties.
null
null
10.18653/v1/2022.naacl-main.186
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,885
inproceedings
fabbri-etal-2022-qafacteval
{QAF}act{E}val: Improved {QA}-Based Factual Consistency Evaluation for Summarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.187/
Fabbri, Alexander and Wu, Chien-Sheng and Liu, Wenhao and Xiong, Caiming
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2587--2601
Factual consistency is an essential quality of text summarization models in practical settings. Existing work in evaluating this dimension can be broadly categorized into two lines of research, entailment-based and question answering (QA)-based metrics, and different experimental setups often lead to contrasting conclusions as to which paradigm performs the best. In this work, we conduct an extensive comparison of entailment and QA-based metrics, demonstrating that carefully choosing the components of a QA-based metric, especially question generation and answerability classification, is critical to performance. Building on those insights, we propose an optimized metric, which we call QAFactEval, that leads to a 14{\%} average improvement over previous QA-based metrics on the SummaC factual consistency benchmark, and also outperforms the best-performing entailment-based metric. Moreover, we find that QA-based and entailment-based metrics can offer complementary signals and be combined into a single metric for a further performance boost.
null
null
10.18653/v1/2022.naacl-main.187
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,886
inproceedings
orgad-etal-2022-gender
How Gender Debiasing Affects Internal Model Representations, and Why It Matters
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.188/
Orgad, Hadas and Goldfarb-Tarrant, Seraphina and Belinkov, Yonatan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2602--2628
Common studies of gender bias in NLP focus either on extrinsic bias measured by model performance on a downstream task or on intrinsic bias found in models' internal representations. However, the relationship between extrinsic and intrinsic bias is relatively unknown. In this work, we illuminate this relationship by measuring both quantities together: we debias a model during downstream fine-tuning, which reduces extrinsic bias, and measure the effect on intrinsic bias, which is operationalized as bias extractability with information-theoretic probing. Through experiments on two tasks and multiple bias metrics, we show that our intrinsic bias metric is a better indicator of debiasing than (a contextual adaptation of) the standard WEAT metric, and can also expose cases of superficial debiasing. Our framework provides a comprehensive perspective on bias in NLP models, which can be applied to deploy NLP systems in a more informed manner. Our code and model checkpoints are publicly available.
null
null
10.18653/v1/2022.naacl-main.188
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,887
inproceedings
liu-etal-2022-structured
A Structured Span Selector
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.189/
Liu, Tianyu and Jiang, Yuchen and Cotterell, Ryan and Sachan, Mrinmaya
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2629--2641
Many natural language processing tasks, e.g., coreference resolution and semantic role labeling, require selecting text spans and making decisions about them. A typical approach to such tasks is to score all possible spans and greedily select spans for task-specific downstream processing. This approach, however, does not incorporate any inductive bias about what sort of spans ought to be selected, e.g., that selected spans tend to be syntactic constituents. In this paper, we propose a novel grammar-based structured span selection model which learns to make use of the partial span-level annotation provided for such problems. Compared to previous approaches, our approach gets rid of the heuristic greedy span selection scheme, allowing us to model the downstream task on an optimal set of spans. We evaluate our model on two popular span prediction tasks: coreference resolution and semantic role labeling; and show improvements on both.
null
null
10.18653/v1/2022.naacl-main.189
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,888
inproceedings
huang-etal-2022-unified
Unified Semantic Typing with Meaningful Label Inference
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.190/
Huang, James Y. and Li, Bangzheng and Xu, Jiashu and Chen, Muhao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2642--2654
Semantic typing aims at classifying tokens or spans of interest in a textual context into semantic categories such as relations, entity types, and event types. The inferred labels of semantic categories meaningfully interpret how machines understand components of text. In this paper, we present UniST, a unified framework for semantic typing that captures label semantics by projecting both inputs and labels into a joint semantic embedding space. To formulate different lexical and relational semantic typing tasks as a unified task, we incorporate task descriptions to be jointly encoded with the input, allowing UniST to be adapted to different tasks without introducing task-specific model components. UniST optimizes a margin ranking loss such that the semantic relatedness of the input and labels is reflected from their embedding similarity. Our experiments demonstrate that UniST achieves strong performance across three semantic typing tasks: entity typing, relation classification and event typing. Meanwhile, UniST effectively transfers semantic knowledge of labels and substantially improves generalizability on inferring rarely seen and unseen types. In addition, multiple semantic typing tasks can be jointly trained within the unified framework, leading to a single compact multi-tasking model that performs comparably to dedicated single-task models, while offering even better transferability.
null
null
10.18653/v1/2022.naacl-main.190
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,889
inproceedings
rubin-etal-2022-learning
Learning To Retrieve Prompts for In-Context Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.191/
Rubin, Ohad and Herzig, Jonathan and Berant, Jonathan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2655--2671
In-context learning is a recent paradigm in natural language understanding, where a large pre-trained language model (LM) observes a test instance and a few training examples as its input, and directly decodes the output without any update to its parameters. However, performance has been shown to strongly depend on the selected training examples (termed prompts). In this work, we propose an efficient method for retrieving prompts for in-context learning using annotated data and an LM. Given an input-output pair, we estimate the probability of the output given the input and a candidate training example as the prompt, and label training examples as positive or negative based on this probability. We then train an efficient dense retriever from this data, which is used to retrieve training examples as prompts at test time. We evaluate our approach on three sequence-to-sequence tasks where language utterances are mapped to meaning representations, and find that it substantially outperforms prior work and multiple baselines across the board.
null
null
10.18653/v1/2022.naacl-main.191
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,890
inproceedings
balkir-etal-2022-necessity
Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.192/
Balkir, Esma and Nejadgholi, Isar and Fraser, Kathleen and Kiritchenko, Svetlana
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2672--2686
We present a novel feature attribution method for explaining text classifiers, and analyze it in the context of hate speech detection. Although feature attribution models usually provide a single importance score for each token, we instead provide two complementary and theoretically-grounded scores {--} necessity and sufficiency {--} resulting in more informative explanations. We propose a transparent method that calculates these values by generating explicit perturbations of the input text, allowing the importance scores themselves to be explainable. We employ our method to explain the predictions of different hate speech detection models on the same set of curated examples from a test suite, and show that different values of necessity and sufficiency for identity terms correspond to different kinds of false positive errors, exposing sources of classifier bias against marginalized groups.
null
null
10.18653/v1/2022.naacl-main.192
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,891
inproceedings
ram-etal-2022-learning
Learning to Retrieve Passages without Supervision
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.193/
Ram, Ori and Shachaf, Gal and Levy, Omer and Berant, Jonathan and Globerson, Amir
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2687--2700
Dense retrievers for open-domain question answering (ODQA) have been shown to achieve impressive performance by training on large datasets of question-passage pairs. In this work we ask whether this dependence on labeled data can be reduced via unsupervised pretraining that is geared towards ODQA. We show this is in fact possible, via a novel pretraining scheme designed for retrieval. Our {\textquotedblleft}recurring span retrieval{\textquotedblright} approach uses recurring spans across passages in a document to create pseudo examples for contrastive learning. Our pretraining scheme directly controls for term overlap across pseudo queries and relevant passages, thus allowing to model both lexical and semantic relations between them. The resulting model, named Spider, performs surprisingly well without any labeled training examples on a wide range of ODQA datasets. Specifically, it significantly outperforms all other pretrained baselines in a zero-shot setting, and is competitive with BM25, a strong sparse baseline. Moreover, a hybrid retriever over Spider and BM25 improves over both, and is often competitive with DPR models, which are trained on tens of thousands of examples. Last, notable gains are observed when using Spider as an initialization for supervised training.
null
null
10.18653/v1/2022.naacl-main.193
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,892
inproceedings
glass-etal-2022-re2g
{R}e2{G}: Retrieve, Rerank, Generate
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.194/
Glass, Michael and Rossiello, Gaetano and Chowdhury, Md Faisal Mahbub and Naik, Ankita and Cai, Pengshan and Gliozzo, Alfio
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2701--2715
As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.
null
null
10.18653/v1/2022.naacl-main.194
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,893
inproceedings
rusert-srinivasan-2022-dont
Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.195/
Rusert, Jonathan and Srinivasan, Padmini
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2716--2725
Deep learning (DL) is being used extensively for text classification. However, researchers have demonstrated the vulnerability of such classifiers to adversarial attacks. Attackers modify the text in a way which misleads the classifier while keeping the original meaning close to intact. State-of-the-art (SOTA) attack algorithms follow the general principle of making minimal changes to the text so as to not jeopardize semantics. Taking advantage of this we propose a novel and intuitive defense strategy called Sample Shielding. It is attacker and classifier agnostic, does not require any reconfiguration of the classifier or external resources and is simple to implement. Essentially, we sample subsets of the input text, classify them and summarize these into a final decision. We shield three popular DL text classifiers with Sample Shielding, test their resilience against four SOTA attackers across three datasets in a realistic threat setting. Even when given the advantage of knowing about our shielding strategy the adversary's attack success rate is {\ensuremath{<}}=10{\%} with only one exception and often {\ensuremath{<}} 5{\%}. Additionally, Sample Shielding maintains near original accuracy when applied to original texts. Crucially, we show that the {\textquoteleft}make minimal changes' approach of SOTA attackers leads to critical vulnerabilities that can be defended against with an intuitive sampling strategy.
null
null
10.18653/v1/2022.naacl-main.195
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,894
inproceedings
sharma-etal-2022-federated
Federated Learning with Noisy User Feedback
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.196/
Sharma, Rahul and Ramakrishna, Anil and MacLaughlin, Ansel and Rumshisky, Anna and Majmudar, Jimit and Chung, Clement and Avestimehr, Salman and Gupta, Rahul
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2726--2739
Machine Learning (ML) systems are getting increasingly popular, and drive more and more applications and services in our daily life. This has led to growing concerns over user privacy, since human interaction data typically needs to be transmitted to the cloud in order to train and improve such systems. Federated learning (FL) has recently emerged as a method for training ML models on edge devices using sensitive user data and is seen as a way to mitigate concerns over data privacy. However, since ML models are most commonly trained with label supervision, we need a way to extract labels on edge to make FL viable. In this work, we propose a strategy for training FL models using positive and negative user feedback. We also design a novel framework to study different noise patterns in user feedback, and explore how well standard noise-robust objectives can help mitigate this noise when training models in a federated setting. We evaluate our proposed training setup through detailed experiments on two text classification datasets and analyze the effects of varying levels of user reliability and feedback noise on model performance. We show that our method improves substantially over a self-training baseline, achieving performance closer to models trained with full supervision.
null
null
10.18653/v1/2022.naacl-main.196
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,895
inproceedings
kaneko-etal-2022-gender
Gender Bias in Masked Language Models for Multiple Languages
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.197/
Kaneko, Masahiro and Imankulova, Aizhan and Bollegala, Danushka and Okazaki, Naoaki
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2740--2750
Masked Language Models (MLMs) pre-trained by predicting masked tokens on large corpora have been used successfully in natural language processing tasks for a variety of languages. Unfortunately, it was reported that MLMs also learn discriminative biases regarding attributes such as gender and race. Because most studies have focused on MLMs in English, the bias of MLMs in other languages has rarely been investigated. Manual annotation of evaluation data for languages other than English has been challenging due to the cost and difficulty in recruiting annotators. Moreover, the existing bias evaluation methods require stereotypical sentence pairs consisting of the same context with attribute words (e.g. He/She is a nurse). We propose the Multilingual Bias Evaluation (MBE) score to evaluate bias in various languages using only English attribute word lists and parallel corpora between the target language and English, without requiring manually annotated data. We evaluated MLMs in eight languages using the MBE and confirmed that gender-related biases are encoded in MLMs for all those languages. We manually created datasets for gender bias in Japanese and Russian to evaluate the validity of the MBE. The results show that the bias scores reported by the MBE significantly correlate with those computed from the above manually created datasets and the existing English datasets for gender bias.
null
null
10.18653/v1/2022.naacl-main.197
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,896
inproceedings
toledo-ronen-etal-2022-multi
Multi-Domain Targeted Sentiment Analysis
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.198/
Toledo-Ronen, Orith and Orbach, Matan and Katz, Yoav and Slonim, Noam
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2751--2762
Targeted Sentiment Analysis (TSA) is a central task for generating insights from consumer reviews. Such content is extremely diverse, with sites like Amazon or Yelp containing reviews on products and businesses from many different domains. A real-world TSA system should gracefully handle that diversity. This can be achieved by a multi-domain model {--} one that is robust to the domain of the analyzed texts, and performs well on various domains. To address this scenario, we present a multi-domain TSA system based on augmenting a given training set with diverse weak labels from assorted domains. These are obtained through self-training on the Yelp reviews corpus. Extensive experiments with our approach on three evaluation datasets across different domains demonstrate the effectiveness of our solution. We further analyze how restrictions imposed on the available labeled data affect the performance, and compare the proposed method to the costly alternative of manually gathering diverse TSA labeled data. Our results and analysis show that our approach is a promising step towards a practical domain-robust TSA system.
null
null
10.18653/v1/2022.naacl-main.198
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,897
inproceedings
utama-etal-2022-falsesum
Falsesum: Generating Document-level {NLI} Examples for Recognizing Factual Inconsistency in Summarization
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.199/
Utama, Prasetya and Bambrick, Joshua and Moosavi, Nafise and Gurevych, Iryna
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2763--2776
Neural abstractive summarization models are prone to generate summaries that are factually inconsistent with their source documents. Previous work has introduced the task of recognizing such factual inconsistency as a downstream application of natural language inference (NLI). However, state-of-the-art NLI models perform poorly in this context due to their inability to generalize to the target task. In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples. We introduce Falsesum, a data generation pipeline leveraging a controllable text generation model to perturb human-annotated summaries, introducing varying types of factual inconsistencies. Unlike previously introduced document-level NLI datasets, our generated dataset contains examples that are diverse and inconsistent yet plausible. We show that models trained on a Falsesum-augmented NLI dataset improve the state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization.
null
null
10.18653/v1/2022.naacl-main.199
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,898
inproceedings
fetahu-etal-2022-dynamic
Dynamic Gazetteer Integration in Multilingual Models for Cross-Lingual and Cross-Domain Named Entity Recognition
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.200/
Fetahu, Besnik and Fang, Anjie and Rokhlenko, Oleg and Malmasi, Shervin
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2777--2790
Named entity recognition (NER) in a real-world setting remains challenging and is impacted by factors like text genre, corpus quality, and data availability. NER models trained on CoNLL do not transfer well to other domains, even within the same language. This is especially the case for multi-lingual models when applied to low-resource languages, and is mainly due to missing entity information. We propose an approach that with limited effort and data, addresses the NER knowledge gap across languages and domains. Our novel approach uses a token-level gating layer to augment pre-trained multilingual transformers with gazetteers containing named entities (NE) from a target language or domain. This approach provides the flexibility to jointly integrate both textual and gazetteer information dynamically: entity knowledge from gazetteers is used only when a token`s textual representation is insufficient for the NER task. Evaluation on several languages and domains demonstrates: (i) a high mismatch of reported NER performance on CoNLL vs. domain specific datasets, (ii) gazetteers significantly improve NER performance across languages and domains, and (iii) gazetteers can be flexibly incorporated to guide knowledge transfer. On cross-lingual transfer we achieve an improvement over the baseline with F1=+17.6{\%}, and with F1=+21.3{\%} for cross-domain transfer.
null
null
10.18653/v1/2022.naacl-main.200
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,899
inproceedings
min-etal-2022-metaicl
{M}eta{ICL}: Learning to Learn In Context
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.201/
Min, Sewon and Lewis, Mike and Zettlemoyer, Luke and Hajishirzi, Hannaneh
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2791--2809
We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks. This meta-training enables the model to more effectively learn a new task in context at test time, by simply conditioning on a few training examples with no parameter updates or task-specific templates. We experiment on a large, diverse collection of tasks consisting of 142 NLP datasets including classification, question answering, natural language inference, paraphrase detection and more, across seven different meta-training/target splits. MetaICL outperforms a range of baselines including in-context learning without meta-training and multi-task learning followed by zero-shot transfer. We find that the gains are particularly significant for target tasks that have domain shifts from the meta-training tasks, and that using a diverse set of the meta-training tasks is key to improvements. We also show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task training data, and outperforms much bigger models with nearly 8x as many parameters.
null
null
10.18653/v1/2022.naacl-main.201
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,900
inproceedings
li-etal-2022-enhancing-knowledge
Enhancing Knowledge Selection for Grounded Dialogues via Document Semantic Graphs
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.202/
Li, Sha and Namazifar, Mahdi and Jin, Di and Bansal, Mohit and Ji, Heng and Liu, Yang and Hakkani-Tur, Dilek
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2810--2823
Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging. Existing models treat knowledge selection as a sentence ranking or classification problem where each sentence is handled individually, ignoring the internal semantic connection between sentences. In this work, we propose to automatically convert the background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs. Our document semantic graphs preserve sentence-level information through the use of sentence nodes and provide concept connections between sentences. We apply multi-task learning to perform sentence-level knowledge selection and concept-level knowledge selection, showing that it improves sentence-level selection. Our experiments show that our semantic graph-based knowledge selection improves over sentence selection baselines for both the knowledge selection task and the end-to-end response generation task on HollE and improves generalization on unseen topics in WoW.
null
null
10.18653/v1/2022.naacl-main.202
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,901
inproceedings
alnegheimish-etal-2022-using
Using Natural Sentence Prompts for Understanding Biases in Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.203/
Alnegheimish, Sarah and Guo, Alicia and Sun, Yi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2824--2830
Evaluation of biases in language models is often limited to synthetically generated datasets. This dependence traces back to the need for prompt-style datasets to trigger specific behaviors of language models. In this paper, we address this gap by creating a prompt dataset with respect to occupations collected from real-world natural sentences present in Wikipedia. We aim to understand the differences between using template-based prompts and natural sentence prompts when studying gender-occupation biases in language models. We find bias evaluations are very sensitive to the design choices of template prompts, and we propose using natural sentence prompts as a way of more systematically using real-world sentences to move away from design decisions that may bias the results.
null
null
10.18653/v1/2022.naacl-main.203
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,902
inproceedings
mehrabi-etal-2022-robust
Robust Conversational Agents against Imperceptible Toxicity Triggers
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.204/
Mehrabi, Ninareh and Beirami, Ahmad and Morstatter, Fred and Galstyan, Aram
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2831--2847
Warning: this paper contains content that may be offensive or upsetting. Recent research in Natural Language Processing (NLP) has advanced the development of various toxicity detection models with the intention of identifying and mitigating toxic language from existing systems. Despite the abundance of research in this area, less attention has been given to adversarial attacks that force the system to generate toxic language and the defense against them. Existing work to generate such attacks is either based on human-generated attacks, which are costly and not scalable, or, in the case of automatic attacks, the attack vector does not conform to human-like language, which can be detected using a language model loss. In this work, we propose attacks against conversational agents that are imperceptible, i.e., they fit the conversation in terms of coherency, relevancy, and fluency, while they are effective and scalable, i.e., they can automatically trigger the system into generating toxic language. We then propose a defense mechanism against such attacks which not only mitigates the attack but also attempts to maintain the conversational flow. Through automatic and human evaluations, we show that our defense is effective at avoiding toxic language generation even against imperceptible toxicity triggers while the generated language fits the conversation in terms of coherency and relevancy. Lastly, we establish the generalizability of such a defense mechanism on language generation models beyond conversational agents.
null
null
10.18653/v1/2022.naacl-main.204
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,903
inproceedings
shi-etal-2022-selective
Selective Differential Privacy for Language Modeling
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.205/
Shi, Weiyan and Cui, Aiqi and Li, Evan and Jia, Ruoxi and Yu, Zhou
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2848--2859
With the increasing applications of language models, it has become crucial to protect these models from leaking private information. Previous work has attempted to tackle this challenge by training RNN-based language models with differential privacy guarantees. However, applying classical differential privacy to language models leads to poor model performance as the underlying privacy notion is over-pessimistic and provides undifferentiated protection for all tokens in the data. Given that the private information in natural language is sparse (for example, the bulk of an email might not carry personally identifiable information), we propose a new privacy notion, selective differential privacy, to provide rigorous privacy guarantees on the sensitive portion of the data to improve model utility. To realize such a new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Besides language modeling, we also apply the method to a more concrete application {--} dialog systems. Experiments on both language modeling and dialog system building show that the proposed privacy-preserving mechanism achieves better utilities while remaining safe under various privacy attacks compared to the baselines. The data and code are released at \url{https://github.com/wyshi/lm_privacy} to facilitate future research.
null
null
10.18653/v1/2022.naacl-main.205
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,904
inproceedings
ebert-etal-2022-trajectories
Do Trajectories Encode Verb Meaning?
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.206/
Ebert, Dylan and Sun, Chen and Pavlick, Ellie
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2860--2871
Distributional models learn representations of words from text, but are criticized for their lack of grounding, or the linking of text to the non-linguistic world. Grounded language models have had success in learning to connect concrete categories like nouns and adjectives to the world via images and videos, but can struggle to isolate the meaning of the verbs themselves from the context in which they typically occur. In this paper, we investigate the extent to which trajectories (i.e. the position and rotation of objects over time) naturally encode verb semantics. We build a procedurally generated agent-object-interaction dataset, obtain human annotations for the verbs that occur in this data, and compare several methods for representation learning given the trajectories. We find that trajectories correlate as-is with some verbs (e.g., fall), and that additional abstraction via self-supervised pretraining can further capture nuanced differences in verb meaning (e.g., roll and slide).
null
null
10.18653/v1/2022.naacl-main.206
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,905
inproceedings
caciularu-etal-2022-long
Long Context Question Answering via Supervised Contrastive Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.207/
Caciularu, Avi and Dagan, Ido and Goldberger, Jacob and Cohan, Arman
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2872--2879
Long-context question answering (QA) tasks require reasoning over a long document or multiple documents. Addressing these tasks often benefits from identifying a set of evidence spans (e.g., sentences), which provide supporting evidence for answering the question. In this work, we propose a novel method for equipping long-context QA models with an additional sequence-level objective for better identification of the supporting evidence. We achieve this via an additional contrastive supervision signal in finetuning, where the model is encouraged to explicitly discriminate supporting evidence sentences from negative ones by maximizing question-evidence similarity. The proposed additional loss exhibits consistent improvements on three different strong long-context transformer models, across two challenging question answering benchmarks {--} HotpotQA and QAsper.
null
null
10.18653/v1/2022.naacl-main.207
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,906
inproceedings
yaneva-etal-2022-usmle
The {USMLE}{\textregistered} Step 2 Clinical Skills Patient Note Corpus
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.208/
Yaneva, Victoria and Mee, Janet and Ha, Le and Harik, Polina and Jodoin, Michael and Mechaber, Alex
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2880--2886
This paper presents a corpus of 43,985 clinical patient notes (PNs) written by 35,156 examinees during the high-stakes USMLE{\textregistered} Step 2 Clinical Skills examination. In this exam, examinees interact with standardized patients - people trained to portray simulated scenarios called clinical cases. For each encounter, an examinee writes a PN, which is then scored by physician raters using a rubric of clinical concepts, expressions of which should be present in the PN. The corpus features PNs from 10 clinical cases, as well as the clinical concepts from the case rubrics. A subset of 2,840 PNs were annotated by 10 physician experts such that all 143 concepts from the case rubrics (e.g., shortness of breath) were mapped to 34,660 PN phrases (e.g., dyspnea, difficulty breathing). The corpus is available via a data sharing agreement with NBME and can be requested at \url{https://www.nbme.org/services/data-sharing}.
null
null
10.18653/v1/2022.naacl-main.208
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,907
inproceedings
hakami-etal-2022-learning
Learning to Borrow{--} Relation Representation for Without-Mention Entity-Pairs for Knowledge Graph Completion
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.209/
Hakami, Huda and Hakami, Mona and Mandya, Angrosh and Bollegala, Danushka
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2887--2898
Prior work on integrating text corpora with knowledge graphs (KGs) to improve Knowledge Graph Embedding (KGE) has obtained good performance for entities that co-occur in sentences in text corpora. Such sentences (textual mentions of entity-pairs) are represented as Lexicalised Dependency Paths (LDPs) between two entities. However, it is not possible to represent relations between entities that do not co-occur in a single sentence using LDPs. In this paper, we propose and evaluate several methods to address this problem, where we \textit{borrow} LDPs from the entity pairs that co-occur in sentences in the corpus (i.e. \textit{with mentions} entity pairs) to represent entity pairs that do \textit{not} co-occur in any sentence in the corpus (i.e. \textit{without mention} entity pairs). We propose a supervised borrowing method, \textit{SuperBorrow}, that learns to score the suitability of an LDP to represent a without-mentions entity pair using pre-trained entity embeddings and contextualised LDP representations. Experimental results show that SuperBorrow improves the link prediction performance of multiple widely-used prior KGE methods such as TransE, DistMult, ComplEx and RotatE.
null
null
10.18653/v1/2022.naacl-main.209
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,908
inproceedings
ayoola-etal-2022-improving
Improving Entity Disambiguation by Reasoning over a Knowledge Base
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.210/
Ayoola, Tom and Fisher, Joseph and Pierleoni, Andrea
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2899--2912
Recent work in entity disambiguation (ED) has typically neglected structured knowledge base (KB) facts, and instead relied on a limited subset of KB information, such as entity descriptions or types. This limits the range of contexts in which entities can be disambiguated. To allow the use of all KB facts, as well as descriptions and types, we introduce an ED model which links entities by reasoning over a symbolic knowledge base in a fully differentiable fashion. Our model surpasses state-of-the-art baselines on six well-established ED datasets by 1.3 F1 on average. By allowing access to all KB information, our model is less reliant on popularity-based entity priors, and improves performance on the challenging ShadowLink dataset (which emphasises infrequent and ambiguous entities) by 12.7 F1.
null
null
10.18653/v1/2022.naacl-main.210
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,909
inproceedings
yao-etal-2022-modal
Modal Dependency Parsing via Language Model Priming
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.211/
Yao, Jiarui and Xue, Nianwen and Min, Bonan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2913--2919
The task of modal dependency parsing aims to parse a text into its modal dependency structure, which is a representation for the factuality of events in the text. We design a modal dependency parser that is based on priming pre-trained language models, and evaluate the parser on two data sets. Compared to baselines, we show an improvement of 2.6{\%} in F-score for English and 4.6{\%} for Chinese. To the best of our knowledge, this is also the first work on Chinese modal dependency parsing.
null
null
10.18653/v1/2022.naacl-main.211
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,910
inproceedings
xu-etal-2022-document
Document-Level Relation Extraction with Sentences Importance Estimation and Focusing
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.212/
Xu, Wang and Chen, Kehai and Mou, Lili and Zhao, Tiejun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2920--2929
Document-level relation extraction (DocRE) aims to determine the relation between two entities from a document of multiple sentences. Recent studies typically represent the entire document by sequence- or graph-based models to predict the relations of all entity pairs. However, we find that such a model is not robust and exhibits bizarre behaviors: it predicts correctly when an entire test document is fed as input, but errs when non-evidence sentences are removed. To this end, we propose a Sentence Importance Estimation and Focusing (SIEF) framework for DocRE, where we design a sentence importance score and a sentence focusing loss, encouraging DocRE models to focus on evidence sentences. Experimental results on two domains show that our SIEF not only improves overall performance, but also makes DocRE models more robust. Moreover, SIEF is a general framework, shown to be effective when combined with a variety of base DocRE models.
null
null
10.18653/v1/2022.naacl-main.212
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,911
inproceedings
xiao-etal-2022-datasets
Are All the Datasets in Benchmark Necessary? A Pilot Study of Dataset Evaluation for Text Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.213/
Xiao, Yang and Fu, Jinlan and Ng, See-Kiong and Liu, Pengfei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2930--2941
In this paper, we ask the research question of whether all the datasets in the benchmark are necessary. We approach this by first characterizing the distinguishability of datasets when comparing different systems. Experiments on 9 datasets and 36 systems show that several existing benchmark datasets contribute little to discriminating top-scoring systems, while less frequently used datasets exhibit impressive discriminative power. Taking the text classification task as a case study, we further investigate the possibility of predicting dataset discrimination based on its properties (e.g., average sentence length). Our preliminary experiments promisingly show that given a sufficient number of training experimental records, a meaningful predictor can be learned to estimate dataset discrimination over unseen datasets. We release all datasets with the features explored in this work on DataLab.
null
null
10.18653/v1/2022.naacl-main.213
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,912
inproceedings
gan-etal-2022-triggerless
Triggerless Backdoor Attack for {NLP} Tasks with Clean Labels
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.214/
Gan, Leilei and Li, Jiwei and Zhang, Tianwei and Li, Xiaoya and Meng, Yuxian and Wu, Fei and Yang, Yi and Guo, Shangwei and Fan, Chun
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2942--2952
Backdoor attacks pose a new threat to NLP models. A standard strategy to construct poisoned data in backdoor attacks is to insert triggers (e.g., rare words) into selected sentences and alter the original label to a target label. This strategy comes with a severe flaw of being easily detected from both the trigger and the label perspectives: the trigger injected, which is usually a rare word, leads to an abnormal natural language expression, and thus can be easily detected by a defense model; the changed target label leads the example to be mistakenly labeled, and thus can be easily detected by manual inspections. To deal with this issue, in this paper, we propose a new strategy to perform textual backdoor attack which does not require an external trigger and the poisoned samples are correctly labeled. The core idea of the proposed strategy is to construct clean-labeled examples, whose labels are correct but can lead to test label changes when fused with the training set. To generate poisoned clean-labeled examples, we propose a sentence generation model based on the genetic algorithm to cater to the non-differentiable characteristic of text data. Extensive experiments demonstrate that the proposed attacking strategy is not only effective, but more importantly, hard to defend due to its triggerless and clean-labeled nature. Our work marks the first step towards developing triggerless attacking strategies in NLP.
null
null
10.18653/v1/2022.naacl-main.214
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,913
inproceedings
chaffin-etal-2022-ppl
{PPL-MCTS}: {C}onstrained Textual Generation Through Discriminator-Guided {MCTS} Decoding
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.215/
Chaffin, Antoine and Claveau, Vincent and Kijak, Ewa
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2953--2967
Large language models (LMs) based on Transformers allow the generation of plausible long texts. In this paper, we explore how this generation can be further controlled at decoding time to satisfy certain constraints (e.g. being non-toxic, conveying certain emotions, using a specific writing style, etc.) without fine-tuning the LM. Precisely, we formalize constrained generation as a tree exploration process guided by a discriminator that indicates how well the associated sequence respects the constraint. This approach, in addition to being easier and cheaper to train than fine-tuning the LM, allows the constraint to be applied more finely and dynamically. We propose several original methods to search this generation tree, notably the Monte Carlo Tree Search (MCTS), which provides theoretical guarantees on the search efficiency, but also simpler methods based on re-ranking a pool of diverse sequences using the discriminator scores. These methods are evaluated, with automatic and human-based metrics, on two types of constraints and languages: review polarity and emotion control in French and English. We show that discriminator-guided MCTS decoding achieves state-of-the-art results without having to tune the language model, in both tasks and languages. We also demonstrate that other proposed decoding methods based on re-ranking can be very effective when diversity among the generated propositions is encouraged.
null
null
10.18653/v1/2022.naacl-main.215
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,914
inproceedings
qu-etal-2022-interpretable
Interpretable Proof Generation via Iterative Backward Reasoning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.216/
Qu, Hanhao and Cao, Yu and Gao, Jun and Ding, Liang and Xu, Ruifeng
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2968--2981
We present IBR, an Iterative Backward Reasoning model to solve proof generation tasks on rule-based Question Answering (QA), where models are required to reason over a series of textual rules and facts to find the related proof path and derive the final answer. We address the limitations of existing works in two ways: 1) we enhance the interpretability of reasoning procedures with detailed tracking, by predicting nodes and edges in the proof path iteratively backward from the question; 2) we promote efficiency and accuracy via reasoning on elaborate representations of nodes and history paths, without any intermediate texts that may introduce external noise during proof generation. There are three main modules in IBR: QA and proof strategy prediction to obtain the answer and offer guidance for the following procedure; parent node prediction to determine a node in the existing proof that a new child node will link to; and child node prediction to find out which new node will be added to the proof. Experiments on both synthetic and paraphrased datasets demonstrate that IBR has better in-domain performance as well as cross-domain transferability than several strong baselines. Our code and models are available at \url{https://github.com/find-knowledge/IBR}.
null
null
10.18653/v1/2022.naacl-main.216
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,915
inproceedings
long-etal-2022-domain
Domain Confused Contrastive Learning for Unsupervised Domain Adaptation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.217/
Long, Quanyu and Luo, Tianze and Wang, Wenya and Pan, Sinno
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2982--2995
In this work, we study Unsupervised Domain Adaptation (UDA) in a challenging self-supervised approach. One of the difficulties is how to learn task discrimination in the absence of target labels. Unlike previous literature which directly aligns cross-domain distributions or leverages reverse gradient, we propose Domain Confused Contrastive Learning (DCCL), which can bridge the source and target domains via domain puzzles, and retain discriminative representations after adaptation. Technically, DCCL searches for a most domain-challenging direction and exquisitely crafts domain confused augmentations as positive pairs, then it contrastively encourages the model to pull representations towards the other domain, thus learning more stable and effective domain invariances. We also investigate whether contrastive learning necessarily helps with UDA when performing other data augmentations. Extensive experiments demonstrate that DCCL significantly outperforms baselines, further ablation study and analysis also show the effectiveness and availability of DCCL.
null
null
10.18653/v1/2022.naacl-main.217
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,916
inproceedings
chai-strube-2022-incorporating
Incorporating Centering Theory into Neural Coreference Resolution
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.218/
Chai, Haixia and Strube, Michael
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2996--3002
In recent years, transformer-based coreference resolution systems have achieved remarkable improvements on the CoNLL dataset. However, how coreference resolvers can benefit from discourse coherence is still an open question. In this paper, we propose to incorporate centering transitions derived from centering theory in the form of a graph into a neural coreference model. Our method improves the performance over the SOTA baselines, especially on pronoun resolution in long documents, formal well-structured text, and clusters with scattered mentions.
null
null
10.18653/v1/2022.naacl-main.218
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,917
inproceedings
xu-etal-2022-progressive
Progressive Class Semantic Matching for Semi-supervised Text Classification
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.219/
Xu, Haiming and Liu, Lingqiao and Abbasnejad, Ehsan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3003--3013
Semi-supervised learning is a promising way to reduce the annotation cost for text classification. Combined with pre-trained language models (PLMs), e.g., BERT, recent semi-supervised learning methods have achieved impressive performance. In this work, we further investigate the marriage between semi-supervised learning and a pre-trained language model. Unlike existing approaches that utilize PLMs only for model parameter initialization, we explore the inherent topic matching capability inside PLMs for building a more powerful semi-supervised learning approach. Specifically, we propose a joint semi-supervised learning process that can progressively build a standard $K$-way classifier and a matching network for the input text and the Class Semantic Representation (CSR). The CSR is initialized from the given labeled sentences and progressively updated through the training process. By means of extensive experiments, we show that our method can not only bring remarkable improvement to baselines, but is also more stable overall, and achieves state-of-the-art performance in semi-supervised text classification.
null
null
10.18653/v1/2022.naacl-main.219
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,918
inproceedings
li-etal-2022-low
Low Resource Style Transfer via Domain Adaptive Meta Learning
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.220/
Li, Xiangyang and Long, Xiang and Xia, Yu and Li, Sujian
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3014--3026
Text style transfer (TST) without parallel data has achieved some practical success. However, most existing unsupervised text style transfer methods suffer from (i) requiring massive amounts of non-parallel data to guide transfer between different text styles, and (ii) colossal performance degradation when fine-tuning the model in new domains. In this work, we propose DAML-ATM (Domain Adaptive Meta-Learning with Adversarial Transfer Model), which consists of two parts: DAML and ATM. DAML is a domain adaptive meta-learning approach to learn general knowledge in multiple heterogeneous source domains, capable of adapting to new unseen domains with a small amount of data. Moreover, we propose a new unsupervised TST approach, the Adversarial Transfer Model (ATM), composed of a sequence-to-sequence pre-trained language model that uses adversarial style training for better content preservation and style transfer. Results on multi-domain datasets demonstrate that our approach generalizes well on unseen low-resource domains, achieving state-of-the-art results against ten strong baselines.
null
null
10.18653/v1/2022.naacl-main.220
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,919
inproceedings
ramponi-tonelli-2022-features
Features or Spurious Artifacts? Data-centric Baselines for Fair and Robust Hate Speech Detection
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.221/
Ramponi, Alan and Tonelli, Sara
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3027--3040
Avoiding reliance on dataset artifacts to predict hate speech is a cornerstone of robust and fair hate speech detection. In this paper we critically analyze lexical biases in hate speech detection via a cross-platform study, disentangling various types of spurious and authentic artifacts and analyzing their impact on out-of-distribution fairness and robustness. We experiment with existing approaches and propose simple yet surprisingly effective data-centric baselines. Our results on English data across four platforms show that distinct spurious artifacts require different treatments to ultimately attain both robustness and fairness in hate speech detection. To encourage research in this direction, we release all baseline models and the code to compute artifacts, pointing it out as a complementary and necessary addition to the data statements practice.
null
null
10.18653/v1/2022.naacl-main.221
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,920
inproceedings
zhou-mao-2022-document
Document-Level Event Argument Extraction by Leveraging Redundant Information and Closed Boundary Loss
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.222/
Zhou, Hanzhang and Mao, Kezhi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3041--3052
In document-level event argument extraction, an argument is likely to appear multiple times in different expressions in the document. The redundancy of arguments underlying multiple sentences is beneficial but is often overlooked. In addition, in event argument extraction, most entities are regarded as class {\textquotedblleft}others{\textquotedblright}, i.e. Universum class, which is defined as a collection of samples that do not belong to any class of interest. Universum class is composed of heterogeneous entities without typical common features. Classifiers trained by cross entropy loss could easily misclassify the Universum class because of their open decision boundary. In this paper, to make use of redundant event information underlying a document, we build an entity coreference graph with the graph2token module to produce a comprehensive and coreference-aware representation for every entity and then build an entity summary graph to merge the multiple extraction results. To better classify Universum class, we propose a new loss function to build classifiers with closed boundaries. Experimental results show that our model outperforms the previous state-of-the-art models by 3.35{\%} in F1-score.
null
null
10.18653/v1/2022.naacl-main.222
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,921
inproceedings
adelani-etal-2022-thousand
A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.223/
Adelani, David Ifeoluwa and Alabi, Jesujoba Oluwadara and Fan, Angela and Kreutzer, Julia and Shen, Xiaoyu and Reid, Machel and Ruiter, Dana and Klakow, Dietrich and Nabende, Peter and Chang, Ernie and Gwadabe, Tajuddeen and Sackey, Freshia and Dossou, Bonaventure F. P. and Emezue, Chris and Leong, Colin and Beukman, Michael and Muhammad, Shamsuddeen H. and Jarso, Guyo D. and Yousuf, Oreen and Niyongabo Rubungo, Andre N. and Hacheme, Gilles and Wairagala, Eric Peter and Nasir, Muhammad Umair and Ajibade, Benjamin A. and Ajayi, Tunde Oluwaseyi and Gitau, Yvonne Wambui and Abbott, Jade and Ahmed, Mohamed and Ochieng, Millicent and Aremu, Anuoluwapo and Ogayo, Perez and Mukiibi, Jonathan and Ouoba Kabore, Fatoumata and Kalipe, Godson Koffi and Mbaye, Derguene and Tapo, Allahsera Auguste and Memdjokam Koagne, Victoire M. and Munkoh-Buabeng, Edwin and Wagner, Valencia and Abdulmumin, Idris and Awokoya, Ayodele and Buzaaba, Happy and Sibanda, Blessing and Bukula, Andiswa and Manthalu, Sam
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3053--3070
Recent advances in pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out of these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls used to create datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.
null
null
10.18653/v1/2022.naacl-main.223
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,922
inproceedings
wang-etal-2022-rely
Should We Rely on Entity Mentions for Relation Extraction? Debiasing Relation Extraction with Counterfactual Analysis
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.224/
Wang, Yiwei and Chen, Muhao and Zhou, Wenxuan and Cai, Yujun and Liang, Yuxuan and Liu, Dayiheng and Yang, Baosong and Liu, Juncheng and Hooi, Bryan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3071--3081
Recent literature focuses on utilizing the entity information in the sentence-level relation extraction (RE), but this risks leaking superficial and spurious clues of relations. As a result, RE still suffers from unintended entity bias, i.e., the spurious correlation between entity mentions (names) and relations. Entity bias can mislead the RE models to extract the relations that do not exist in the text. To combat this issue, some previous work masks the entity mentions to prevent the RE models from over-fitting entity mentions. However, this strategy degrades the RE performance because it loses the semantic information of entities. In this paper, we propose the CoRE (Counterfactual Analysis based Relation Extraction) debiasing method that guides the RE models to focus on the main effects of textual context without losing the entity information. We first construct a causal graph for RE, which models the dependencies between variables in RE models. Then, we propose to conduct counterfactual analysis on our causal graph to distill and mitigate the entity bias, that captures the causal effects of specific entity mentions in each instance. Note that our CoRE method is model-agnostic to debias existing RE systems during inference without changing their training processes. Extensive experimental results demonstrate that our CoRE yields significant gains on both effectiveness and generalization for RE. The source code is provided at: \url{https://github.com/vanoracai/CoRE}.
null
null
10.18653/v1/2022.naacl-main.224
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,923
inproceedings
sajjad-etal-2022-analyzing
Analyzing Encoded Concepts in Transformer Language Models
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.225/
Sajjad, Hassan and Durrani, Nadir and Dalvi, Fahim and Alam, Firoj and Khan, Abdul and Xu, Jia
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3082--3101
We propose a novel framework, ConceptX, to analyze how latent concepts are encoded in representations learned within pre-trained language models. It uses clustering to discover the encoded concepts and explains them by aligning with a large set of human-defined concepts. Our analysis of seven transformer language models reveals interesting insights: i) the latent space within the learned representations overlaps with different linguistic concepts to a varying degree, ii) the lower layers in the model are dominated by lexical concepts (e.g., affixation) and linguistic ontologies (e.g. WordNet), whereas the core-linguistic concepts (e.g., morphology, syntactic relations) are better represented in the middle and higher layers, iii) some encoded concepts are multi-faceted and cannot be adequately explained using the existing human-defined concepts.
null
null
10.18653/v1/2022.naacl-main.225
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,924
inproceedings
lewis-etal-2022-boosted
Boosted Dense Retriever
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.226/
Lewis, Patrick and Oguz, Barlas and Xiong, Wenhan and Petroni, Fabio and Yih, Scott and Riedel, Sebastian
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3102--3117
We propose DrBoost, a dense retrieval ensemble inspired by boosting. DrBoost is trained in stages: each component model is learned sequentially and specialized by focusing only on retrieval mistakes made by the current ensemble. The final representation is the concatenation of the output vectors of all the component models, making it a drop-in replacement for standard dense retrievers at test time. DrBoost enjoys several advantages compared to standard dense retrieval models. It produces representations which are 4x more compact, while delivering comparable retrieval results. It also performs surprisingly well under approximate search with coarse quantization, reducing latency and bandwidth needs by another 4x. In practice, this can make the difference between serving indices from disk versus from memory, paving the way for much cheaper deployments.
null
null
10.18653/v1/2022.naacl-main.226
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,925
inproceedings
zhang-etal-2022-mucgec
{M}u{CGEC}: a Multi-Reference Multi-Source Evaluation Dataset for {C}hinese Grammatical Error Correction
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.227/
Zhang, Yue and Li, Zhenghua and Bao, Zuyi and Li, Jiacheng and Zhang, Bo and Li, Chen and Huang, Fei and Zhang, Min
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3118--3130
This paper presents MuCGEC, a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three Chinese-as-a-Second-Language (CSL) learner sources. Each sentence is corrected by three annotators, and their corrections are carefully reviewed by a senior annotator, resulting in 2.3 references per sentence. We conduct experiments with two mainstream CGEC models, i.e., the sequence-to-sequence model and the sequence-to-edit model, both enhanced with large pretrained language models, achieving competitive benchmark performance on previous and our datasets. We also discuss CGEC evaluation methodologies, including the effect of multiple references and using a char-based metric. Our annotation guidelines, data, and code are available at \url{https://github.com/HillZhang1999/MuCGEC}.
null
null
10.18653/v1/2022.naacl-main.227
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,926
inproceedings
lee-etal-2022-neus
{N}eu{S}: Neutral Multi-News Summarization for Mitigating Framing Bias
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.228/
Lee, Nayeon and Bang, Yejin and Yu, Tiezheng and Madotto, Andrea and Fung, Pascale
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3131--3148
Media news framing bias can increase political polarization and undermine civil society. The need for automatic mitigation methods is therefore growing. We propose a new task, neutral summary generation from multiple news articles with varying political leanings, to facilitate balanced and unbiased news reading. In this paper, we first collect a new dataset, illustrate insights about framing bias through a case study, and propose a new effective metric and model (NeuS-Title) for the task. Based on our discovery that the title provides a good signal for framing bias, we present NeuS-Title, which learns to neutralize news content in hierarchical order from title to article. Our hierarchical multi-task learning is achieved by formatting our hierarchical data pair (title, article) sequentially with identifier-tokens ({\textquotedblleft}TITLE={\ensuremath{>}}{\textquotedblright}, {\textquotedblleft}ARTICLE={\ensuremath{>}}{\textquotedblright}) and fine-tuning the auto-regressive decoder with the standard negative log-likelihood objective. We then analyze and point out the remaining challenges and future directions. One of the most interesting observations is that neural NLG models can hallucinate not only factually inaccurate or unverifiable content but also politically biased content.
null
null
10.18653/v1/2022.naacl-main.228
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,927
inproceedings
inoue-etal-2022-enhance
Enhance Incomplete Utterance Restoration by Joint Learning Token Extraction and Text Generation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.229/
Inoue, Shumpei and Liu, Tsungwei and Nguyen, Son and Nguyen, Minh-Tien
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3149--3158
This paper introduces a model for incomplete utterance restoration (IUR) called JET (Joint learning token Extraction and Text generation). Different from prior studies that only work on extraction or abstraction datasets, we design a simple but effective model, working for both scenarios of IUR. Our design simulates the nature of IUR, where omitted tokens from the context contribute to restoration. From this, we construct a Picker that identifies the omitted tokens. To support the picker, we design two label creation methods (soft and hard labels), which can work in cases of no annotation data for the omitted tokens. The restoration is done by a Generator with the help of the Picker via joint learning. Promising results on four benchmark datasets in extraction and abstraction scenarios show that our model is better than the pretrained T5 and non-generative language model methods in both rich and limited training data settings.
null
null
10.18653/v1/2022.naacl-main.229
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,928
inproceedings
bharadwaj-shevade-2022-efficient
Efficient Constituency Tree based Encoding for Natural Language to Bash Translation
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.230/
Bharadwaj, Shikhar and Shevade, Shirish
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3159--3168
Bash is a Unix command language used for interacting with the Operating System. Recent works on natural language to Bash translation have made significant advances, but none of the previous methods utilize the problem's inherent structure. We identify this structure and propose a Segmented Invocation Transformer (SIT) that utilizes the information from the constituency parse tree of the natural language text. Our method is motivated by the alignment between segments in the natural language text and Bash command components. Incorporating the structure in the modelling improves the performance of the model. Since such systems must be universally accessible, we benchmark the inference times on a CPU rather than a GPU. We observe a 1.8x improvement in the inference time and a 5x reduction in model parameters. Attribution analysis using Integrated Gradients reveals that the proposed method can capture the problem structure.
null
null
10.18653/v1/2022.naacl-main.230
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,929
inproceedings
lee-etal-2022-privacy
Privacy-Preserving Text Classification on {BERT} Embeddings with Homomorphic Encryption
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.231/
Lee, Garam and Kim, Minsoo and Park, Jai Hyun and Hwang, Seung-won and Cheon, Jung Hee
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3169--3175
Embeddings, which compress information in raw text into semantics-preserving low-dimensional vectors, have been widely adopted for their efficacy. However, recent research has shown that embeddings can potentially leak private information about sensitive attributes of the text, and in some cases, can be inverted to recover the original input text. To address these growing privacy challenges, we propose a privatization mechanism for embeddings based on homomorphic encryption, to prevent potential leakage of any piece of information in the process of text classification. In particular, our method performs text classification on the encryption of embeddings from state-of-the-art models like BERT, supported by an efficient GPU implementation of CKKS encryption scheme. We show that our method offers encrypted protection of BERT embeddings, while largely preserving their utility on downstream text classification tasks.
null
null
10.18653/v1/2022.naacl-main.231
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,930
inproceedings
wang-etal-2022-ita
{ITA}: Image-Text Alignments for Multi-Modal Named Entity Recognition
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.232/
Wang, Xinyu and Gui, Min and Jiang, Yong and Jia, Zixia and Bach, Nguyen and Wang, Tao and Huang, Zhongqiang and Tu, Kewei
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3176--3189
Recently, Multi-modal Named Entity Recognition (MNER) has attracted a lot of attention. Most of the work utilizes image information through region-level visual representations obtained from a pretrained object detector and relies on an attention mechanism to model the interactions between image and text representations. However, it is difficult to model such interactions as image and text representations are trained separately on the data of their respective modality and are not aligned in the same space. As text representations play the most important role in MNER, in this paper, we propose \textbf{I}mage-\textbf{t}ext \textbf{A}lignments (ITA) to align image features into the textual space, so that the attention mechanism in transformer-based pretrained textual embeddings can be better utilized. ITA first aligns the image into regional object tags, image-level captions and optical characters as visual contexts, concatenates them with the input texts as a new cross-modal input, and then feeds it into a pretrained textual embedding model. This makes it easier for the attention module of a pretrained textual embedding model to model the interaction between the two modalities since they are both represented in the textual space. ITA further aligns the output distributions predicted from the cross-modal input and textual input views so that the MNER model can be more practical in dealing with text-only inputs and robust to noise from images. In our experiments, we show that ITA models can achieve state-of-the-art accuracy on multi-modal Named Entity Recognition datasets, even without image information.
null
null
10.18653/v1/2022.naacl-main.232
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,931
inproceedings
tiktinsky-etal-2022-dataset
A Dataset for N-ary Relation Extraction of Drug Combinations
Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir
jul
2022
Seattle, United States
Association for Computational Linguistics
https://aclanthology.org/2022.naacl-main.233/
Tiktinsky, Aryeh and Viswanathan, Vijay and Niezni, Danna and Meron Azagury, Dana and Shamay, Yosi and Taub-Tabib, Hillel and Hope, Tom and Goldberg, Yoav
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
3190--3203
Combination therapies have become the standard of care for diseases such as cancer, tuberculosis, malaria and HIV. However, the combinatorial set of available multi-drug treatments creates a challenge in identifying effective combination therapies available in a situation. To assist medical professionals in identifying beneficial drug-combinations, we construct an expert-annotated dataset for extracting information about the efficacy of drug combinations from the scientific literature. Beyond its practical utility, the dataset also presents a unique NLP challenge, as the first relation extraction dataset consisting of variable-length relations. Furthermore, the relations in this dataset predominantly require language understanding beyond the sentence level, adding to the challenge of this task. We provide a promising baseline model and identify clear areas for further improvement. We release our dataset (\url{https://huggingface.co/datasets/allenai/drug-combo-extraction}), code (\url{https://github.com/allenai/drug-combo-extraction}) and baseline models (\url{https://huggingface.co/allenai/drug-combo-classifier-pubmedbert-dapt}) publicly to encourage the NLP community to participate in this task.
null
null
10.18653/v1/2022.naacl-main.233
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
23,932