Dataset schema (38 columns):

entry_type: string, 4 distinct values
citation_key: string, length 10 to 110
title: string, length 6 to 276
editor: string, 723 distinct values
month: string, 69 distinct values
year: date, 1963-01-01 to 2022-01-01
address: string, 202 distinct values
publisher: string, 41 distinct values
url: string, length 34 to 62
author: string, length 6 to 2.07k
booktitle: string, 861 distinct values
pages: string, length 1 to 12
abstract: string, length 302 to 2.4k
journal: string, 5 distinct values
volume: string, 24 distinct values
doi: string, length 20 to 39
n: string, 3 distinct values
wer: string, 1 distinct value
uas: null
language: string, 3 distinct values
isbn: string, 34 distinct values
recall: null
number: string, 8 distinct values
a: null
b: null
c: null
k: null
f1: string, 4 distinct values
r: string, 2 distinct values
mci: string, 1 distinct value
p: string, 2 distinct values
sd: string, 1 distinct value
female: string, 0 distinct values
m: string, 0 distinct values
food: string, 1 distinct value
f: string, 1 distinct value
note: string, 20 distinct values
__index_level_0__: int64, 22k to 106k
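The records below follow this schema. As a minimal, hypothetical sketch (the dataset identifier is a placeholder, not taken from this page), the split could be loaded and inspected with the Hugging Face datasets library:

```python
# Hypothetical sketch: load the split and inspect the columns listed above.
# "your-org/acl-bibtex" is a placeholder id, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("your-org/acl-bibtex", split="train")

print(ds.column_names)                    # entry_type, citation_key, title, ...
row = ds[0]
print(row["entry_type"], row["citation_key"])

# Many columns (e.g. uas, recall, a, b, c, k) are null in the records shown here;
# keep only the populated fields when working with an individual record.
populated = {k: v for k, v in row.items() if v is not None}
print(sorted(populated))
```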

entry_type: inproceedings
citation_key: mallinson-etal-2022-edit5
title: {E}di{T}5: Semi-Autoregressive Text Editing with T5 Warm-Start
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.156/
author: Mallinson, Jonathan and Adamek, Jakub and Malmi, Eric and Severyn, Aliaksei
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2126--2138
abstract:
We present EdiT5 - a novel semi-autoregressive text-editing approach designed to combine the strengths of non-autoregressive text-editing and autoregressive decoding. EdiT5 is faster at inference times than conventional sequence-to-sequence (seq2seq) models, while being capable of modeling flexible input-output transformations.This is achieved by decomposing the generation process into three sub-tasks: (1) tagging to decide on the subset of input tokens to be preserved in the output, (2) re-ordering to define their order in the output text, and (3) insertion to infill the missing tokens that are not present in the input. The tagging and re-ordering steps, which are responsible for generating the largest portion of the output, are non-autoregressive, while the insertion uses an autoregressive decoder.Depending on the task, EdiT5 requires significantly fewer autoregressive steps demonstrating speedups of up to 25x when compared to classic seq2seq models. Quality-wise, EdiT5 is initialized with a pre-trained T5 checkpoint yielding comparable performance to T5 in high-resource settings and clearly outperforms it on low-resource settings when evaluated on three NLG tasks: Sentence Fusion, Grammatical Error Correction, and Decontextualization.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.156
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,668
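Each record is essentially a BibTeX entry plus a handful of extra columns. As an illustrative sketch (the to_bibtex helper and the abridged row literal are assumptions for illustration, not part of the dataset), one way to rebuild the BibTeX form:

```python
# Illustrative sketch: rebuild a BibTeX entry from one row of this schema.
def to_bibtex(row: dict) -> str:
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    fields = {k: v for k, v in row.items() if v is not None and k not in skip}
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items())
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

# Abridged version of the first record above (most null columns omitted).
row = {
    "entry_type": "inproceedings",
    "citation_key": "mallinson-etal-2022-edit5",
    "title": "{E}di{T}5: Semi-Autoregressive Text Editing with T5 Warm-Start",
    "booktitle": "Findings of the Association for Computational Linguistics: EMNLP 2022",
    "year": "2022",
    "pages": "2126--2138",
    "doi": "10.18653/v1/2022.findings-emnlp.156",
    "journal": None,  # null columns are simply dropped by to_bibtex
}
print(to_bibtex(row))
```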

entry_type: inproceedings
citation_key: lahnala-etal-2022-critical
title: A Critical Reflection and Forward Perspective on Empathy and Natural Language Processing
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.157/
author: Lahnala, Allison and Welch, Charles and Jurgens, David and Flek, Lucie
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2139--2158
abstract:
We review the state of research on empathy in natural language processing and identify the following issues: (1) empathy definitions are absent or abstract, which (2) leads to low construct validity and reproducibility. Moreover, (3) emotional empathy is overemphasized, skewing our focus to a narrow subset of simplified tasks. We believe these issues hinder research progress and argue that current directions will benefit from a clear conceptualization that includes operationalizing cognitive empathy components. Our main objectives are to provide insight and guidance on empathy conceptualization for NLP research objectives and to encourage researchers to pursue the overlooked opportunities in this area, highly relevant, e.g., for clinical and educational sectors.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.157
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,669

entry_type: inproceedings
citation_key: liu-etal-2022-neural
title: A Neural-Symbolic Approach to Natural Language Understanding
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.158/
author: Liu, Zhixuan and Wang, Zihao and Lin, Yuan and Li, Hang
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2159--2172
abstract:
Deep neural networks, empowered by pre-trained language models, have achieved remarkable results in natural language understanding (NLU) tasks. However, their performances can drastically deteriorate when logical reasoning is needed. This is because NLU in principle depends on not only analogical reasoning, which deep neural networks are good at, but also logical reasoning. According to the dual-process theory, analogical reasoning and logical reasoning are respectively carried out by System 1 and System 2 in the human brain. Inspired by the theory, we present a novel framework for NLU called Neural-Symbolic Processor (NSP), which performs analogical reasoning based on neural processing and logical reasoning based on both neural and symbolic processing. As a case study, we conduct experiments on two NLU tasks, question answering (QA) and natural language inference (NLI), when numerical reasoning (a type of logical reasoning) is necessary. The experimental results show that our method significantly outperforms state-of-the-art methods in both tasks.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.158
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,670

entry_type: inproceedings
citation_key: ouyang-etal-2022-social
title: Social-aware Sparse Attention Network for Session-based Social Recommendation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.159/
author: Ouyang, Kai and Xu, Xianghong and Tang, Chen and Chen, Wang and Zheng, Haitao
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2173--2183
abstract:
Session-based Social Recommendation (SSR) aims to use users' social networks and historical sessions to provide more personalized recommendations for the current session.Unfortunately, existing SSR methods have two limitations.First, they do not screen users' useless social relationships and noisy irrelevant interactions.However, user preferences are mainly affected by several close friends and key interactions.Second, when modeling the current session, they do not take full advantage of user preference information.To tackle these issues, we propose a novel Social-aware Sparse Attention Network for SSR, abbreviated as SSAN.It mainly consists of the Heterogeneous Graph Embedding (HGE) module and the Social-aware Encoder-decoder Network (SEN) module.In the HGE module, we adopt a modified heterogeneous graph neural network, which focuses more on close friends and key historical interactions, to enhance user/item representations. In the SEN module, we use the user representation as a bridge between the Encoder and Decoder to incorporate user preferences when modeling the current session.Extensive experiments on two benchmark datasets demonstrate the superiority of SSAN over the state-of-the-art models.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.159
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,671

entry_type: inproceedings
citation_key: he-etal-2022-sparseadapter
title: {S}parse{A}dapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.160/
author: He, Shwai and Ding, Liang and Dong, Daize and Zhang, Jeremy and Tao, Dacheng
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2184--2190
abstract:
Adapter Tuning, which freezes the pretrained language models (PLMs) and only fine-tunes a few extra modules, becomes an appealing efficient alternative to the full model fine-tuning. Although computationally efficient, the recent Adapters often increase parameters (e.g. bottleneck dimension) for matching the performance of full model fine-tuning, which we argue goes against their original intention. In this work, we re-examine the parameter-efficiency of Adapter through the lens of network pruning (we name such plug-in concept as SparseAdapter) and find that SparseAdapter can achieve comparable or better performance than standard Adapters when the sparse ratio reaches up to 80{\%}. Based on our findings, we introduce an easy but effective setting {\textquotedblleft}Large-Sparse{\textquotedblright} to improve the model capacity of Adapters under the same parameter budget. Experiments on five competitive Adapters upon three advanced PLMs show that with proper sparse method (e.g. SNIP) and ratio (e.g. 40{\%}) SparseAdapter can consistently outperform their corresponding counterpart. Encouragingly, with the Large-Sparse setting, we can obtain further appealing gains, even outperforming the full fine-tuning by a large margin.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.160
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,672

entry_type: inproceedings
citation_key: gopfert-etal-2022-measurement
title: Measurement Extraction with Natural Language Processing: A Review
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.161/
author: G{\"o}pfert, Jan and Kuckertz, Patrick and Weinand, Jann and Kotzur, Leander and Stolten, Detlef
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2191--2215
abstract:
Quantitative data is important in many domains. Information extraction methods draw structured data from documents. However, the extraction of quantities and their contexts has received little attention in the history of information extraction. In this review, an overview of prior work on measurement extraction is presented. We describe different approaches to measurement extraction and outline the challenges posed by this task. The review concludes with an outline of potential future research. Research strains in measurement extraction tend to be isolated and lack a common terminology. Improvements in numerical reasoning, more extensive datasets, and the consideration of wider contexts may lead to significant improvements in measurement extraction.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.161
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,673

entry_type: inproceedings
citation_key: gao-etal-2022-summarizing-procedural
title: Summarizing Procedural Text: Data and Approach
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.162/
author: Gao, Shen and Zhang, Haotong and Chen, Xiuying and Yan, Rui and Zhao, Dongyan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2216--2225
abstract:
Procedural text is a widely used genre that contains many steps of instructions of how to cook a dish or how to conduct a chemical experiment and analyze the procedural text has become a popular task in the NLP field. Since the procedural text can be very long and contains many details, summarizing the whole procedural text or giving an overview for each complicated procedure step can save time for readers and help them to capture the core information in the text. In this paper, we propose the procedural text summarization task with two summarization granularity: step-view and global-view, which summarizes each step in the procedural text separately or gives an overall summary for all steps respectively. To tackle this task, we propose an Entity-State Graph-based Summarizer (ESGS) which is based on state-of-the-art entity state tracking methods and constructs a heterogeneous graph to aggregate contextual information for each procedure. In order to help the summarization model focus on the salient entities, we propose to use the contextualized procedure graph representation to predict the salient entities. Experiments conducted on two datasets verify the effectiveness of our proposed model. Our code and datasets will be released on https://github.com/gsh199449/procedural-summ.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.162
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,674

entry_type: inproceedings
citation_key: cheng-etal-2022-snapshot
title: Snapshot-Guided Domain Adaptation for {ELECTRA}
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.163/
author: Cheng, Daixuan and Huang, Shaohan and Liu, Jianfeng and Zhan, Yuefeng and Sun, Hao and Wei, Furu and Deng, Denvy and Zhang, Qi
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2226--2232
abstract:
Discriminative pre-trained language models, such as ELECTRA, have achieved promising performances in a variety of general tasks. However, these generic pre-trained models struggle to capture domain-specific knowledge of domain-related tasks. In this work, we propose a novel domain-adaptation method for ELECTRA, which can dynamically select domain-specific tokens and guide the discriminator to emphasize them, without introducing new training parameters. We show that by re-weighting the losses of domain-specific tokens, ELECTRA can be effectively adapted to different domains. The experimental results in both computer science and biomedical domains show that the proposed method can achieve state-of-the-art results on the domain-related tasks.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.163
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,675

entry_type: inproceedings
citation_key: muangkammuen-etal-2022-exploiting
title: Exploiting Labeled and Unlabeled Data via Transformer Fine-tuning for Peer-Review Score Prediction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.164/
author: Muangkammuen, Panitan and Fukumoto, Fumiyo and Li, Jiyi and Suzuki, Yoshimi
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2233--2240
abstract:
Automatic Peer-review Aspect Score Prediction (PASP) of academic papers can be a helpful assistant tool for both reviewers and authors. Most existing works on PASP utilize supervised learning techniques. However, the limited number of peer-review data deteriorates the performance of PASP. This paper presents a novel semi-supervised learning (SSL) method that incorporates the Transformer fine-tuning into the {\ensuremath{\Gamma}}-model, a variant of the Ladder network, to leverage contextual features from unlabeled data. Backpropagation simultaneously minimizes the sum of supervised and unsupervised cost functions, avoiding the need for layer-wise pre-training. The experimental results show that our model outperforms the supervised and naive semi-supervised learning baselines. Our source codes are available online.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.164
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,676

entry_type: inproceedings
citation_key: ilan-vilenchik-2022-harald
title: {HARALD}: Augmenting Hate Speech Data Sets with Real Data
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.165/
author: Ilan, Tal and Vilenchik, Dan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2241--2248
abstract:
The successful completion of the hate speech detection task hinges upon the availability of rich and variable labeled data, which is hard to obtain. In this work, we present a new approach for data augmentation that uses as input real unlabelled data, which is carefully selected from online platforms where invited hate speech is abundant. We show that by harvesting and processing this data (in an automatic manner), one can augment existing manually-labeled datasets to improve the classification performance of hate speech classification models. We observed an improvement in F1-score ranging from 2.7{\%} and up to 9.5{\%}, depending on the task (in- or cross-domain) and the model used.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.165
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,677

entry_type: inproceedings
citation_key: zhang-etal-2022-wait
title: Wait-info Policy: Balancing Source and Target at Information Level for Simultaneous Machine Translation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.166/
author: Zhang, Shaolei and Guo, Shoutao and Feng, Yang
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2249--2263
abstract:
Simultaneous machine translation (SiMT) outputs the translation while receiving the source inputs, and hence needs to balance the received source information and translated target information to make a reasonable decision between waiting for inputs or outputting translation. Previous methods always balance source and target information at the token level, either directly waiting for a fixed number of tokens or adjusting the waiting based on the current token. In this paper, we propose a Wait-info Policy to balance source and target at the information level. We first quantify the amount of information contained in each token, named info. Then during simultaneous translation, the decision of waiting or outputting is made based on the comparison results between the total info of previous target outputs and received source inputs. Experiments show that our method outperforms strong baselines under and achieves better balance via the proposed info.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.166
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,678

entry_type: inproceedings
citation_key: guo-etal-2022-turning
title: Turning Fixed to Adaptive: Integrating Post-Evaluation into Simultaneous Machine Translation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.167/
author: Guo, Shoutao and Zhang, Shaolei and Feng, Yang
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2264--2278
abstract:
Simultaneous machine translation (SiMT) starts its translation before reading the whole source sentence and employs either fixed or adaptive policy to generate the target sentence. Compared to the fixed policy, the adaptive policy achieves better latency-quality tradeoffs by adopting a flexible translation policy. If the policy can evaluate rationality before taking action, the probability of incorrect actions will also decrease. However, previous methods lack evaluation of actions before taking them. In this paper, we propose a method of performing the adaptive policy via integrating post-evaluation into the fixed policy. Specifically, whenever a candidate token is generated, our model will evaluate the rationality of the next action by measuring the change in the source content. Our model will then take different actions based on the evaluation results. Experiments on three translation tasks show that our method can exceed strong baselines under all latency.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.167
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,679

entry_type: inproceedings
citation_key: li-etal-2022-alleviating
title: Alleviating Sparsity of Open Knowledge Graphs with Ternary Contrastive Learning
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.168/
author: Li, Qian and Joty, Shafiq and Wang, Daling and Feng, Shi and Zhang, Yifei
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2279--2291
abstract:
Sparsity of formal knowledge and roughness of non-ontological construction make sparsity problem particularly prominent in Open Knowledge Graphs (OpenKGs). Due to sparse links, learning effective representation for few-shot entities becomes difficult. We hypothesize that by introducing negative samples, a contrastive learning (CL) formulation could be beneficial in such scenarios. However, existing CL methods model KG triplets as binary objects of entities ignoring the relation-guided ternary propagation patterns and they are too generic, i.e., they ignore zero-shot, few-shot and synonymity problems that appear in OpenKGs. To address this, we propose TernaryCL, a CL framework based on ternary propagation patterns among head, relation and tail. TernaryCL designs Contrastive Entity and Contrastive Relation to mine ternary discriminative features with both negative entities and relations, introduces Contrastive Self to help zero- and few-shot entities learn discriminative features, Contrastive Synonym to model synonymous entities, and Contrastive Fusion to aggregate graph features from multiple paths. Extensive experiments on benchmarks demonstrate the superiority of TernaryCL over state-of-the-art models.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.168
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,680

entry_type: inproceedings
citation_key: panthaplackel-etal-2022-using
title: Using Developer Discussions to Guide Fixing Bugs in Software
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.169/
author: Panthaplackel, Sheena and Gligoric, Milos and Li, Junyi Jessy and Mooney, Raymond
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2292--2301
abstract:
Automatically fixing software bugs is a challenging task. While recent work showed that natural language context is useful in guiding bug-fixing models, the approach required prompting developers to provide this context, which was simulated through commit messages written after the bug-fixing code changes were made. We instead propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for any additional information from developers. For this, we augment standard bug-fixing datasets with bug report discussions. Using these newly compiled datasets, we demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.169
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,681

entry_type: inproceedings
citation_key: wen-etal-2022-autocad
title: {A}uto{CAD}: Automatically Generate Counterfactuals for Mitigating Shortcut Learning
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.170/
author: Wen, Jiaxin and Zhu, Yeshuang and Zhang, Jinchao and Zhou, Jie and Huang, Minlie
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2302--2317
abstract:
Recent studies have shown the impressive efficacy of counterfactually augmented data (CAD) for reducing NLU models' reliance on spurious features and improving their generalizability. However, current methods still heavily rely on human efforts or task-specific designs to generate counterfactuals, thereby impeding CAD`s applicability to a broad range of NLU tasks. In this paper, we present AutoCAD, a fully automatic and task-agnostic CAD generation framework. AutoCAD first leverages a classifier to unsupervisedly identify rationales as spans to be intervened, which disentangles spurious and causal features. Then, AutoCAD performs controllable generation enhanced by unlikelihood training to produce diverse counterfactuals. Extensive evaluations on multiple out-of-domain and challenge benchmarks demonstrate that AutoCAD consistently and significantly boosts the out-of-distribution performance of powerful pre-trained models across different NLU tasks, which is comparable or even better than previous state-of-the-art human-in-the-loop or task-specific CAD methods.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.170
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,682

entry_type: inproceedings
citation_key: li-etal-2022-multi-modal
title: A Multi-Modal Knowledge Graph for Classical {C}hinese Poetry
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.171/
author: Li, Yuqing and Zhang, Yuxin and Wu, Bin and Wen, Ji-Rong and Song, Ruihua and Bai, Ting
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2318--2326
abstract:
Classical Chinese poetry has a long history and is a precious cultural heritage of humankind. Displaying the classical Chinese poetry in a visual way, helps to cross cultural barriers in different countries, making it enjoyable for all the people. In this paper, we construct a multi-modal knowledge graph for classical Chinese poetry (PKG), in which the visual information of words in the poetry are incorporated. Then a multi-modal pre-training language model, PKG-Bert, is proposed to obtain the poetry representation with visual information, which bridges the semantic gap between different modalities. PKG-Bert achieves the state-of-the-art performance on the poetry-image retrieval task, showing the effectiveness of incorporating the multi-modal knowledge. The large-scale multi-modal knowledge graph of classical Chinese poetry will be released to promote the researches in classical Chinese culture area.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.171
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,683

entry_type: inproceedings
citation_key: tse-etal-2022-assessing
title: Assessing Non-autoregressive Alignment in Neural Machine Translation via Word Reordering
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.172/
author: Tse, Chun-Hin and Leung, Ester and Cheung, William K.
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2327--2333
abstract:
Recent work on non-autoregressive neural machine translation (NAT) that leverages alignment information to explicitly reduce the modality of target distribution has reported comparable performance with counterparts that tackle multi-modality problem by implicitly modeling dependencies. Effectiveness in handling alignment is vital for models that follow this approach, where a token reordering mechanism is typically involved and plays a vital role. We review the reordering capability of the respective mechanisms in recent NAT models, and our experimental results show that their performance is sub-optimal. We propose to learn a non-autoregressive language model (NALM) based on transformer which can be combined with Viterbi decoding to achieve better reordering performance. We evaluate the proposed NALM using the PTB dataset where sentences with words permuted in different ways are expected to have their ordering recovered. Our empirical results show that the proposed method can outperform the state-of-the-art reordering mechanisms under different word permutation settings, with a 2-27 BLEU improvement, suggesting high potential for word alignment in NAT.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.172
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,684

entry_type: inproceedings
citation_key: hou-etal-2022-syntax
title: Syntax-guided Localized Self-attention by Constituency Syntactic Distance
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.173/
author: Hou, Shengyuan and Kai, Jushi and Xue, Haotian and Zhu, Bingyu and Yuan, Bo and Huang, Longtao and Wang, Xinbing and Lin, Zhouhan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2334--2341
abstract:
Recent works have revealed that Transformers are implicitly learning the syntactic information in its lower layers from data, albeit is highly dependent on the quality and scale of the training data. However, learning syntactic information from data is not necessary if we can leverage an external syntactic parser, which provides better parsing quality with well-defined syntactic structures. This could potentially improve Transformer`s performance and sample efficiency. In this work, we propose a syntax-guided localized self-attention for Transformer that allows directly incorporating grammar structures from an external constituency parser. It prohibits the attention mechanism to overweight the grammatically distant tokens over close ones. Experimental results show that our model could consistently improve translation performance on a variety of machine translation datasets, ranging from small to large dataset sizes, and with different source languages.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.173
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,685

entry_type: inproceedings
citation_key: cui-etal-2022-codeexp
title: {C}ode{E}xp: Explanatory Code Document Generation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.174/
author: Cui, Haotian and Wang, Chenglong and Huang, Junjie and Inala, Jeevana Priya and Mytkowicz, Todd and Wang, Bo and Gao, Jianfeng and Duan, Nan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2342--2354
abstract:
Developing models that can automatically generate detailed code explanation can greatly benefit software maintenance and programming education. However, existing code-to-text generation models often produce only high-level summaries of code that do not capture implementation-level choices essential for these scenarios. To fill in this gap, we propose the code explanation generation task. We first conducted a human study to identify the criteria for high-quality explanatory docstring for code. Based on that, we collected and refined a large-scale code docstring corpus and formulated automatic evaluation metrics that best match human assessments. Finally, we present a multi-stage fine-tuning strategy and baseline models for the task. Our experiments show that (1) our refined training dataset lets models achieve better performance in the explanation generation tasks compared to larger-scale unrefined data (15x larger), and (2) fine-tuned models can generate well-structured long docstrings comparable to human-written ones. We envision our training dataset, human-evaluation protocol, recommended metrics, and fine-tuning strategy can boost future code explanation research. The code and annotated data are available at https://github.com/subercui/CodeExp.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.174
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,686

entry_type: inproceedings
citation_key: bakshandaeva-etal-2022-pauq
title: {PAUQ}: Text-to-{SQL} in {R}ussian
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.175/
author: Bakshandaeva, Daria and Somov, Oleg and Dmitrieva, Ekaterina and Davydova, Vera and Tutubalina, Elena
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2355--2376
abstract:
Semantic parsing is an important task that allows to democratize human-computer interaction. One of the most popular text-to-SQL datasets with complex and diverse natural language (NL) questions and SQL queries is Spider. We construct and complement a Spider dataset for Russian, thus creating the first publicly available text-to-SQL dataset for this language. While examining its components - NL questions, SQL queries and databases content - we identify limitations of the existing database structure, fill out missing values for tables and add new requests for underrepresented categories. We select thirty functional test sets with different features that can be used for the evaluation of neural models' abilities. To conduct the experiments, we adapt baseline architectures RAT-SQL and BRIDGE and provide in-depth query component analysis. On the target language, both models demonstrate strong results with monolingual training and improved accuracy in multilingual scenario. In this paper, we also study trade-offs between machine-translated and manually-created NL queries. At present, Russian text-to-SQL is lacking in datasets as well as trained models, and we view this work as an important step towards filling this gap.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.175
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,687

entry_type: inproceedings
citation_key: lu-etal-2022-event
title: Event-Centric Question Answering via Contrastive Learning and Invertible Event Transformation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.176/
author: Lu, Junru and Tan, Xingwei and Pergola, Gabriele and Gui, Lin and He, Yulan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2377--2389
abstract:
Human reading comprehension often requires reasoning of event semantic relations in narratives, represented by Event-centric Question-Answering (QA). To address event-centric QA, we propose a novel QA model with contrastive learning and invertible event transformation, call TranCLR. Our proposed model utilizes an invertible transformation matrix to project semantic vectors of events into a common event embedding space, trained with contrastive learning, and thus naturally inject event semantic knowledge into mainstream QA pipelines. The transformation matrix is fine-tuned with the annotated event relation types between events that occurred in questions and those in answers, using event-aware question vectors. Experimental results on the Event Semantic Relation Reasoning (ESTER) dataset show significant improvements in both generative and extractive settings compared to the existing strong baselines, achieving over 8.4{\%} gain in the token-level F1 score and 3.0{\%} gain in Exact Match (EM) score under the multi-answer setting. Qualitative analysis reveals the high quality of the generated answers by TranCLR, demonstrating the feasibility of injecting event knowledge into QA model learning. Our code and models can be found at https://github.com/LuJunru/TranCLR.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.176
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,688

entry_type: inproceedings
citation_key: zhao-etal-2022-label
title: Label-Driven Denoising Framework for Multi-Label Few-Shot Aspect Category Detection
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.177/
author: Zhao, Fei and Shen, Yuchen and Wu, Zhen and Dai, Xinyu
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2390--2402
abstract:
Multi-Label Few-Shot Aspect Category Detection (FS-ACD) is a new sub-task of aspect-based sentiment analysis, which aims to detect aspect categories accurately with limited training instances. Recently, dominant works use the prototypical network to accomplish this task, and employ the attention mechanism to extract keywords of aspect category from the sentences to produce the prototype for each aspect. However, they still suffer from serious noise problems: (1) due to lack of sufficient supervised data, the previous methods easily catch noisy words irrelevant to the current aspect category, which largely affects the quality of the generated prototype; (2) the semantically-close aspect categories usually generate similar prototypes, which are mutually noisy and confuse the classifier seriously. In this paper, we resort to the label information of each aspect to tackle the above problems, along with proposing a novel Label-Driven Denoising Framework (LDF). Extensive experimental results show that our framework achieves better performance than other state-of-the-art methods.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.177
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,689

entry_type: inproceedings
citation_key: sun-etal-2022-visual
title: Visual Named Entity Linking: A New Dataset and A Baseline
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.178/
author: Sun, Wen and Fan, Yixing and Guo, Jiafeng and Zhang, Ruqing and Cheng, Xueqi
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2403--2415
abstract:
Visual Entity Linking (VEL) is a task to link regions of images with their corresponding entities in Knowledge Bases (KBs), which is beneficial for many computer vision tasks such as image retrieval, image caption, and visual question answering. While existing tasks in VEL either rely on textual data to complement a multi-modal linking or only link objects with general entities, which fails to perform named entity linking on large amounts of image data. In this paper, we consider a purely Visual-based Named Entity Linking (VNEL) task, where the input only consists of an image. The task is to identify objects of interest (i.e., visual entity mentions) in images and link them to corresponding named entities in KBs. Since each entity often contains rich visual and textual information in KBs, we thus propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). In addition, we present a high-quality human-annotated visual person linking dataset, named WIKIPerson. Based on WIKIPerson, we establish a series of baseline algorithms for the solution of each sub-task, and conduct experiments to verify the quality of the proposed datasets and the effectiveness of baseline methods. We envision this work to be helpful for soliciting more works regarding VNEL in the future. The codes and datasets are publicly available at https: //github.com/ict-bigdatalab/VNEL.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.178
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,690

entry_type: inproceedings
citation_key: eichenberg-etal-2022-magma
title: {MAGMA} {--} Multimodal Augmentation of Generative Models through Adapter-based Finetuning
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.179/
author: Eichenberg, Constantin and Black, Sidney and Weinbach, Samuel and Parcalabescu, Letitia and Frank, Anette
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2416--2428
abstract:
Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end using a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state of the art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2 {\%} of the number of samples used to train SimVLM.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.179
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,691

entry_type: inproceedings
citation_key: akyurek-etal-2022-towards
title: Towards Tracing Knowledge in Language Models Back to the Training Data
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.180/
author: Akyurek, Ekin and Bolukbasi, Tolga and Liu, Frederick and Xiong, Binbin and Tenney, Ian and Andreas, Jacob and Guu, Kelvin
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2429--2446
abstract:
Language models (LMs) have been shown to memorize a great deal of factual knowledge contained in their training data. But when an LM generates an assertion, it is often difficult to determine where it learned this information and whether it is true. In this paper, we propose the problem of fact tracing: identifying which training examples taught an LM to generate a particular factual assertion. Prior work on training data attribution (TDA) may offer effective tools for identifying such examples, known as {\textquotedblleft}proponents{\textquotedblright}. We present the first quantitative benchmark to evaluate this. We compare two popular families of TDA methods {---} gradient-based and embedding-based {---} and find that much headroom remains. For example, both methods have lower proponent-retrieval precision than an information retrieval baseline (BM25) that does not have access to the LM at all. We identify key challenges that may be necessary for further improvement such as overcoming the problem of gradient saturation, and also show how several nuanced implementation details of existing neural TDA methods can significantly improve overall fact tracing performance.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.180
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,692

entry_type: inproceedings
citation_key: mavromatis-karypis-2022-rearev
title: {R}ea{R}ev: Adaptive Reasoning for Question Answering over Knowledge Graphs
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.181/
author: Mavromatis, Costas and Karypis, George
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2447--2458
abstract:
Knowledge Graph Question Answering (KGQA) involves retrieving entities as answers from a Knowledge Graph (KG) using natural language queries. The challenge is to learn to reason over question-relevant KG facts that traverse KG entities and lead to the question answers. To facilitate reasoning, the question is decoded into instructions, which are dense question representations used to guide the KG traversals. However, if the derived instructions do not exactly match the underlying KG information, they may lead to reasoning under irrelevant context.Our method, termed ReaRev, introduces a new way to KGQA reasoning with respectto both instruction decoding and execution. To improve instruction decoding, we perform reasoning in an adaptive manner, where KG-aware information is used to iteratively update the initial instructions. To improve instruction execution, we emulate breadth-first search (BFS) with graph neural networks (GNNs). The BFS strategy treats the instructions as a set and allows our method to decide on their execution order on the fly. Experimental results on three KGQA benchmarks demonstrate the ReaRev`s effectiveness compared with previous state-of-the-art, especially when the KG is incomplete or when we tackle complex questions. Our code is publicly available at https://github.com/cmavro/ReaRev{\_}KGQA.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.181
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,693

entry_type: inproceedings
citation_key: xu-etal-2022-understanding
title: Understanding Social Media Cross-Modality Discourse in Linguistic Space
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.182/
author: Xu, Chunpu and Tan, Hanzhuo and Li, Jing and Li, Piji
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2459--2471
abstract:
The multimedia communications with texts and images are popular on social media. However, limited studies concern how images are structured with texts to form coherent meanings in human cognition. To fill in the gap, we present a novel concept of cross-modality discourse, reflecting how human readers couple image and text understandings. Text descriptions are first derived from images (named as subtitles) in the multimedia contexts. Five labels {--} entity-level insertion, projection and concretization and scene-level restatement and extension {---} are further employed to shape the structure of subtitles and texts and present their joint meanings. As a pilot study, we also build the very first dataset containing over 16K multimedia tweets with manually annotated discourse labels. The experimental results show that trendy multimedia encoders based on multi-head attention (with captions) are unable to well understand cross-modality discourse and additionally modeling texts at the output layer helps yield the-state-of-the-art results.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.182
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,694

entry_type: inproceedings
citation_key: taktasheva-etal-2022-tape
title: {TAPE}: Assessing Few-shot {R}ussian Language Understanding
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.183/
author: Taktasheva, Ekaterina and Fenogenova, Alena and Shevelev, Denis and Katricheva, Nadezhda and Tikhonova, Maria and Akhmetgareeva, Albina and Zinkevich, Oleg and Bashmakova, Anastasiia and Iordanskaia, Svetlana and Kurenshchikova, Valentina and Spiridonova, Alena and Artemova, Ekaterina and Shavrina, Tatiana and Mikhailov, Vladislav
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2472--2497
abstract:
Recent advances in zero-shot and few-shot learning have shown promise for a scope of research and practical purposes. However, this fast-growing area lacks standardized evaluation suites for non-English languages, hindering progress outside the Anglo-centric paradigm. To address this line of research, we propose TAPE (Text Attack and Perturbation Evaluation), a novel benchmark that includes six more complex NLU tasks for Russian, covering multi-hop reasoning, ethical concepts, logic and commonsense knowledge. The TAPE`s design focuses on systematic zero-shot and few-shot NLU evaluation: (i) linguistic-oriented adversarial attacks and perturbations for analyzing robustness, and (ii) subpopulations for nuanced interpretation. The detailed analysis of testing the autoregressive baselines indicates that simple spelling-based perturbations affect the performance the most, while paraphrasing the input has a more negligible effect. At the same time, the results demonstrate a significant gap between the neural and human baselines for most tasks. We publicly release TAPE (https://tape-benchmark.com) to foster research on robust LMs that can generalize to new tasks when little to no supervision is available.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.183
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,695

entry_type: inproceedings
citation_key: li-etal-2022-hierarchical-n
title: A Hierarchical N-Gram Framework for Zero-Shot Link Prediction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.184/
author: Li, Mingchen and Chen, Junfan and Mensah, Samuel and Aletras, Nikolaos and Yang, Xiulong and Ye, Yang
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2498--2509
abstract:
Knowledge graphs typically contain a large number of entities but often cover only a fraction of all relations between them (i.e., incompleteness). Zero-shot link prediction (ZSLP) is a popular way to tackle the problem by automatically identifying unobserved relations between entities. Most recent approaches use textual features of relations (e.g., surface name or textual descriptions) as auxiliary information to improve the encoded representation. These methods lack robustness as they are bound to support only tokens from a fixed vocabulary and unable to model out-of-vocabulary (OOV) words. Subword units such as character n-grams have the capability of generating more expressive representations for OOV words. Hence, in this paper, we propose a \textbf{H}ierarchical \textbf{N}-gram framework for \textbf{Z}ero-\textbf{S}hot \textbf{L}ink \textbf{P}rediction (HNZSLP) that leverages character n-gram information for ZSLP. Our approach works by first constructing a hierarchical n-gram graph from the surface name of relations. Subsequently, a new Transformer-based network models the hierarchical n-gram graph to learn a relation embedding for ZSLP. Experimental results show that our proposed HNZSLP method achieves state-of-the-art performance on two standard ZSLP datasets.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.184
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,696

entry_type: inproceedings
citation_key: park-etal-2022-quadapter
title: Quadapter: Adapter for {GPT}-2 Quantization
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.185/
author: Park, Minseop and You, Jaeseong and Nagel, Markus and Chang, Simyung
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2510--2517
abstract:
Transformer language models such as GPT-2 are difficult to quantize because of outliers in the activations leading to a large quantization error. To adapt to the error, one must use quantization-aware training, which entails a fine-tuning process based on the dataset and the training pipeline identical to those for the original model. Pretrained language models, however, often do not grant access to their datasets and training pipelines, forcing us to rely on arbitrary ones for fine-tuning. In that case, it is observed that quantization-aware training overfits the model to the fine-tuning data. To this end introduced is a quantization adapter (Quadapter), a small set of parameters that are learned to make activations quantization-friendly by scaling them channel-wise.For quantization without overfitting, we introduce a quantization adapter (Quadapter), a small set of parameters that are learned to make activations quantization-friendly by scaling them channel-wise. It keeps the model parameters unchanged. By applying our method to the challenging task of quantizing GPT-2, we demonstrate that it effectively prevents the overfitting and improves the quantization performance.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.185
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,697

entry_type: inproceedings
citation_key: ekram-etal-2022-banglarqa
title: {B}angla{RQA}: A Benchmark Dataset for Under-resourced {B}angla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.186/
author: Ekram, Syed Mohammed Sartaj and Rahman, Adham Arik and Altaf, Md. Sajid and Islam, Mohammed Saidul and Rahman, Mehrab Mustafy and Rahman, Md Mezbaur and Hossain, Md Azam and Kamal, Abu Raihan Mostofa
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2518--2532
abstract:
High-resource languages, such as English, have access to a plethora of datasets with various question-answer types resembling real-world reading comprehension. However, there is a severe lack of diverse and comprehensive question-answering datasets in under-resourced languages like Bangla. The ones available are either translated versions of English datasets with a niche answer format or created by human annotations focusing on a specific domain, question type, or answer type. To address these limitations, this paper introduces BanglaRQA, a reading comprehension-based Bangla question-answering dataset with various question-answer types. BanglaRQA consists of 3,000 context passages and 14,889 question-answer pairs created from those passages. The dataset comprises answerable and unanswerable questions covering four unique categories of questions and three types of answers. In addition, this paper also implemented four different Transformer models for question-answering on the proposed dataset. The best-performing model achieved an overall 62.42{\%} EM and 78.11{\%} F1 score. However, detailed analyses showed that the performance varies across question-answer types, leaving room for substantial improvement of the model performance. Furthermore, we demonstrated the effectiveness of BanglaRQA as a training resource by showing strong results on the bn{\_}squad dataset. Therefore, BanglaRQA has the potential to contribute to the advancement of future research by enhancing the capability of language models. The dataset and codes are available at https://github.com/sartajekram419/BanglaRQA
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.186
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,698

entry_type: inproceedings
citation_key: shao-etal-2022-chaining
title: Chaining Simultaneous Thoughts for Numerical Reasoning
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.187/
author: Shao, Zhihong and Huang, Fei and Huang, Minlie
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2533--2547
abstract:
Given that rich information is hidden behind ubiquitous numbers in text, numerical reasoning over text should be an essential skill of AI systems. To derive precise equations to solve numerical reasoning problems, previous work focused on modeling the structures of equations, and has proposed various structured decoders. Though structure modeling proves to be effective, these structured decoders construct a single equation in a pre-defined autoregressive order, potentially placing an unnecessary restriction on how a model should grasp the reasoning process. Intuitively, humans may have numerous pieces of thoughts popping up in no pre-defined order; thoughts are not limited to the problem at hand, and can even be concerned with other related problems. By comparing diverse thoughts and chaining relevant pieces, humans are less prone to errors. In this paper, we take this inspiration and propose CANTOR, a numerical reasoner that models reasoning steps using a directed acyclic graph where we produce diverse reasoning steps simultaneously without pre-defined decoding dependencies, and compare and chain relevant ones to reach a solution. Extensive experiments demonstrated the effectiveness of CANTOR under both fully-supervised and weakly-supervised settings.
journal: null
volume: null
doi: 10.18653/v1/2022.findings-emnlp.187
n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note: null
__index_level_0__: 26,699

entry_type: inproceedings
citation_key: katz-etal-2022-inferring
title: Inferring Implicit Relations in Complex Questions with Language Models
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.188/
author: Katz, Uri and Geva, Mor and Berant, Jonathan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 2548--2566
abstract:
A prominent challenge for modern language understanding systems is the ability to answer implicit reasoning questions, where the required reasoning steps for answering the question are not mentioned in the text explicitly. In this work, we investigate why current models struggle with implicit reasoning question answering (QA) tasks, by decoupling inference of reasoning steps from their execution. We define a new task of implicit relation inference and construct a benchmark, IMPLICITRELATIONS, where given a question, a model should output a list of concept-relation pairs, where the relations describe the implicit reasoning steps required for answering the question. Using IMPLICITRELATIONS, we evaluate models from the GPT-3 family and find that, while these models struggle on the implicit reasoning QA task, they often succeed at inferring implicit relations. This suggests that the challenge in implicit reasoning questions does not stem from the need to plan a reasoning strategy alone, but to do it while also retrieving and reasoning over relevant information.
null
null
10.18653/v1/2022.findings-emnlp.188
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,700
inproceedings
ye-etal-2022-eliciting
Eliciting and Understanding Cross-task Skills with Task-level Mixture-of-Experts
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.189/
Ye, Qinyuan and Zha, Juan and Ren, Xiang
Findings of the Association for Computational Linguistics: EMNLP 2022
2567--2592
Recent works suggest that transformer models are capable of multi-tasking on diverse NLP tasks and adapting to new tasks efficiently. However, the potential of these multi-task models may be limited as they use the same set of parameters for all tasks. In contrast, humans tackle tasks in a more flexible way, by making proper presumptions on what skills and knowledge are relevant and executing only the necessary computations. Inspired by this, we propose to use task-level mixture-of-expert models, which have a collection of transformer layers (i.e., experts) and a router component to choose among these experts dynamically and flexibly. We find that these models help improve the average performance gain (ARG) metric by 2.6{\%} when adapting to unseen tasks in few-shot settings, and by 5.6{\%} in zero-shot generalization settings. Further, we show that the learned routing decisions and experts partly rediscover human categorization of NLP tasks {--} certain experts are strongly associated with extractive tasks, some with classification tasks, and some with tasks requiring world knowledge.
null
null
10.18653/v1/2022.findings-emnlp.189
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,701
inproceedings
zhou-bollegala-2022-curious
On the Curious Case of l2 norm of Sense Embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.190/
Zhou, Yi and Bollegala, Danushka
Findings of the Association for Computational Linguistics: EMNLP 2022
2593--2602
We show that the l2 norm of a static sense embedding encodes information related to the frequency of that sense in the training corpus used to learn the sense embeddings. This finding can be seen as an extension of a previously known relationship for word embeddings to sense embeddings. Our experimental results show that in spite of its simplicity, the l2 norm of sense embeddings is a surprisingly effective feature for several word sense related tasks such as (a) most frequent sense prediction, (b) word-in-context (WiC), and (c) word sense disambiguation (WSD). In particular, by simply including the l2 norm of a sense embedding as a feature in a classifier, we show that we can improve WiC and WSD methods that use static sense embeddings.
null
null
10.18653/v1/2022.findings-emnlp.190
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,702
inproceedings
ardakani-2022-partially
Partially-Random Initialization: A Smoking Gun for Binarization Hypothesis of {BERT}
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.191/
Ardakani, Arash
Findings of the Association for Computational Linguistics: EMNLP 2022
2603--2612
In the past few years, pre-trained BERT has become one of the most popular deep-learning language models due to its remarkable performance in natural language processing (NLP) tasks. However, the superior performance of BERT comes at the cost of high computational and memory complexity, hindering its envisioned widespread deployment in edge devices with limited computing resources. Binarization can alleviate these limitations by reducing storage requirements and improving computing performance. However, obtaining a comparable accuracy for binary BERT w.r.t. its full-precision counterpart is still a difficult task. We observe that direct binarization of pre-trained BERT provides a poor initialization during the fine-tuning phase, making the model incapable of achieving a decent accuracy on downstream tasks. Based on this observation, we put forward the following \textit{hypothesis}: partially randomly-initialized BERT with binary weights and activations can reach a decent accuracy by distilling knowledge from its full-precision counterpart. We show that BERT with a pre-trained embedding layer and a randomly-initialized encoder is a smoking gun for this hypothesis. We identify the smoking gun through a series of experiments and show that it yields a new set of state-of-the-art results on the GLUE and SQuAD benchmarks.
null
null
10.18653/v1/2022.findings-emnlp.191
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,703
inproceedings
zhou-etal-2022-prompt
Prompt Consistency for Zero-Shot Task Generalization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.192/
Zhou, Chunting and He, Junxian and Ma, Xuezhe and Berg-Kirkpatrick, Taylor and Neubig, Graham
Findings of the Association for Computational Linguistics: EMNLP 2022
2613--2626
One of the most impressive results of recent NLP history is the ability of pre-trained language models to solve new tasks in a zero-shot setting. To achieve this, NLP tasks are framed as natural language prompts, generating a response indicating the predicted output. Nonetheless, the performance in such settings often lags far behind its supervised counterpart, suggesting a large space for potential improvement. In this paper, we explore methods to utilize unlabeled data to improve zero-shot performance. Specifically, we take advantage of the fact that multiple prompts can be used to specify a single task, and propose to regularize prompt consistency, encouraging consistent predictions over this diverse set of prompts. Our method makes it possible to fine-tune the model either with extra unlabeled training data, or directly on test input at inference time in an unsupervised manner. In experiments, our approach outperforms the state-of-the-art zero-shot learner, T0, on 9 out of 11 datasets across 4 NLP tasks by up to 10.6 absolute points in terms of accuracy. The gains are often attained with a small number of unlabeled examples.
null
null
10.18653/v1/2022.findings-emnlp.192
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,704
inproceedings
hu-etal-2022-context
In-Context Learning for Few-Shot Dialogue State Tracking
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.193/
Hu, Yushi and Lee, Chia-Hsuan and Xie, Tianbao and Yu, Tao and Smith, Noah A. and Ostendorf, Mari
Findings of the Association for Computational Linguistics: EMNLP 2022
2627--2643
Collecting and annotating task-oriented dialogues is time-consuming and costly. Thus, zero and few shot learning for dialogue tasks presents an exciting opportunity. In this work, we propose an in-context (IC) learning framework for zero-shot and few-shot learning dialogue state tracking (DST), where a large pretrained language model (LM) takes a test instance and a few exemplars as input, and directly decodes the dialogue state without any parameter updates. This approach is more flexible and scalable than prior DST work when adapting to new domains and scenarios. To better leverage a tabular domain description in the LM prompt, we reformulate DST into a text-to-SQL problem. We also propose a novel approach to retrieve annotated dialogues as exemplars. Empirical results on MultiWOZ show that our method IC-DST substantially outperforms previous fine-tuned state-of-the-art models in few-shot settings. In addition, we test IC-DST in zero-shot settings, in which the model only takes a fixed task instruction as input, finding that it outperforms previous zero-shot methods by a large margin.
null
null
10.18653/v1/2022.findings-emnlp.193
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,705
inproceedings
palaskar-etal-2022-advances
On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.194/
Palaskar, Shruti and Bhagia, Akshita and Bisk, Yonatan and Metze, Florian and Black, Alan W and Marasovic, Ana
Findings of the Association for Computational Linguistics: EMNLP 2022
2644--2657
Combining the visual modality with pretrained language models has been surprisingly effective for simple descriptive tasks such as image captioning. More general text generation, however, remains elusive. We take a step back and ask: How do these models work for more complex generative tasks, i.e. conditioning on both text and images? Are multimodal models simply visually adapted language models, or do they reason jointly over modalities? We investigate these questions in the context of self-rationalization (jointly generating task labels/answers and free-text explanations) of three tasks: (i) visual question answering in VQA-X, (ii) visual commonsense reasoning in VCR, and (iii) visual-textual entailment in E-SNLI-VE. We show that recent unimodal advances, CLIP image representations and scaling of language models, do not consistently improve self-rationalization in multimodal tasks. We find that no single model type works universally best across tasks, datasets, and finetuning data sizes. Our findings motivate the need for novel general backbones that move text generation from images and text beyond image captioning.
null
null
10.18653/v1/2022.findings-emnlp.194
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,706
inproceedings
pramanick-etal-2022-challenges
The challenges of temporal alignment on {T}witter during crises
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.195/
Pramanick, Aniket and Beck, Tilman and Stowe, Kevin and Gurevych, Iryna
Findings of the Association for Computational Linguistics: EMNLP 2022
2658--2672
Language use changes over time, and this impacts the effectiveness of NLP systems. This phenomenon is even more prevalent in social media data during crisis events where meaning and frequency of word usage may change over the course of days. Contextual language models fail to adapt temporally, emphasizing the need for temporal adaptation in models which need to be deployed over an extended period of time. While existing approaches consider data spanning large periods of time (from years to decades), shorter time spans are critical for crisis data. We quantify temporal degradation for this scenario and propose methods to cope with performance loss by leveraging techniques from domain adaptation. To the best of our knowledge, this is the first effort to explore effects of rapid language change driven by adversarial adaptations, particularly during natural and human-induced disasters. Through extensive experimentation on diverse crisis datasets, we analyze under what conditions our approaches outperform strong baselines while highlighting the current limitations of temporal adaptation methods in scenarios where access to unlabeled data is scarce.
null
null
10.18653/v1/2022.findings-emnlp.195
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,707
inproceedings
ulmer-etal-2022-experimental
Experimental Standards for Deep Learning in Natural Language Processing Research
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.196/
Ulmer, Dennis and Bassignana, Elisa and M{\"u}ller-Eberstein, Max and Varab, Daniel and Zhang, Mike and van der Goot, Rob and Hardmeier, Christian and Plank, Barbara
Findings of the Association for Computational Linguistics: EMNLP 2022
2673--2692
The field of Deep Learning (DL) has undergone explosive growth during the last decade, with a substantial impact on Natural Language Processing (NLP) as well. Yet, compared to more established disciplines, a lack of common experimental standards remains an open challenge to the field at large. Starting from fundamental scientific principles, we distill ongoing discussions on experimental standards in NLP into a single, widely-applicable methodology. Following these best practices is crucial to strengthen experimental evidence, improve reproducibility and enable scientific progress. These standards are further collected in a public repository to help them transparently adapt to future needs.
null
null
10.18653/v1/2022.findings-emnlp.196
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,708
inproceedings
le-etal-2022-shot
Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of In-Context Experts
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.197/
Le, Nghia T. and Bai, Fan and Ritter, Alan
Findings of the Association for Computational Linguistics: EMNLP 2022
2693--2706
Anaphora resolution is an important task for information extraction across a range of languages, text genres, and domains, motivating the need for methods that do not require large annotated datasets. In-context learning has emerged as a promising approach, yet there are a number of challenges in applying in-context learning to resolve anaphora. For example, encoding a single in-context demonstration that consists of an anaphor, a paragraph-length context, and a list of corresponding antecedents requires conditioning a language model on a long sequence of tokens, limiting the number of demonstrations per prompt. In this paper, we present Mice (Mixtures of In-Context Experts), which we demonstrate is effective for few-shot anaphora resolution in scientific protocols. Given only a handful of training examples, Mice combines the predictions of hundreds of in-context experts, yielding a 30{\%} increase in F1 score over a competitive prompt retrieval baseline. Furthermore, we show Mice can be used to train compact student models without sacrificing performance. As far as we are aware, this is the first work to present experimental results demonstrating the effectiveness of in-context learning on the task of few-shot anaphora resolution in scientific protocols.
null
null
10.18653/v1/2022.findings-emnlp.197
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,709
inproceedings
ulmer-etal-2022-exploring
Exploring Predictive Uncertainty and Calibration in {NLP}: A Study on the Impact of Method {\&} Data Scarcity
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.198/
Ulmer, Dennis and Frellsen, Jes and Hardmeier, Christian
Findings of the Association for Computational Linguistics: EMNLP 2022
2707--2735
We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages. By training models on sub-sampled datasets in three different languages, we assess the quality of estimates from a wide array of approaches and their dependence on the amount of available data. We find that while approaches based on pre-trained models and ensembles achieve the best results overall, the quality of uncertainty estimates can surprisingly suffer with more data. We also perform a qualitative analysis of uncertainties on sequences, discovering that a model's total uncertainty seems to be influenced to a large degree by its data uncertainty, not model uncertainty. All model implementations are open-sourced in a software package.
null
null
10.18653/v1/2022.findings-emnlp.198
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,710
inproceedings
chi-etal-2022-conditional
Conditional Supervised Contrastive Learning for Fair Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.199/
Chi, Jianfeng and Shand, William and Yu, Yaodong and Chang, Kai-Wei and Zhao, Han and Tian, Yuan
Findings of the Association for Computational Linguistics: EMNLP 2022
2736--2756
Contrastive representation learning has gained much attention due to its superior performance in learning representations from both image and sequential data. However, the learned representations could potentially lead to performance disparities in downstream tasks, such as increased silencing of underrepresented groups in toxicity comment classification. In light of this challenge, in this work, we study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning. Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives, and then propose to use conditional supervised contrastive objectives to learn fair representations for text classification. We conduct experiments on two text datasets to demonstrate the effectiveness of our approaches in balancing the trade-offs between task performance and bias mitigation among existing baselines for text classification. Furthermore, we also show that the proposed methods are stable in different hyperparameter settings.
null
null
10.18653/v1/2022.findings-emnlp.199
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,711
inproceedings
li-etal-2022-spabert
{S}pa{BERT}: A Pretrained Language Model from Geographic Data for Geo-Entity Representation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.200/
Li, Zekun and Kim, Jina and Chiang, Yao-Yi and Chen, Muhao
Findings of the Association for Computational Linguistics: EMNLP 2022
2757--2769
Named geographic entities (geo-entities for short) are the building blocks of many geographic datasets. Characterizing geo-entities is integral to various application domains, such as geo-intelligence and map comprehension, while a key challenge is to capture the spatial-varying context of an entity. We hypothesize that we shall know the characteristics of a geo-entity by its surrounding entities, similar to knowing word meanings by their linguistic context. Accordingly, we propose a novel spatial language model, SpaBERT, which provides a general-purpose geo-entity representation based on neighboring entities in geospatial data. SpaBERT extends BERT to capture linearized spatial context, while incorporating a spatial coordinate embedding mechanism to preserve spatial relations of entities in the 2-dimensional space. SpaBERT is pretrained with masked language modeling and masked entity prediction tasks to learn spatial dependencies. We apply SpaBERT to two downstream tasks: geo-entity typing and geo-entity linking. Compared with the existing language models that do not use spatial context, SpaBERT shows significant performance improvement on both tasks. We also analyze the entity representation from SpaBERT in various settings and the effect of spatial coordinate embedding.
null
null
10.18653/v1/2022.findings-emnlp.200
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,712
inproceedings
du-etal-2022-self
Self-training with Two-phase Self-augmentation for Few-shot Dialogue Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.201/
Du, Wanyu and Chen, Hanjie and Ji, Yangfeng
Findings of the Association for Computational Linguistics: EMNLP 2022
2770--2784
In task-oriented dialogue systems, response generation from meaning representations (MRs) often suffers from limited training examples, due to the high cost of annotating MR-to-Text pairs. Previous works on self-training leverage fine-tuned conversational models to automatically generate pseudo-labeled MR-to-Text pairs for further fine-tuning. However, some self-augmented data may be noisy or uninformative for the model to learn from. In this work, we propose a two-phase self-augmentation procedure to generate high-quality pseudo-labeled MR-to-Text pairs: the first phase selects the most informative MRs based on the model's prediction uncertainty; with the selected MRs, the second phase generates accurate responses by aggregating multiple perturbed latent representations from each MR. Empirical experiments on two benchmark datasets, FewShotWOZ and FewShotSGD, show that our method generally outperforms existing self-training methods on both automatic and human evaluations.
null
null
10.18653/v1/2022.findings-emnlp.201
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,713
inproceedings
aufrant-2022-nlp
Is {NLP} Ready for Standardization?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.202/
Aufrant, Lauriane
Findings of the Association for Computational Linguistics: EMNLP 2022
2785--2800
While standardization is a well-established activity in other scientific fields such as telecommunications, networks or multimedia, in the field of AI and more specifically NLP it is still at its dawn. In this paper, we explore how various aspects of NLP (evaluation, data, tasks...) lack standards and how that can impact science, but also the society, the industry, and regulations. We argue that the numerous initiatives to rationalize the field and establish good practices are only the first step, and developing formal standards remains needed to bring further clarity to NLP research and industry, at a time where this community faces various crises regarding ethics or reproducibility. We thus encourage NLP researchers to contribute to existing and upcoming standardization projects, so that they can express their needs and concerns, while sharing their expertise.
null
null
10.18653/v1/2022.findings-emnlp.202
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,714
inproceedings
eisape-etal-2022-probing
Probing for Incremental Parse States in Autoregressive Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.203/
Eisape, Tiwalayo and Gangireddy, Vineet and Levy, Roger and Kim, Yoon
Findings of the Association for Computational Linguistics: EMNLP 2022
2801--2813
Next-word predictions from autoregressive neural language models show remarkable sensitivity to syntax. This work evaluates the extent to which this behavior arises as a result of a learned ability to maintain implicit representations of incremental syntactic structures. We extend work in syntactic probing to the incremental setting and present several probes for extracting incomplete syntactic structure (operationalized through parse states from a stack-based parser) from autoregressive language models. We find that our probes can be used to predict model preferences on ambiguous sentence prefixes and causally intervene on model representations and steer model behavior. This suggests implicit incremental syntactic inferences underlie next-word predictions in autoregressive neural language models.
null
null
10.18653/v1/2022.findings-emnlp.203
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,715
inproceedings
si-etal-2022-examining
Re-Examining Calibration: The Case of Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.204/
Si, Chenglei and Zhao, Chen and Min, Sewon and Boyd-Graber, Jordan
Findings of the Association for Computational Linguistics: EMNLP 2022
2814--2829
For users to trust model predictions, they need to understand model outputs, particularly their confidence {---} calibration aims to adjust (calibrate) models' confidence to match expected accuracy. We argue that the traditional calibration evaluation does not promote effective calibrations: for example, it can encourage always assigning a mediocre confidence score to all predictions, which does not help users distinguish correct predictions from wrong ones. Building on those observations, we propose a new calibration metric, MacroCE, that better captures whether the model assigns low confidence to wrong predictions and high confidence to correct predictions. Focusing on the practical application of open-domain question answering, we examine conventional calibration methods applied on the widely-used retriever-reader pipeline, all of which do not bring significant gains under our new MacroCE metric. Toward better calibration, we propose a new calibration method (ConsCal) that uses not just final model predictions but whether multiple model checkpoints make consistent predictions. Altogether, we provide an alternative view of calibration along with a new metric, re-evaluation of existing calibration methods on our metric, and proposal of a more effective calibration method.
null
null
10.18653/v1/2022.findings-emnlp.204
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,716
inproceedings
mackenzie-etal-2022-accelerating
Accelerating Learned Sparse Indexes Via Term Impact Decomposition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.205/
Mackenzie, Joel and Mallia, Antonio and Moffat, Alistair and Petri, Matthias
Findings of the Association for Computational Linguistics: EMNLP 2022
2830--2842
Novel inverted index-based learned sparse ranking models provide more effective, but less efficient, retrieval performance compared to traditional ranking models like BM25. In this paper, we introduce a technique we call postings clipping to improve the query efficiency of learned representations. Our technique amplifies the benefit of dynamic pruning query processing techniques by accounting for changes in term importance distributions of learned ranking models. The new clipping mechanism accelerates top-k retrieval by up to 9.6X without any loss in effectiveness.
null
null
10.18653/v1/2022.findings-emnlp.205
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,717
inproceedings
mueller-etal-2022-text
Do Text-to-Text Multi-Task Learners Suffer from Task Conflict?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.206/
Mueller, David and Andrews, Nicholas and Dredze, Mark
Findings of the Association for Computational Linguistics: EMNLP 2022
2843--2858
Traditional multi-task learning architectures learn a single model across multiple tasks through a shared encoder followed by task-specific decoders. Learning these models often requires specialized training algorithms that address task-conflict in the shared parameter updates, which otherwise can lead to negative transfer. A new type of multi-task learning within NLP homogenizes multi-task architectures as a shared encoder and language model decoder, which does surprisingly well across a range of diverse tasks. Does this new architecture suffer from task-conflicts that require specialized training algorithms? We study how certain factors in the shift towards text-to-text models affects multi-task conflict and negative transfer, finding that both directional conflict and transfer are surprisingly constant across architectures.
null
null
10.18653/v1/2022.findings-emnlp.206
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,718
inproceedings
godey-etal-2022-manta
{MANT}a: Efficient Gradient-Based Tokenization for End-to-End Robust Language Modeling
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.207/
Godey, Nathan and Castagn{\'e}, Roman and de la Clergerie, {\'E}ric and Sagot, Beno{\^i}t
Findings of the Association for Computational Linguistics: EMNLP 2022
2859--2870
Static subword tokenization algorithms have been an essential component of recent works on language modeling. However, their static nature results in important flaws that degrade the models' downstream performance and robustness. In this work, we propose MANTa, a Module for Adaptive Neural TokenizAtion. MANTa is a differentiable tokenizer trained end-to-end with the language model. The resulting system offers a trade-off between the expressiveness of byte-level models and the speed of models trained using subword tokenization. In addition, our tokenizer is highly explainable since it produces an explicit segmentation of sequences into blocks. We evaluate our pre-trained model on several English datasets from different domains as well as on synthetic noise. We find that MANTa improves robustness to character perturbations and out-of-domain data. We then show that MANTa performs comparably to other models on the general-domain GLUE benchmark. Finally, we show that it is considerably faster than strictly byte-level models.
null
null
10.18653/v1/2022.findings-emnlp.207
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,719
inproceedings
aich-etal-2022-towards
Towards Intelligent Clinically-Informed Language Analyses of People with Bipolar Disorder and Schizophrenia
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.208/
Aich, Ankit and Quynh, Avery and Badal, Varsha and Pinkham, Amy and Harvey, Philip and Depp, Colin and Parde, Natalie
Findings of the Association for Computational Linguistics: EMNLP 2022
2871--2887
NLP offers a myriad of opportunities to support mental health research. However, prior work has almost exclusively focused on social media data, for which diagnoses are difficult or impossible to validate. We present a first-of-its-kind dataset of manually transcribed interactions with people clinically diagnosed with bipolar disorder and schizophrenia, as well as healthy controls. Data was collected through validated clinical tasks and paired with diagnostic measures. We extract 100+ temporal, sentiment, psycholinguistic, emotion, and lexical features from the data and establish classification validity using a variety of models to study language differences between diagnostic groups. Our models achieve strong classification performance (maximum F1=0.93-0.96), and lead to the discovery of interesting associations between linguistic features and diagnostic class. It is our hope that this dataset will offer high value to clinical and NLP researchers, with potential for widespread broader impacts.
null
null
10.18653/v1/2022.findings-emnlp.208
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,720
inproceedings
xie-etal-2022-calibrating
Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.209/
Xie, Kaige and Wiegreffe, Sarah and Riedl, Mark
Findings of the Association for Computational Linguistics: EMNLP 2022
2888--2902
Multi-hop Question Answering (QA) is a challenging task since it requires an accurate aggregation of information from multiple context paragraphs and a thorough understanding of the underlying reasoning chains. Recent work in multi-hop QA has shown that performance can be boosted by first decomposing the questions into simpler, single-hop questions. In this paper, we explore one additional utility of the multi-hop decomposition from the perspective of explainable NLP: to create explanations by probing a neural QA model with them. We hypothesize that in doing so, users will be better able to predict when the underlying QA system will give the correct answer. Through human participant studies, we verify that exposing users to the decomposition probes and the answers to those probes can increase their ability to predict system performance on a question instance basis. We show that decomposition is an effective form of probing QA systems as well as a promising approach to explanation generation. In-depth analyses show the need for improvements in decomposition systems.
null
null
10.18653/v1/2022.findings-emnlp.209
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,721
inproceedings
nguyen-son-etal-2022-checkhard
{C}heck{HARD}: Checking Hard Labels for Adversarial Text Detection, Prediction Correction, and Perturbed Word Suggestion
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.210/
Nguyen-Son, Hoang-Quoc and Ung, Huy Quang and Hidano, Seira and Fukushima, Kazuhide and Kiyomoto, Shinsaku
Findings of the Association for Computational Linguistics: EMNLP 2022
2903--2913
An adversarial attack generates harmful text that fools a target model. More dangerously, this text is unrecognizable by humans. Existing work detects adversarial text and corrects a target's prediction by identifying perturbed words and changing them into their synonyms, but many benign words are also changed. In this paper, we directly detect adversarial text, correct the prediction, and suggest perturbed words by checking the change in the hard labels from the target's predictions after replacing a word with its transformation using a model that we call CheckHARD. The experiments demonstrate that CheckHARD outperforms existing work on various attacks, models, and datasets.
null
null
10.18653/v1/2022.findings-emnlp.210
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,722
inproceedings
mei-etal-2022-mitigating
Mitigating Covertly Unsafe Text within Natural Language Systems
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.211/
Mei, Alex and Kabir, Anisha and Levy, Sharon and Subbiah, Melanie and Allaway, Emily and Judge, John and Patton, Desmond and Bimber, Bruce and McKeown, Kathleen and Wang, William Yang
Findings of the Association for Computational Linguistics: EMNLP 2022
2914--2926
An increasingly prevalent problem for intelligent technologies is text safety, as uncontrolled systems may generate recommendations to their users that lead to injury or life-threatening consequences. However, the degree of explicitness of a generated statement that can cause physical harm varies. In this paper, we distinguish types of text that can lead to physical harm and establish one particularly underexplored category: covertly unsafe text. Then, we further break down this category with respect to the system's information and discuss solutions to mitigate the generation of text in each of these subcategories. Ultimately, our work defines the problem of covertly unsafe language that causes physical harm and argues that this subtle yet dangerous issue needs to be prioritized by stakeholders and regulators. We highlight mitigation strategies to inspire future researchers to tackle this challenging problem and help improve safety within smart systems.
null
null
10.18653/v1/2022.findings-emnlp.211
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,723
inproceedings
shang-etal-2022-know
{\textquotedblleft}{I} Know Who You Are{\textquotedblright}: Character-Based Features for Conversational Humor Recognition in {C}hinese
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.212/
Shang, Wenbo and Zhao, Jiangjiang and Wang, Zezhong and Li, Binyang and Yang, Fangchun and Wong, Kam-Fai
Findings of the Association for Computational Linguistics: EMNLP 2022
2927--2932
Humor plays an important role in our daily life, as it is an essential and fascinating element in communication between people. Therefore, how to recognize punchlines from dialogue, i.e. conversational humor recognition, has attracted much interest from the computational linguistics community. However, most existing work attempted to understand conversational humor by analyzing the contextual information of the dialogue, but neglected the character of the interlocutor, such as age, gender, occupation, and so on. For instance, the same utterance could come across as humorous from a serious person, but as a plain expression from a naive person. To this end, this paper proposes a Character Fusion Conversational Humor Recognition model (CFCHR) to explore character information to recognize conversational humor. CFCHR utilizes a multi-task learning framework that unifies two highly pertinent tasks, i.e., character extraction and punchline identification. Based on deep neural networks, we trained both tasks jointly with shared weights to extract common, task-invariant features while each task could still learn its task-specific features. Experiments were conducted on a Chinese sitcom corpus consisting of 12,677 utterances from 22 characters. The experimental results demonstrated that CFCHR could achieve 33.08{\%} improvements in terms of F1-score over some strong baselines, and proved the effectiveness of character information for identifying punchlines.
null
null
10.18653/v1/2022.findings-emnlp.212
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,724
inproceedings
wu-etal-2022-debiasgan
{D}ebias{GAN}: Eliminating Position Bias in News Recommendation with Adversarial Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.213/
Wu, Chuhan and Wu, Fangzhao and He, Xiangnan and Huang, Yongfeng
Findings of the Association for Computational Linguistics: EMNLP 2022
2933--2938
Click behaviors are widely used for learning news recommendation models, but they are heavily affected by the biases brought by the news display positions. It is important to remove position biases to train an unbiased recommendation model and capture unbiased user interest. In this paper, we propose a news recommendation method named DebiasGAN that can effectively alleviate position biases via adversarial learning. The core idea is modeling the personalized effect of position bias on click behaviors in a candidate-aware way, and learning debiased candidate-aware user embeddings from which the position information cannot be discriminated. More specifically, we use a bias-aware click model to capture the effect of position bias on click behaviors, and use a bias-invariant click model with random candidate positions to estimate the ideally unbiased click scores. We apply adversarial learning to the embeddings learned by the two models to help the bias-invariant click model capture debiased user interest. Experimental results on two real-world datasets show that DebiasGAN effectively improves news recommendation by eliminating position biases.
null
null
10.18653/v1/2022.findings-emnlp.213
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,725
inproceedings
hyun-etal-2022-generating
Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.214/
Hyun, Dongmin and Wang, Xiting and Park, Chayoung and Xie, Xing and Yu, Hwanjo
Findings of the Association for Computational Linguistics: EMNLP 2022
2939--2951
Sentence summarization shortens given texts while maintaining core contents of the texts. Unsupervised approaches have been studied to summarize texts without ground-truth summaries. However, recent unsupervised models are extractive: they remove words from texts and are thus less flexible than abstractive summarization. In this work, we devise an abstractive model based on reinforcement learning without ground-truth summaries. We formulate the unsupervised summarization based on the Markov decision process with rewards representing the summary quality. To further enhance the summary quality, we develop a multi-summary learning mechanism that generates multiple summaries with varying lengths for a given text, while making the summaries mutually enhance each other. Experimental results show that the proposed model substantially outperforms both abstractive and extractive models, while frequently generating new words not contained in the input texts.
null
null
10.18653/v1/2022.findings-emnlp.214
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,726
inproceedings
wang-etal-2022-multilingual
Multilingual Sentence Transformer as A Multilingual Word Aligner
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.215/
Wang, Weikang and Chen, Guanhua and Wang, Hanqing and Han, Yue and Chen, Yun
Findings of the Association for Computational Linguistics: EMNLP 2022
2952--2963
Multilingual pretrained language models (mPLMs) have shown their effectiveness in multilingual word alignment induction. However, these methods usually start from mBERT or XLM-R. In this paper, we investigate whether the multilingual sentence Transformer LaBSE is a strong multilingual word aligner. This idea is non-trivial as LaBSE is trained to learn language-agnostic sentence-level embeddings, while the alignment extraction task requires the more fine-grained word-level embeddings to be language-agnostic. We demonstrate that the vanilla LaBSE outperforms other mPLMs currently used in the alignment task, and then propose to finetune LaBSE on parallel corpora for further improvement. Experiment results on seven language pairs show that our best aligner outperforms previous state-of-the-art models of all varieties. In addition, our aligner supports different language pairs in a single model, and even achieves new state-of-the-art results on zero-shot language pairs that do not appear in the finetuning process.
null
null
10.18653/v1/2022.findings-emnlp.215
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,727
inproceedings
dixit-etal-2022-core
{CORE}: A Retrieve-then-Edit Framework for Counterfactual Data Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.216/
Dixit, Tanay and Paranjape, Bhargavi and Hajishirzi, Hannaneh and Zettlemoyer, Luke
Findings of the Association for Computational Linguistics: EMNLP 2022
2964--2984
Counterfactual data augmentation (CDA) {--} i.e., adding minimally perturbed inputs during training {--} helps reduce model reliance on spurious correlations and improves generalization to out-of-distribution (OOD) data. Prior work on generating counterfactuals only considered restricted classes of perturbations, limiting their effectiveness. We present Counterfactual Generation via Retrieval and Editing (CORE), a retrieval-augmented generation framework for creating diverse counterfactual perturbations for CDA. For each training example, CORE first performs a dense retrieval over a task-related unlabeled text corpus using a learned bi-encoder and extracts relevant counterfactual excerpts. CORE then incorporates these into prompts to a large language model with few-shot learning capabilities, for counterfactual editing. Conditioning language model edits on naturally occurring data results in more diverse perturbations. Experiments on natural language inference and sentiment analysis benchmarks show that CORE counterfactuals are more effective at improving generalization to OOD data compared to other DA approaches. We also show that the CORE retrieval framework can be used to encourage diversity in manually authored perturbations.
null
null
10.18653/v1/2022.findings-emnlp.216
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,728
inproceedings
huang-etal-2022-conversation
Conversation Disentanglement with Bi-Level Contrastive Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.217/
Huang, Chengyu and Zhang, Zheng and Fei, Hao and Liao, Lizi
Findings of the Association for Computational Linguistics: EMNLP 2022
2985--2996
Conversation disentanglement aims to group utterances into detached sessions, which is a fundamental task in processing multi-party conversations. Existing methods have two main drawbacks. First, they overemphasize pairwise utterance relations but pay inadequate attention to the utterance-to-context relation modeling. Second, a huge amount of human-annotated data is required for training, which is expensive to obtain in practice. To address these issues, we propose a general disentangle model based on bi-level contrastive learning. It brings utterances in the same session closer together while encouraging each utterance to be near its clustered session prototypes in the representation space. Unlike existing approaches, our disentangle model works in both the supervised setting with labeled data and the unsupervised setting when no such data is available. The proposed method achieves new state-of-the-art performance in both settings across several public datasets.
null
null
10.18653/v1/2022.findings-emnlp.217
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,729
inproceedings
drozdov-etal-2022-cant
You can't pick your neighbors, or can you? When and How to Rely on Retrieval in the k{NN}-{LM}
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.218/
Drozdov, Andrew and Wang, Shufan and Rahimi, Razieh and McCallum, Andrew and Zamani, Hamed and Iyyer, Mohit
Findings of the Association for Computational Linguistics: EMNLP 2022
2997--3007
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the kNN-LM, interpolates any existing LM's predictions with the output of a k-nearest neighbors model and requires no additional training. In this paper, we explore the importance of lexical and semantic matching in the context of items retrieved by kNN-LM. We find two trends: (1) the presence of large overlapping n-grams between the datastore and evaluation set is an important factor in strong performance, even when the datastore is derived from the training data; and (2) the kNN-LM is most beneficial when retrieved items have high semantic similarity with the query. Based on our analysis, we define a new formulation of the kNN-LM that uses retrieval quality to assign the interpolation coefficient. We empirically measure the effectiveness of our approach on two English language modeling datasets, Wikitext-103 and PG-19. Our re-formulation of the kNN-LM is beneficial in both cases, and leads to nearly 4{\%} improvement in perplexity on the Wikitext-103 test set.
null
null
10.18653/v1/2022.findings-emnlp.218
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,730
inproceedings
jin-lee-2022-stubot
{S}tu{B}ot: Learning by Teaching a Conversational Agent Through Machine Reading Comprehension
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.219/
Jin, Nayoung and Lee, Hana
Findings of the Association for Computational Linguistics: EMNLP 2022
3008--3020
This paper proposes StuBot, a text-based conversational agent that provides adaptive feedback for learning by teaching. StuBot first asks the users to teach the learning content by summarizing and explaining it in their own words. After the users inputted the explanation text for teaching, StuBot uses a machine reading comprehension (MRC) engine to provide adaptive feedback with further questions about the insufficient parts of the explanation text. We conducted a within-subject study to evaluate the effectiveness of adaptive feedback by StuBot. Both the quantitative and qualitative results showed that learning by teaching with adaptive feedback can improve learning performance, immersion, and overall experience.
null
null
10.18653/v1/2022.findings-emnlp.219
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,731
inproceedings
jiang-etal-2022-improved
Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.220/
Jiang, Yuxin and Zhang, Linhan and Wang, Wei
Findings of the Association for Computational Linguistics: EMNLP 2022
3021--3035
Contrastive learning has been demonstrated to be effective in enhancing pre-trained language models (PLMs) to derive superior universal sentence embeddings. However, existing contrastive methods still have two limitations. Firstly, previous works may suffer from poor performance under domain shift settings, thus hindering the application of sentence representations in practice. We attribute this low performance to the over-parameterization of PLMs with millions of parameters. To alleviate it, we propose PromCSE (Prompt-based Contrastive Learning for Sentence Embeddings), which only trains a small-scale Soft Prompt (i.e., a set of trainable vectors) while keeping PLMs fixed. Secondly, the commonly used NT-Xent loss function of contrastive learning does not fully exploit hard negatives in supervised learning settings. To this end, we propose to integrate an Energy-based Hinge loss to enhance the pairwise discriminative power, inspired by the connection between the NT-Xent loss and the Energy-based Learning paradigm. Empirical results on seven standard semantic textual similarity (STS) tasks and a domain-shifted STS task both show the effectiveness of our method compared with the current state-of-the-art sentence embedding models.
null
null
10.18653/v1/2022.findings-emnlp.220
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,732
inproceedings
wu-etal-2022-rap
{R}a{P}: Redundancy-aware Video-language Pre-training for Text-Video Retrieval
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.221/
Wu, Xing and Gao, Chaochen and Lin, Zijia and Wang, Zhongyuan and Han, Jizhong and Hu, Songlin
Findings of the Association for Computational Linguistics: EMNLP 2022
3036--3047
Video language pre-training methods have mainly adopted sparse sampling techniques to alleviate the temporal redundancy of videos. Though effective, sparse sampling still suffers from inter-modal redundancy: visual redundancy and textual redundancy. Compared with highly generalized text, sparsely sampled frames usually contain text-independent portions, called visual redundancy. Sparse sampling is also likely to miss important frames corresponding to some text portions, resulting in textual redundancy. Inter-modal redundancy leads to a mismatch of video and text information, hindering the model from better learning the shared semantics across modalities. To alleviate it, we propose Redundancy-aware Video-language Pre-training. We design a redundancy measurement of video patches and text tokens by calculating the cross-modal minimum dissimilarity. Then, we penalize the highly redundant video patches and text tokens through a proposed redundancy-aware contrastive learning. We evaluate our method on four benchmark datasets, MSRVTT, MSVD, DiDeMo, and LSMDC, achieving a significant improvement over the previous state-of-the-art results.
null
null
10.18653/v1/2022.findings-emnlp.221
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,733
inproceedings
zhang-etal-2022-fcgcl
{FCGCL}: Fine- and Coarse-Granularity Contrastive Learning for Speech Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.222/
Zhang, Hao and Si, Nianwen and Chen, Yaqi and Li, Zhen and Niu, Tong and Yang, Xukui and Qu, Dan
Findings of the Association for Computational Linguistics: EMNLP 2022
3048--3059
It is notoriously difficult to implement an end-to-end speech translation (E2E-ST) model because of the task complexity and data scarcity. Existing techniques often attempt to carry out implicit knowledge transfer from machine translation (MT) to the ST model by imposing various constraints. However, in this transfer scenario, a significant problem is that the performance of the MT model will drop significantly and the final transfer effect is also restricted. In this article, we recommend Fine and Coarse Granularity Contrastive Learning (FCGCL), which conducts explicit knowledge transfer from the MT to the ST model. Specifically, we ensure through multi-granularity contrastive learning that inputs with similar semantics across different modalities are encoded closely in the shared semantic space while inputs with different semantics are kept apart. Experiments on the MuST-C datasets on all 8 languages and further analysis show that our method can effectively improve the E2E-ST performance and achieves an average BLEU of 29.0.
null
null
10.18653/v1/2022.findings-emnlp.222
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,734
inproceedings
wu-etal-2022-infocse
{I}nfo{CSE}: Information-aggregated Contrastive Learning of Sentence Embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.223/
Wu, Xing and Gao, Chaochen and Lin, Zijia and Han, Jizhong and Wang, Zhongyuan and Hu, Songlin
Findings of the Association for Computational Linguistics: EMNLP 2022
3060--3070
Contrastive learning has been extensively studied in sentence embedding learning, which assumes that the embeddings of different views of the same sentence are closer. The constraint brought by this assumption is weak, and a good sentence representation should also be able to reconstruct the original sentence fragments. Therefore, this paper proposes an information-aggregated contrastive learning framework for learning unsupervised sentence embeddings, termed InfoCSE. InfoCSE forces the representation of [CLS] positions to aggregate denser sentence information by introducing an additional masked language model task and a well-designed network. We evaluate the proposed InfoCSE on several benchmark datasets w.r.t. the semantic text similarity (STS) task. Experimental results show that InfoCSE outperforms SimCSE by an average Spearman correlation of 2.60{\%} on BERT-base, and 1.77{\%} on BERT-large, achieving state-of-the-art results among unsupervised sentence representation learning methods.
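A rough sketch of the information-aggregation idea: an auxiliary network reconstructs masked tokens from the sentence-level [CLS] vector, so the [CLS] embedding is forced to carry enough information about the whole sentence. The class name, layer sizes, and fusion scheme below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AuxiliaryMLMHead(nn.Module):
    """Reconstruct masked tokens conditioned on the [CLS] sentence vector."""
    def __init__(self, hidden_size=768, vocab_size=30522, num_layers=2, num_heads=12):
        super().__init__()
        layer = nn.TransformerEncoderLayer(hidden_size, num_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(hidden_size, vocab_size)

    def forward(self, cls_vec, token_states):
        # cls_vec: [batch, hidden], token_states: [batch, seq, hidden] (shallow token states)
        fused = torch.cat([cls_vec.unsqueeze(1), token_states], dim=1)
        fused = self.fusion(fused)
        return self.lm_head(fused[:, 1:])  # MLM logits for the token positions
```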
null
null
10.18653/v1/2022.findings-emnlp.223
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,735
inproceedings
shen-etal-2022-benchmarking
Benchmarking Language Models for Code Syntax Understanding
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.224/
Shen, Da and Chen, Xinyun and Wang, Chenguang and Sen, Koushik and Song, Dawn
Findings of the Association for Computational Linguistics: EMNLP 2022
3071--3093
Pre-trained language models have demonstrated impressive performance in both natural language processing and program understanding, which represent the input as a token sequence without explicitly modeling its structure. Some prior works show that pre-trained language models can capture the syntactic rules of natural languages without finetuning on syntax understanding tasks. However, there is limited understanding of how well pre-trained models understand the code structure so far. In this work, we perform the first thorough benchmarking of the state-of-the-art pre-trained models for identifying the syntactic structures of programs. Specifically, we introduce CodeSyntax, a large-scale dataset of programs annotated with the syntactic relationships in their corresponding abstract syntax trees. Our key observation is that pre-training on massive code data does not result in decent code syntax understanding. In fact, these pre-trained programming language models fail to match the performance of naive baselines based on positional offsets and keywords. We also present a natural language benchmark to highlight the differences between natural languages and programming languages in terms of understanding corresponding syntactic structures. Our findings point out key limitations of existing pre-training methods and suggest the importance of modeling syntactic structures for the programming language.
null
null
10.18653/v1/2022.findings-emnlp.224
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,736
inproceedings
wang-etal-2022-learning-quote
Learning When and What to Quote: A Quotation Recommender System with Mutual Promotion of Recommendation and Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.225/
Wang, Lingzhi and Zeng, Xingshan and Wong, Kam-Fai
Findings of the Association for Computational Linguistics: EMNLP 2022
3094--3105
This work extends the current quotation recommendation task to a more realistic quotation recommender system that learns to predict when to quote and what to quote jointly. The system consists of three modules (tasks), a prediction module to predict whether to quote given conversation contexts, a recommendation module to recommend suitable quotations and a generation module generating quotations or sentences in ordinary language to continue the conversation. We benchmark several competitive models for the two newly introduced tasks (i.e., when-to-quote and what-to-continue). For quotation recommendation, compared with previous work that is either generation-based or ranking-based recommendation, we propose a novel framework with mutual promotion of generation module and ranking-based recommendation module. Experiments show that our framework achieves significantly better performance than baselines on two datasets. Further experiments and analyses validate the effectiveness of the proposed mechanisms and get a better understanding of the quotation recommendation task.
null
null
10.18653/v1/2022.findings-emnlp.225
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,737
inproceedings
liu-etal-2022-think
Think Beyond Words: Exploring Context-Relevant Visual Commonsense for Diverse Dialogue Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.226/
Liu, Yiting and Li, Liang and Zhang, Beichen and Huang, Qingming
Findings of the Association for Computational Linguistics: EMNLP 2022
3106--3117
Commonsense knowledge has been widely considered for building intelligent open-domain dialogue agents, aiming to generate meaningful and diverse responses. Previous works in this field usually lack the ability to effectively obtain and utilize auxiliary commonsense from the external visual world. In this paper, we argue that exploiting logical information in images related to context can be effective to enrich and steer the generation process. In view of this, we propose VICTOR, a context-relevant VIsual Commonsense enhanced dialogue generaTOR for generating coherent and informative responses. To obtain the associated visual commonsense, we devise a novel approach that expands topic words on the knowledge graph and maps them into daily scenarios. During the generation, the model adopts multimodal fusion mechanism to integrate visual and textual information, and adaptively combine their decoding distributions for better response generation. The experimental results on two public datasets show that our proposed method outperforms the latest competitive methods in terms of coherence and diversity.
null
null
10.18653/v1/2022.findings-emnlp.226
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,738
inproceedings
kaneko-etal-2022-gender-bias
Gender Bias in Meta-Embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.227/
Kaneko, Masahiro and Bollegala, Danushka and Okazaki, Naoaki
Findings of the Association for Computational Linguistics: EMNLP 2022
3118--3133
Different methods have been proposed to develop meta-embeddings from a given set of source embeddings. However, the source embeddings can contain unfair gender-related biases, and how these influence the meta-embeddings has not been studied yet. We study the gender bias in meta-embeddings created under three different settings: (1) meta-embedding multiple sources without performing any debiasing (Multi-Source No-Debiasing), (2) meta-embedding multiple sources debiased by a single method (Multi-Source Single-Debiasing), and (3) meta-embedding a single source debiased by different methods (Single-Source Multi-Debiasing). Our experimental results show that meta-embedding amplifies the gender biases compared to input source embeddings. We find that debiasing not only the sources but also their meta-embedding is needed to mitigate those biases. Moreover, we propose a novel debiasing method based on meta-embedding learning where we use multiple debiasing methods on a single source embedding and then create a single unbiased meta-embedding.
null
null
10.18653/v1/2022.findings-emnlp.227
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,739
inproceedings
zhang-etal-2022-third
Third-Party Aligner for Neural Word Alignments
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.228/
Zhang, Jinpeng and Dong, Chuanqi and Duan, Xiangyu and Zhang, Yuqi and Zhang, Min
Findings of the Association for Computational Linguistics: EMNLP 2022
3134--3145
Word alignment aims to find translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise the neural word alignment training. Specifically, the source word and target word of each word pair aligned by the third-party aligner are trained to be close neighbors to each other in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on the benchmarks of various language pairs show that our approach can surprisingly do self-correction over the third-party supervision by finding more accurate word alignments and deleting wrong word alignments, leading to better performance than various third-party word aligners, including the currently best one. When we integrate all supervisions from various third-party aligners, we achieve state-of-the-art word alignment performances, with alignment error rates that are on average more than two points lower than the best third-party aligner. We released our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner.
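A minimal sketch of how third-party alignments could supervise contextualized embeddings by pulling aligned source/target words together; the contrastive (softmax) form, the temperature, and the function name are assumptions for illustration, and may differ from the paper's objective.

```python
import torch
import torch.nn.functional as F

def alignment_supervision_loss(src_hidden, tgt_hidden, alignments, tau=0.1):
    """For every (src_idx, tgt_idx) pair aligned by the external aligner, pull the
    contextualized embeddings of the two words together, treating all other target
    words as softmax alternatives.

    src_hidden: [src_len, dim], tgt_hidden: [tgt_len, dim]
    alignments: list of (src_idx, tgt_idx) pairs from the third-party aligner
    """
    s = F.normalize(src_hidden, dim=-1)
    t = F.normalize(tgt_hidden, dim=-1)
    sims = s @ t.T / tau                                   # [src_len, tgt_len]
    src_idx = torch.tensor([i for i, _ in alignments])
    tgt_idx = torch.tensor([j for _, j in alignments])
    # Cross-entropy over target positions for each supervised source word.
    return F.cross_entropy(sims[src_idx], tgt_idx)
```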
null
null
10.18653/v1/2022.findings-emnlp.228
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,740
inproceedings
wang-etal-2022-qadialmoe
{Q}a{D}ial{M}o{E}: Question-answering Dialogue based Fact Verification with Mixture of Experts
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.229/
Wang, Longzheng and Zhang, Peng and Lu, Xiaoyu and Zhang, Lei and Yan, Chaoyang and Zhang, Chuang
Findings of the Association for Computational Linguistics: EMNLP 2022
3146--3159
Fact verification is an essential tool to mitigate the spread of false information online and has gained widespread attention recently. However, fact verification in question-answering dialogue is still underexplored. In this paper, we propose a neural network based approach called question-answering dialogue based fact verification with mixture of experts (QaDialMoE). It exploits questions and evidence effectively in the verification process and can significantly improve the performance of fact verification. Specifically, we exploit the mixture of experts to focus on various interactions among responses, questions and evidence. A manager with an attention guidance module is implemented to guide the training of experts and assign a reasonable attention score to each expert. A prompt module is developed to generate synthetic questions that make our approach more generalizable. Finally, we evaluate QaDialMoE and conduct a comparative study on three benchmark datasets. The experimental results demonstrate that QaDialMoE outperforms previous approaches by a large margin and achieves new state-of-the-art results on all benchmarks, including accuracy of 84.26{\%} on HEALTHVER, 78.7{\%} on the FAVIQ A dev set, 86.1{\%} on the FAVIQ R dev set, 86.0{\%} on its test set, and 89.5{\%} on COLLOQUIAL. To the best of our knowledge, this is the first work to investigate question-answering dialogue based fact verification.
null
null
10.18653/v1/2022.findings-emnlp.229
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,741
inproceedings
dongjie-huang-2022-multimodal
Multimodal Knowledge Learning for Named Entity Disambiguation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.230/
Dongjie, Zhang and Huang, Longtao
Findings of the Association for Computational Linguistics: EMNLP 2022
3160--3169
With the popularity of online social media, massive-scale multimodal information has brought new challenges to traditional Named Entity Disambiguation (NED) tasks. Recently, Multimodal Named Entity Disambiguation (MNED) has been proposed to link ambiguous mentions with the textual and visual contexts to a predefined knowledge graph. Existing attempts usually perform MNED by annotating multimodal mentions and adding multimodal features to traditional NED models. However, these studies may suffer from 1) failing to model multimodal information at the knowledge level, and 2) lacking multimodal annotation data against the large-scale unlabeled corpus. In this paper, we explore a pioneer study on leveraging multimodal knowledge learning to address the MNED task. Specifically, we first harvest multimodal knowledge in the Meta-Learning way, which is much easier than collecting ambiguous mention corpus. Then we design a knowledge-guided transfer learning strategy to extract unified representation from different modalities. Finally, we propose an Interactive Multimodal Learning Network (IMN) to fully utilize the multimodal information on both the mention and knowledge sides. Extensive experiments conducted on two public MNED datasets demonstrate that the proposed method achieves improvements over the state-of-the-art multimodal methods.
null
null
10.18653/v1/2022.findings-emnlp.230
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,742
inproceedings
han-etal-2022-generative
Generative Prompt Tuning for Relation Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.231/
Han, Jiale and Zhao, Shuai and Cheng, Bo and Ma, Shengkun and Lu, Wei
Findings of the Association for Computational Linguistics: EMNLP 2022
3170--3185
Using prompts to explore the knowledge contained within pre-trained language models for downstream tasks has now become an active topic. Current prompt tuning methods mostly convert the downstream tasks to masked language modeling problems by adding cloze-style phrases and mapping all labels to verbalizations with fixed length, which has proven effective for tasks with simple label spaces. However, when applied to relation classification exhibiting complex label spaces, vanilla prompt tuning methods may struggle with label verbalizations with arbitrary lengths due to rigid prompt restrictions. Inspired by the text infilling task for pre-training generative models that can flexibly predict missing spans, we propose a novel generative prompt tuning method to reformulate relation classification as an infilling problem, which frees our approach from limitations of current prompt based approaches and thus fully exploits rich semantics of entity and relation types. In addition, we design entity-guided decoding and discriminative relation scoring to generate and align relations effectively and efficiently during inference. Extensive experiments under fully supervised settings and low-resource settings demonstrate the effectiveness of our approach.
null
null
10.18653/v1/2022.findings-emnlp.231
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,743
inproceedings
wang-etal-2022-formulating
Formulating Few-shot Fine-tuning Towards Language Model Pre-training: A Pilot Study on Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.232/
Wang, Zihan and Zhao, Kewen and Wang, Zilong and Shang, Jingbo
Findings of the Association for Computational Linguistics: EMNLP 2022
3186--3199
Fine-tuning pre-trained language models is a common practice in building NLP models for various tasks, including the case with less supervision. We argue that under the few-shot setting, formulating fine-tuning closer to the pre-training objective shall be able to unleash more benefits from the pre-trained language models. In this work, we take few-shot named entity recognition (NER) for a pilot study, where existing fine-tuning strategies are much different from pre-training. We propose a novel few-shot fine-tuning framework for NER, FFF-NER. Specifically, we introduce three new types of tokens, {\textquotedblleft}is-entity{\textquotedblright}, {\textquotedblleft}which-type{\textquotedblright} and {\textquotedblleft}bracket{\textquotedblright}, so we can formulate the NER fine-tuning as (masked) token prediction or generation, depending on the choice of the pre-training objective. In our experiments, we apply to fine-tune both BERT and BART for few-shot NER on several benchmark datasets and observe significant improvements over existing fine-tuning strategies, including sequence labeling, prototype meta-learning, and prompt-based approaches. We further perform a series of ablation studies, showing few-shot NER performance is strongly correlated with the similarity between fine-tuning and pre-training.
null
null
10.18653/v1/2022.findings-emnlp.232
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,744
inproceedings
luo-etal-2022-masked
Masked Language Models Know Which are Popular: A Simple Ranking Strategy for Commonsense Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.233/
Luo, Xuan and Fan, Chuang and Zhang, Yice and Jiang, Wanguo and Qin, Bing and Xu, Ruifeng
Findings of the Association for Computational Linguistics: EMNLP 2022
3200--3213
We propose a simple ranking strategy to solve a generative commonsense question answering (QA) problem. Compared with multiple-choice QA, it is challenging because the answers to a question are not unique and they are supposed to be popular and diverse. Our strategy exploits the dataset itself and negative samples that we collect from WordNet to train a ranker that picks out the most popular answers for commonsense questions. The effectiveness of our strategy is verified on different pre-trained masked language models (MLMs) in a pipeline framework, where an MLM reranks the generated answers. Further, we explore an end-to-end framework where MLMs are utilized to guide the generation of generative language models (GLMs). Taking advantage of reinforcement learning, we apply policy gradient to train a GLM with the rewards fed back by an MLM. Empirical results on the ProtoQA dataset demonstrate that MLMs can acquire the ability to distinguish the popular answers and improve the typical answer generation of GLMs as well.
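One simple way an MLM can score candidate answers for reranking is pseudo-log-likelihood: mask each answer token in turn and sum the log-probability the MLM assigns to the true token. The sketch below is an assumption about how such a reranking signal could be computed, not the paper's code; the model choice is also hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical model choice for illustration.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def mlm_answer_score(question: str, answer: str) -> float:
    """Pseudo-log-likelihood of the answer given the question under the MLM."""
    enc = tokenizer(question, answer, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    # Answer tokens occupy segment 1; drop the final [SEP].
    answer_positions = (enc["token_type_ids"][0] == 1).nonzero().squeeze(-1)[:-1]
    score = 0.0
    for pos in answer_positions.tolist():
        masked = input_ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = mlm(input_ids=masked.unsqueeze(0)).logits[0, pos]
        score += torch.log_softmax(logits, dim=-1)[input_ids[pos]].item()
    return score
```

Candidates generated by the GLM could then be sorted by this score before being returned.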
null
null
10.18653/v1/2022.findings-emnlp.233
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,745
inproceedings
meng-etal-2022-dialogusr
{D}ialog{USR}: Complex Dialogue Utterance Splitting and Reformulation for Multiple Intent Detection
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.234/
Meng, Haoran and Xin, Zheng and Liu, Tianyu and Wang, Zizhen and Feng, He and Lin, Binghuai and Zhao, Xuemin and Cao, Yunbo and Sui, Zhifang
Findings of the Association for Computational Linguistics: EMNLP 2022
3214--3229
While interacting with chatbots, users may elicit multiple intents in a single dialogue utterance. Instead of training a dedicated multi-intent detection model, we propose DialogUSR, a dialogue utterance splitting and reformulation task that first splits a multi-intent user query into several single-intent sub-queries and then recovers all the coreferred and omitted information in the sub-queries. DialogUSR can serve as a plug-in and domain-agnostic module that empowers the multi-intent detection for the deployed chatbots with minimal efforts. We collect a high-quality naturally occurring dataset that covers 23 domains with a multi-step crowd-sourcing procedure. To benchmark the proposed dataset, we propose multiple action-based generative models that involve end-to-end and two-stage training, and conduct in-depth analyses on the pros and cons of the proposed baselines.
null
null
10.18653/v1/2022.findings-emnlp.234
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,746
inproceedings
maekawa-etal-2022-low
Low-resource Interactive Active Labeling for Fine-tuning Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.235/
Maekawa, Seiji and Zhang, Dan and Kim, Hannah and Rahman, Sajjadur and Hruschka, Estevam
Findings of the Association for Computational Linguistics: EMNLP 2022
3230--3242
Recently, active learning (AL) methods have been used to effectively fine-tune pre-trained language models for various NLP tasks such as sentiment analysis and document classification. However, given the task of fine-tuning language models, understanding the impact of different aspects on AL methods such as labeling cost, sample acquisition latency, and the diversity of the datasets necessitates a deeper investigation. This paper examines the performance of existing AL methods within a low-resource, interactive labeling setting. We observe that existing methods often underperform in such a setting while exhibiting higher latency and a lack of generalizability. To overcome these challenges, we propose a novel active learning method TYROUGE that employs a hybrid sampling strategy to minimize labeling cost and acquisition latency while providing a framework for adapting to dataset diversity via user guidance. Through our experiments, we observe that compared to SOTA methods, TYROUGE reduces the labeling cost by up to 43{\%} and the acquisition latency by as much as 11X, while achieving comparable accuracy. Finally, we discuss the strengths and weaknesses of TYROUGE by exploring the impact of dataset characteristics.
null
null
10.18653/v1/2022.findings-emnlp.235
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,747
inproceedings
wang-etal-2022-getting
Getting the Most out of Simile Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.236/
Wang, Xiaoyue and Song, Linfeng and Liu, Xin and Zhou, Chulun and Zeng, Hualin and Su, Jinsong
Findings of the Association for Computational Linguistics: EMNLP 2022
3243--3252
Simile recognition involves two subtasks: simile sentence classification that discriminates whether a sentence contains a simile, and simile component extraction that locates the corresponding objects (i.e., tenors and vehicles). Recent work ignores features other than surface strings and suffers from the data hunger issue. We explore expressive features for this task to help achieve more effective data utilization. In particular, we study two types of features: 1) input-side features that include POS tags, dependency trees and word definitions, and 2) decoding features that capture the interdependence among various decoding decisions. We further construct a model named HGSR, which merges the input-side features as a heterogeneous graph and leverages decoding features via distillation. Experiments show that HGSR significantly outperforms the current state-of-the-art systems and carefully designed baselines, verifying the effectiveness of introduced features. We will release our code upon paper acceptance.
null
null
10.18653/v1/2022.findings-emnlp.236
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,748
inproceedings
tian-etal-2022-unified
A Unified Framework for Pun Generation with Humor Principles
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.237/
Tian, Yufei and Sheth, Divyanshu and Peng, Nanyun
Findings of the Association for Computational Linguistics: EMNLP 2022
3253--3261
We propose a unified framework to generate both homophonic and homographic puns to resolve the split-up in existing works. Specifically, we incorporate three linguistic attributes of puns to the language models: ambiguity, distinctiveness, and surprise. Our framework consists of three parts: 1) a context words/phrases selector to promote the aforementioned attributes, 2) a generation model trained on non-pun sentences to incorporate the context words/phrases into the generation output, and 3) a label predictor that learns the structure of puns which is used to steer the generation model at inference time. Evaluation results on both pun types demonstrate the efficacy of our model over strong baselines.
null
null
10.18653/v1/2022.findings-emnlp.237
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,749
inproceedings
tian-etal-2022-improving-english
Improving {E}nglish-{A}rabic Transliteration with Phonemic Memories
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.238/
Tian, Yuanhe and Lou, Renze and Pang, Xiangyu and Wang, Lianxi and Jiang, Shengyi and Song, Yan
Findings of the Association for Computational Linguistics: EMNLP 2022
3262--3272
Transliteration is an important task in natural language processing (NLP) which aims to convert a name in the source language to the target language without changing its pronunciation. Particularly, transliteration from English to Arabic is highly needed in many applications, especially in countries (e.g., United Arab Emirates (UAE)) whose most citizens are foreigners but the official language is Arabic. In such a task-oriented scenario, namely transliterating the English names to the corresponding Arabic ones, the performance of the transliteration model is highly important. However, most existing neural approaches mainly apply a universal transliteration model with advanced encoders and decoders to the task, where limited attention is paid to leveraging the phonemic association between English and Arabic to further improve model performance. In this paper, we focus on transliteration of people's names from English to Arabic for the general public. In doing so, we collect a corpus named EANames by extracting high quality name pairs from online resources, which better represent the names in the general public than linked Wikipedia entries that are always names of famous people. We propose a model for English-Arabic transliteration, where a memory module modeling the phonemic association between English and Arabic is used to guide the transliteration process. We run experiments on the collected data and the results demonstrate the effectiveness of our approach for English-Arabic transliteration.
null
null
10.18653/v1/2022.findings-emnlp.238
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,750
inproceedings
pandey-etal-2022-mix
Mix-and-Match: Scalable Dialog Response Retrieval using {G}aussian Mixture Embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.239/
Pandey, Gaurav and Contractor, Danish and Joshi, Sachindra
Findings of the Association for Computational Linguistics: EMNLP 2022
3273--3287
Embedding-based approaches for dialog response retrieval embed the context-response pairs as points in the embedding space. These approaches are scalable, but fail to account for the complex, many-to-many relationships that exist between context-response pairs. On the other end of the spectrum, there are approaches that feed the context-response pairs jointly through multiple layers of neural networks. These approaches can model the complex relationships between context-response pairs, but fail to scale when the set of responses is moderately large ({\ensuremath{>}}1000). In this paper, we propose a scalable model that can learn complex relationships between context-response pairs. Specifically, the model maps the contexts as well as responses to probability distributions over the embedding space. We train the models by optimizing the Kullback-Leibler divergence between the distributions induced by context-response pairs in the training data. We show that the resultant model achieves better performance as compared to other embedding-based approaches on publicly available conversation data.
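To make the KL-based scoring concrete, here is a small sketch for the special case where the context and response distributions are single diagonal Gaussians; using one Gaussian per side (rather than a full mixture) and the function name are simplifying assumptions for illustration.

```python
import torch

def diag_gaussian_kl(mu_c, logvar_c, mu_r, logvar_r):
    """KL( N(mu_c, diag(exp(logvar_c))) || N(mu_r, diag(exp(logvar_r))) ),
    computed in closed form per dimension and summed. Lower values mean the
    context distribution is better "covered" by the response distribution.
    """
    var_c, var_r = logvar_c.exp(), logvar_r.exp()
    kl = 0.5 * (logvar_r - logvar_c + (var_c + (mu_c - mu_r) ** 2) / var_r - 1.0)
    return kl.sum(dim=-1)
```

At retrieval time, responses could be ranked by this divergence against the encoded context, which keeps scoring as cheap as a vector operation per candidate.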
null
null
10.18653/v1/2022.findings-emnlp.239
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,751
inproceedings
kwon-etal-2022-alphatuning
{A}lpha{T}uning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.240/
Kwon, Se Jung and Kim, Jeonghoon and Bae, Jeongin and Yoo, Kang Min and Kim, Jin-Hwa and Park, Baeseong and Kim, Byeongwook and Ha, Jung-Woo and Sung, Nako and Lee, Dongsoo
Findings of the Association for Computational Linguistics: EMNLP 2022
3288--3305
There is growing interest in adapting large-scale language models using parameter-efficient fine-tuning methods. However, accelerating the model itself and achieving better inference efficiency through model compression has not been thoroughly explored yet. Model compression could provide the benefits of reducing memory footprints, enabling low-precision computations, and ultimately achieving cost-effective inference. To combine parameter-efficient adaptation and model compression, we propose AlphaTuning consisting of post-training quantization of the pre-trained language model and fine-tuning only some parts of quantized parameters for a target task. Specifically, AlphaTuning works by employing binary-coding quantization, which factorizes the full-precision parameters into binary parameters and a separate set of scaling factors. During the adaptation phase, the binary values are frozen for all tasks, while the scaling factors are fine-tuned for the downstream task. We demonstrate that AlphaTuning, when applied to GPT-2 and OPT, performs competitively with full fine-tuning on a variety of downstream tasks while achieving {\ensuremath{>}}10x compression ratio under 4-bit quantization and {\ensuremath{>}}1,000x reduction in the number of trainable parameters.
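A minimal sketch of the freeze-binaries/train-scales idea for a single linear layer; it uses a 1-bit greedy binary-coding quantization for simplicity (the method itself works with multi-bit codes), and the class name and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BinaryCodedLinear(nn.Module):
    """Weight W is factorized into frozen binary codes B in {-1, +1} and
    per-output-row scaling factors alpha; only alpha (and bias) is trained."""
    def __init__(self, weight, bias=None):
        super().__init__()
        # Greedy 1-bit quantization: B = sign(W), alpha = mean |W| per output row.
        self.register_buffer("B", torch.sign(weight))         # frozen binaries
        self.alpha = nn.Parameter(weight.abs().mean(dim=1))   # trainable scales
        self.bias = nn.Parameter(bias) if bias is not None else None

    def forward(self, x):
        w = self.alpha.unsqueeze(1) * self.B                   # reconstruct weights
        return nn.functional.linear(x, w, self.bias)
```

Swapping such layers into a pre-trained model and marking only `alpha` (and biases) as trainable is one way to see how the trainable-parameter count collapses while the binary codes stay shared across tasks.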
null
null
10.18653/v1/2022.findings-emnlp.240
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,752
inproceedings
hai-etal-2022-learning
Learning Invariant Representation Improves Robustness for {MRC} Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.241/
Hai, Yu and Wen, Liang and Meng, Haoran and Liu, Tianyu and Wang, Houfeng
Findings of the Association for Computational Linguistics: EMNLP 2022
3306--3314
The prosperity of Pretrained Language Models (PLMs) has greatly promoted the development of Machine Reading Comprehension (MRC). However, these models are vulnerable and not robust to adversarial examples. In this paper, we propose Stable and Contrastive Question Answering (SCQA) to improve invariance of representation to alleviate these robustness issues. Specifically, we first construct positive example pairs which have the same answer through data augmentation. Then SCQA learns enhanced representations with better alignment between positive pairs by introducing stability and contrastive losses. Experimental results show that our approach can boost the robustness of QA models across different MRC tasks and attack sets significantly and consistently.
null
null
10.18653/v1/2022.findings-emnlp.241
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,753
inproceedings
joshi-etal-2022-er
{ER}-Test: Evaluating Explanation Regularization Methods for Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.242/
Joshi, Brihi and Chan, Aaron and Liu, Ziyi and Nie, Shaoliang and Sanjabi, Maziar and Firooz, Hamed and Ren, Xiang
Findings of the Association for Computational Linguistics: EMNLP 2022
3315--3336
By explaining how humans would solve a given task, human rationales can provide a strong learning signal for neural language models (NLMs). Explanation regularization (ER) aims to improve NLM generalization by pushing the NLM's machine rationales (Which input tokens did the NLM focus on?) to align with human rationales (Which input tokens would humans focus on?). Though prior works primarily study ER via in-distribution (ID) evaluation, out-of-distribution (OOD) generalization is often more critical in real-world scenarios, yet ER's effect on OOD generalization has been underexplored. In this paper, we introduce ER-Test, a framework for evaluating ER models' OOD generalization along three dimensions: unseen datasets, contrast set tests, and functional tests. Using ER-Test, we comprehensively analyze how ER models' OOD generalization varies with the rationale alignment criterion (loss function), human rationale type (instance-level v/s task-level), number and choice of rationale-annotated instances, and time budget for rationale annotation. Across two tasks and six datasets, we show that ER has little impact on ID performance but yields large OOD performance gains, with the best ER criterion being task-dependent. Also, ER can improve OOD performance even with task-level or few human rationales. Finally, we find that rationale annotation is more time-efficient than label annotation for improving OOD performance. Our results with ER-Test help demonstrate ER's utility and establish best practices for using ER effectively.
null
null
10.18653/v1/2022.findings-emnlp.242
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,754
inproceedings
zhao-etal-2022-learning-cooperative
Learning Cooperative Interactions for Multi-Overlap Aspect Sentiment Triplet Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.243/
Zhao, Shiman and Chen, Wei and Wang, Tengjiao
Findings of the Association for Computational Linguistics: EMNLP 2022
3337--3347
Aspect sentiment triplet extraction (ASTE) is an essential task, which aims to extract triplets (aspect, opinion, sentiment). However, overlapped triplets, especially multi-overlap triplets, make ASTE a challenge. Most existing methods suffer from multi-overlap triplets because they focus on the single interactions between an aspect and an opinion. To solve the above issues, we propose a novel multi-overlap triplet extraction method, which decodes the complex relations between multiple aspects and opinions by learning their cooperative interactions. Overall, the method is based on an encoder-decoder architecture. During decoding, we design a joint decoding mechanism, which employs a multi-channel strategy to generate aspects and opinions jointly through the cooperative interactions between them. Furthermore, we construct a correlation-enhanced network to reinforce the interactions between related aspects and opinions for sentiment prediction. Besides, a relation-wise calibration scheme is adopted to further improve performance. Experiments show that our method outperforms baselines, especially on multi-overlap triplets.
null
null
10.18653/v1/2022.findings-emnlp.243
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,755
inproceedings
yi-etal-2022-different
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Parameter-Efficient Tuning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.244/
Yi, Jing and Chen, Weize and Qin, Yujia and Lin, Yankai and Ding, Ning and Han, Xu and Liu, Zhiyuan and Sun, Maosong and Zhou, Jie
Findings of the Association for Computational Linguistics: EMNLP 2022
3348--3366
Delta tuning (DET, also known as parameter-efficient tuning) is deemed as the new paradigm for using pre-trained language models (PLMs). Up to now, various DETs with distinct design elements have been proposed, achieving performance on par with fine-tuning. However, the mechanisms behind the above success are still under-explored, especially the connections among various DETs. To fathom the mystery, we hypothesize that the adaptations of different DETs could all be reparameterized as low-dimensional optimizations in a unified optimization subspace, which could be found by jointly decomposing independent solutions of different DETs. Then we explore the connections among different DETs by conducting optimization within the subspace. In experiments, we find that, for a certain DET, conducting optimization simply in the subspace could achieve comparable performance to its original space, and the found solution in the subspace could be transferred to another DET and achieve non-trivial performance. We also visualize the performance landscape of the subspace, and find that, there exists a substantial region where different DETs all perform well. Finally, we extend our analysis and show the strong connections between fine-tuning and DETs. The codes are publicly available at https://github.com/thunlp/Unified-DeltaTuning.
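A rough sketch of what "jointly decomposing independent solutions" into a shared low-dimensional subspace might look like with a truncated SVD; the decomposition choice, the `dim` value, and the assumption that each delta-tuning solution has been mapped to a common flattened parameter vector are illustrative, not the paper's procedure.

```python
import numpy as np

def find_shared_subspace(det_solutions, dim=32):
    """det_solutions: [num_solutions, num_params] array, one row per trained
    delta-tuning solution mapped to a common flattened parameter vector.
    Returns a subspace basis plus helpers to project into and reconstruct from it."""
    mean = det_solutions.mean(axis=0, keepdims=True)
    centered = det_solutions - mean
    # Truncated SVD: the top right-singular vectors span the shared subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:dim]                            # [dim, num_params]

    def project(solution):
        return (solution - mean[0]) @ basis.T   # low-dimensional coordinates

    def reconstruct(coords):
        return coords @ basis + mean[0]         # back to parameter space

    return basis, project, reconstruct
```

Optimizing only the low-dimensional coordinates and reconstructing full solutions from them is the kind of experiment the abstract describes when it compares subspace optimization with the original parameter space.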
null
null
10.18653/v1/2022.findings-emnlp.244
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,756
inproceedings
gunaratna-etal-2022-explainable
Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.245/
Gunaratna, Kalpa and Srinivasan, Vijay and Yerukola, Akhila and Jin, Hongxia
Findings of the Association for Computational Linguistics: EMNLP 2022
3367--3378
Joint intent detection and slot filling is a key research topic in natural language understanding (NLU). Existing joint intent and slot filling systems analyze and compute features collectively for all slot types, and importantly, have no way to explain the slot filling model decisions. In this work, we propose a novel approach that: (i) learns to generate additional slot type specific features in order to improve accuracy and (ii) provides explanations for slot filling decisions for the first time in a joint NLU model. We perform an additional constrained supervision using a set of binary classifiers for the slot type specific feature learning, thus ensuring appropriate attention weights are learned in the process to explain slot filling decisions for utterances. Our model is inherently explainable and does not need any post-hoc processing. We evaluate our approach on two widely used datasets and show accuracy improvements. Moreover, a detailed analysis is also provided for the exclusive slot explainability.
null
null
10.18653/v1/2022.findings-emnlp.245
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,757
inproceedings
fang-etal-2022-pseudoreasoner
{P}seudo{R}easoner: Leveraging Pseudo Labels for Commonsense Knowledge Base Population
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.246/
Fang, Tianqing and Do, Quyet V. and Zhang, Hongming and Song, Yangqiu and Wong, Ginny Y. and See, Simon
Findings of the Association for Computational Linguistics: EMNLP 2022
3379--3394
Commonsense Knowledge Base (CSKB) Population aims at reasoning over unseen entities and assertions on CSKBs, and is an important yet hard commonsense reasoning task. One challenge is that it requires out-of-domain generalization ability as the source CSKB for training is of a relatively smaller scale (1M) while the whole candidate space for population is far larger (200M). We propose PseudoReasoner, a semi-supervised learning framework for CSKB population that uses a teacher model pre-trained on CSKBs to provide pseudo labels on the unlabeled candidate dataset for a student model to learn from. The teacher can be a generative model rather than restricted to discriminative models as in previous works. In addition, we design a new filtering procedure for pseudo labels based on influence functions and the student model's prediction to further improve the performance. The framework can improve the backbone model KG-BERT (RoBERTa-large) by 3.3 points on overall performance and, especially, 5.3 points on out-of-domain performance, achieving the state of the art. Code and data are available at https://github.com/HKUST-KnowComp/PseudoReasoner.
null
null
10.18653/v1/2022.findings-emnlp.246
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,758
inproceedings
zhang-etal-2022-history
History-Aware Hierarchical Transformer for Multi-session Open-domain Dialogue System
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.247/
Zhang, Tong and Liu, Yong and Li, Boyang and Zeng, Zhiwei and Wang, Pengwei and You, Yuan and Miao, Chunyan and Cui, Lizhen
Findings of the Association for Computational Linguistics: EMNLP 2022
3395--3407
With the evolution of pre-trained language models, current open-domain dialogue systems have achieved great progress in conducting one-session conversations. In contrast, Multi-Session Conversation (MSC), which consists of multiple sessions over a long term with the same user, is under-investigated. In this paper, we propose History-Aware Hierarchical Transformer (HAHT) for multi-session open-domain dialogue. HAHT maintains a long-term memory of history conversations and utilizes history information to understand current conversation context and generate well-informed and context-relevant responses. Specifically, HAHT first encodes history conversation sessions hierarchically into a history memory. Then, HAHT leverages historical information to facilitate the understanding of the current conversation context by encoding the history memory together with the current context with attention-based mechanisms. Finally, to explicitly utilize historical information, HAHT uses a history-aware response generator that switches between a generic vocabulary and a history-aware vocabulary. Experimental results on a large-scale MSC dataset suggest that the proposed HAHT model consistently outperforms baseline models. Human evaluation results support that HAHT generates more human-like, context-relevant, and history-relevant responses than baseline models.
null
null
10.18653/v1/2022.findings-emnlp.247
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,759
inproceedings
wang-etal-2022-guiding
Guiding Abstractive Dialogue Summarization with Content Planning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.248/
Wang, Ye and Wan, Xiaojun and Cai, Zhiping
Findings of the Association for Computational Linguistics: EMNLP 2022
3408--3413
Abstractive dialogue summarization has recently been receiving more attention. We propose a coarse-to-fine model for generating abstractive dialogue summaries, and introduce a fact-aware reinforcement learning (RL) objective that improves the fact consistency between the dialogue and the generated summary. Initially, the model generates the predicate-argument spans of the dialogue, and then generates the final summary through a fact-aware RL objective. Extensive experiments and analysis on two benchmark datasets demonstrate that our proposed method effectively improves the quality of the generated summary, especially in coherence and consistency.
null
null
10.18653/v1/2022.findings-emnlp.248
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,760
inproceedings
hewitt-etal-2022-truncation
Truncation Sampling as Language Model Desmoothing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.249/
Hewitt, John and Manning, Christopher and Liang, Percy
Findings of the Association for Computational Linguistics: EMNLP 2022
3414--3427
Long samples of text from neural language models can be of poor quality. Truncation sampling algorithms{---}like top-p or top-k{---}address this by setting some words' probabilities to zero at each step. This work investigates why these methods are important, and how to improve them. We propose thinking of a neural language model as a mixture of a true distribution and a smoothing distribution that avoids infinite perplexity. In this light, truncation algorithms aim to perform desmoothing, estimating a subset of the support of the true distribution. Finding a good subset is crucial: we show that top-p unnecessarily truncates high-probability words, for example causing it to truncate all words but Trump for a document that starts with Donald. We introduce eta-sampling, which truncates words below an entropy-dependent probability threshold. Compared to previous algorithms, our eta-sampling generates more plausible long documents according to humans, is better at breaking out of repetition, and behaves more reasonably on a battery of test distributions.
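A small sketch of entropy-dependent truncation at a single decoding step. The specific threshold form min(epsilon, sqrt(epsilon) * exp(-H)) and the default epsilon follow our reading of the method and should be treated as an approximation rather than the definitive rule; the function name is an assumption.

```python
import torch

def eta_sample(logits, epsilon=0.0009):
    """Zero out words whose probability falls below an entropy-dependent
    threshold, renormalize, and sample. Higher-entropy (flatter) distributions
    get a smaller threshold, so fewer words are truncated."""
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1, keepdim=True)
    eta = (epsilon ** 0.5 * torch.exp(-entropy)).clamp(max=epsilon)
    # Always keep the single most probable word so the support is never empty.
    allowed = (probs >= eta) | (probs == probs.max(dim=-1, keepdim=True).values)
    truncated = torch.where(allowed, probs, torch.zeros_like(probs))
    truncated = truncated / truncated.sum(dim=-1, keepdim=True)
    return torch.multinomial(truncated, num_samples=1)
```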
null
null
10.18653/v1/2022.findings-emnlp.249
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,761
inproceedings
yu-etal-2022-knowledge
Knowledge-grounded Dialog State Tracking
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.250/
Yu, Dian and Wang, Mingqiu and Cao, Yuan and El Shafey, Laurent and Shafran, Izhak and Soltau, Hagen
Findings of the Association for Computational Linguistics: EMNLP 2022
3428--3435
Knowledge (including structured knowledge such as schema and ontology and unstructured knowledge such as web corpus) is a critical part of dialog understanding, especially for unseen tasks and domains. Traditionally, such domain-specific knowledge is encoded implicitly into model parameters for the execution of downstream tasks, which makes training inefficient. In addition, such models are not easily transferable to new tasks with different schemas. In this work, we propose to perform dialog state tracking grounded on knowledge encoded externally. We query relevant knowledge of various forms based on the dialog context, where such information can ground the prediction of dialog states. We demonstrate superior performance of our proposed method over strong baselines, especially in the few-shot learning setting.
null
null
10.18653/v1/2022.findings-emnlp.250
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,762
inproceedings
wu-etal-2022-context
Context-aware Information-theoretic Causal De-biasing for Interactive Sequence Labeling
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.251/
Wu, Junda and Wang, Rui and Yu, Tong and Zhang, Ruiyi and Zhao, Handong and Li, Shuai and Henao, Ricardo and Nenkova, Ani
Findings of the Association for Computational Linguistics: EMNLP 2022
3436--3448
Supervised training of existing deep learning models for sequence labeling relies on large scale labeled datasets. Such datasets are generally created with crowd-source labeling. However, crowd-source labeling for tasks of sequence labeling can be expensive and time-consuming. Further, crowd-source labeling by external annotators may not be appropriate for data that contains user private information. Considering the above limitations of crowd-source labeling, we study interactive sequence labeling that allows training directly with the user feedback, which alleviates the annotation cost and maintains the user privacy. We identify two biases, namely, context bias and feedback bias, by formulating interactive sequence labeling via a Structural Causal Model (SCM). To alleviate the context and feedback bias based on the SCM, we identify the frequent context tokens as confounders in the backdoor adjustment and further propose an entropy-based modulation that is inspired by information theory, which helps the model learn entities more sample-efficiently. With extensive experiments, we validate that our approach can effectively alleviate the biases and our models can be efficiently learnt with the user feedback.
null
null
10.18653/v1/2022.findings-emnlp.251
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,763
inproceedings
luo-etal-2022-simple-challenging
Simple but Challenging: Natural Language Inference Models Fail on Simple Sentences
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.252/
Luo, Cheng and Liu, Wei and Lin, Jieyu and Zou, Jiajie and Xiang, Ming and Ding, Nai
Findings of the Association for Computational Linguistics: EMNLP 2022
3449--3462
Natural language inference (NLI) is a task to infer the relationship between a premise and a hypothesis (e.g., entailment, neutral, or contradiction), and transformer-based models perform well on current NLI datasets such as MNLI and SNLI. Nevertheless, given the linguistic complexity of the large-scale datasets, it remains controversial whether these models can truly infer the relationship between sentences or they simply guess the answer via shallow heuristics. Here, we introduce a controlled evaluation set called Simple Pair to test the basic sentence inference ability of NLI models using sentences with syntactically simple structures. Three popular transformer-based models, i.e., BERT, RoBERTa, and DeBERTa, are employed. We find that these models fine-tuned on MNLI or SNLI perform very poorly on Simple Pair ({\ensuremath{<}} 35.4{\%} accuracy). Further analyses reveal event coreference and compositional binding problems in these models. To improve the model performance, we augment the training set, i.e., MNLI or SNLI, with a few examples constructed based on Simple Pair ( 1{\%} of the size of the original SNLI/MNLI training sets). Models fine-tuned on the augmented training set maintain high performance on MNLI/SNLI and perform very well on Simple Pair ({\textasciitilde}100{\%} accuracy). Furthermore, the positive performance of the augmented training models can transfer to more complex examples constructed based on sentences from MNLI and SNLI. Taken together, the current work shows that (1) models achieving high accuracy on mainstream large-scale datasets still lack the capacity to draw accurate inferences on simple sentences, and (2) augmenting mainstream datasets with a small number of target simple sentences can effectively improve model performance.
null
null
10.18653/v1/2022.findings-emnlp.252
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,764
inproceedings
guo-etal-2022-dore
{DORE}: Document Ordered Relation Extraction based on Generative Framework
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.253/
Guo, Qipeng and Yang, Yuqing and Yan, Hang and Qiu, Xipeng and Zhang, Zheng
Findings of the Association for Computational Linguistics: EMNLP 2022
3463--3474
In recent years, there is a surge of generation-based information extraction work, which allows a more direct use of pre-trained language models and efficiently captures output dependencies. However, previous generative methods using lexical representation do not naturally fit document-level relation extraction (DocRE) where there are multiple entities and relational facts. In this paper, we investigate the root cause of the underwhelming performance of the existing generative DocRE models and discover that the culprit is the inadequacy of the training paradigm, instead of the capacities of the models. We propose to generate a symbolic and ordered sequence from the relation matrix which is deterministic and easier for the model to learn. Moreover, we design a parallel row generation method to process overlong target sequences. Besides, we introduce several negative sampling strategies to improve the performance with balanced signals. Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
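As a concrete picture of linearizing a relation matrix into a deterministic, ordered target sequence, here is a small sketch; the row-major traversal, the symbol vocabulary, and the function name are illustrative assumptions rather than the paper's exact scheme.

```python
def relation_matrix_to_sequence(relation_matrix, entity_ids, relation_ids):
    """Iterate entity pairs in a fixed (row-major) order and emit one
    (head, relation, tail) symbol triple per non-empty cell, yielding a
    deterministic, ordered sequence for the decoder to learn.

    relation_matrix: dict mapping (head_idx, tail_idx) -> relation label index
    entity_ids / relation_ids: symbol names for entities and relations
    """
    sequence = []
    num_entities = len(entity_ids)
    for head in range(num_entities):            # fixed traversal order
        for tail in range(num_entities):
            rel = relation_matrix.get((head, tail))
            if rel is not None:
                sequence += [entity_ids[head], relation_ids[rel], entity_ids[tail]]
    return sequence
```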
null
null
10.18653/v1/2022.findings-emnlp.253
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,765
inproceedings
ding-etal-2022-explicit
Explicit Role Interaction Network for Event Argument Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.254/
Ding, Nan and Hu, Chunming and Sun, Kai and Mensah, Samuel and Zhang, Richong
Findings of the Association for Computational Linguistics: EMNLP 2022
3475--3485
Event argument extraction is a challenging subtask of event extraction, aiming to identify and assign roles to arguments under a certain event. Existing methods extract arguments of each role independently, ignoring the relationship between different roles. Such an approach hinders the model from learning explicit interactions between different roles to improve the performance of individual argument extraction. As a solution, we design a neural model that we refer to as the Explicit Role Interaction Network (ERIN) which allows for dynamically capturing the correlations between different argument roles within an event. Extensive experiments on the benchmark dataset ACE2005 demonstrate the superiority of our proposed model to existing approaches.
null
null
10.18653/v1/2022.findings-emnlp.254
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,766
inproceedings
yordanov-etal-2022-shot
Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations in a Label-Abundant Setup
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.255/
Yordanov, Yordan and Kocijan, Vid and Lukasiewicz, Thomas and Camburu, Oana-Maria
Findings of the Association for Computational Linguistics: EMNLP 2022
3486--3501
Training a model to provide natural language explanations (NLEs) for its predictions usually requires the acquisition of task-specific NLEs, which is time- and resource-consuming. A potential solution is the few-shot out-of-domain transfer of NLEs from a parent task with many NLEs to a child task. In this work, we examine the setup in which the child task has few NLEs but abundant labels. We establish four few-shot transfer learning methods that cover the possible fine-tuning combinations of the labels and NLEs for the parent and child tasks. We transfer explainability from a large natural language inference dataset (e-SNLI) separately to two child tasks: (1) hard cases of pronoun resolution, where we introduce the small-e-WinoGrande dataset of NLEs on top of the WinoGrande dataset, and (2) commonsense validation (ComVE). Our results demonstrate that the parent task helps with NLE generation and we establish the best methods for this setup.
null
null
10.18653/v1/2022.findings-emnlp.255
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,767