Dataset schema (column: dtype (range or distinct values)):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
@inproceedings{qixiang-etal-2022-exploiting,
    title = "Exploiting domain-slot related keywords description for Few-Shot Cross-Domain Dialogue State Tracking",
    author = "Qixiang, Gao and Dong, Guanting and Mou, Yutao and Wang, Liwen and Zeng, Chen and Guo, Daichi and Sun, Mingyang and Xu, Weiran",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.157/",
    doi = "10.18653/v1/2022.emnlp-main.157",
    pages = "2460--2465",
    abstract = "Collecting dialogue data with domain-slot-value labels for dialogue state tracking (DST) could be a costly process. In this paper, we propose a novel framework based on domain-slot related descriptions to tackle the challenge of few-shot cross-domain DST. Specifically, we design an extraction module to extract domain-slot related verbs and nouns in the dialogue. Then, we integrate them into the description, which aims to prompt the model to identify the slot information. Furthermore, we introduce a random sampling strategy to improve the domain generalization ability of the model. We utilize a pre-trained model to encode contexts and descriptions and generate answers in an auto-regressive manner. Experimental results show that our approaches substantially outperform the existing few-shot DST methods on MultiWOZ and gain strong improvements in slot accuracy compared to existing slot description methods.",
}
__index_level_0__: 27287
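Each flattened row above maps back to a BibTeX entry once the null columns are dropped: `entry_type` and `citation_key` form the entry header, the remaining non-null fields become key/value pairs, and `__index_level_0__` is dataset bookkeeping. A minimal sketch of that reassembly (the helper name `to_bibtex` and the truncated row are illustrative, not part of the dataset; real ACL BibTeX also leaves month macros like `dec` unquoted):

```python
def to_bibtex(row):
    """Render the non-null fields of a flattened dataset row as a BibTeX entry.

    entry_type and citation_key form the entry header; __index_level_0__
    is dataset bookkeeping and is dropped.
    """
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    lines = [f'@{row["entry_type"]}{{{row["citation_key"]},']
    for key, value in row.items():
        if key in skip or value is None:
            continue  # null columns (journal, uas, recall, ...) are omitted
        lines.append(f'    {key} = "{value}",')
    lines.append("}")
    return "\n".join(lines)


# A slice of the first record above; None marks a null column.
row = {
    "entry_type": "inproceedings",
    "citation_key": "qixiang-etal-2022-exploiting",
    "year": "2022",
    "pages": "2460--2465",
    "journal": None,
    "__index_level_0__": 27287,
}
print(to_bibtex(row))
# @inproceedings{qixiang-etal-2022-exploiting,
#     year = "2022",
#     pages = "2460--2465",
# }
```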
@inproceedings{mondal-etal-2022-cocoa,
    title = "{C}o{C}oa: An Encoder-Decoder Model for Controllable Code-switched Generation",
    author = "Mondal, Sneha and ., Ritika and Pathak, Shreya and Jyothi, Preethi and Raghuveer, Aravindan",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.158/",
    doi = "10.18653/v1/2022.emnlp-main.158",
    pages = "2466--2479",
    abstract = "Code-switching has seen growing interest in recent years as an important multilingual NLP phenomenon. Generating code-switched text for data augmentation has been sufficiently well-explored. However, there is no prior work on generating code-switched text with fine-grained control on the degree of code-switching and the lexical choices used to convey formality. We present CoCoa, an encoder-decoder translation model that converts monolingual Hindi text to Hindi-English code-switched text with both encoder-side and decoder-side interventions to achieve fine-grained controllable generation. CoCoa can be invoked at test-time to synthesize code-switched text that is simultaneously faithful to syntactic and lexical attributes relevant to code-switching. CoCoa outputs were subjected to rigorous subjective and objective evaluations. Human evaluations establish that our outputs are of superior quality while being faithful to desired attributes. We show significantly improved BLEU scores when compared with human-generated code-switched references. Compared to competitive baselines, we show 10{\%} reduction in perplexity on a language modeling task and also demonstrate clear improvements on a downstream code-switched sentiment analysis task.",
}
__index_level_0__: 27288
@inproceedings{hershcovich-etal-2022-towards,
    title = "Towards Climate Awareness in {NLP} Research",
    author = "Hershcovich, Daniel and Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.159/",
    doi = "10.18653/v1/2022.emnlp-main.159",
    pages = "2480--2494",
    abstract = "The climate impact of AI, and NLP research in particular, has become a serious issue given the enormous amount of energy that is increasingly being used for training and running computational models. Consequently, increasing focus is placed on efficient NLP. However, this important initiative lacks simple guidelines that would allow for systematic climate reporting of NLP research. We argue that this deficiency is one of the reasons why very few publications in NLP report key figures that would allow a more thorough examination of environmental impact, and present a quantitative survey to demonstrate this. As a remedy, we propose a climate performance model card with the primary purpose of being practically usable with only limited information about experiments and the underlying computer hardware. We describe why this step is essential to increase awareness about the environmental impact of NLP research and thereby pave the way for more thorough discussions.",
}
__index_level_0__: 27289
@inproceedings{kottur-etal-2022-navigating,
    title = "Navigating Connected Memories with a Task-oriented Dialog System",
    author = "Kottur, Satwik and Moon, Seungwhan and Geramifard, Alborz and Damavandi, Babak",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.160/",
    doi = "10.18653/v1/2022.emnlp-main.160",
    pages = "2495--2507",
    abstract = "Recent years have seen an increasing trend in the volume of personal media captured by users, thanks to the advent of smartphones and smart glasses, resulting in large media collections. Despite conversation being an intuitive human-computer interface, current efforts focus mostly on single-shot natural language based media retrieval to help users query their media and re-live their memories. This severely limits the search functionality as users can neither ask follow-up queries nor obtain information without first formulating a single-turn query. In this work, we propose dialogs for connected memories as a powerful tool to empower users to search their media collection through a multi-turn, interactive conversation. Towards this, we collect a new task-oriented dialog dataset COMET, which contains 11.5k user{\ensuremath{\leftrightarrow}}assistant dialogs (totalling 103k utterances), grounded in simulated personal memory graphs. We employ a resource-efficient, two-phase data collection pipeline that uses: (1) a novel multimodal dialog simulator that generates synthetic dialog flows grounded in memory graphs, and (2) manual paraphrasing to obtain natural language utterances. We analyze COMET, formulate four main tasks to benchmark meaningful progress, and adopt state-of-the-art language models as strong baselines, in order to highlight the multimodal challenges captured by our dataset.",
}
__index_level_0__: 27290
@inproceedings{zhang-2022-language,
    title = "Language Model Decomposition: Quantifying the Dependency and Correlation of Language Models",
    author = "Zhang, Hao",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.161/",
    doi = "10.18653/v1/2022.emnlp-main.161",
    pages = "2508--2517",
    abstract = "Pre-trained language models (LMs), such as BERT (Devlin et al., 2018) and its variants, have led to significant improvements on various NLP tasks in past years. However, a theoretical framework for studying their relationships is still missing. In this paper, we fill this gap by investigating the linear dependency between pre-trained LMs. The linear dependency of LMs is defined analogously to the linear dependency of vectors. We propose Language Model Decomposition (LMD) to represent an LM using a linear combination of other LMs as basis, and derive the closed-form solution. A goodness-of-fit metric for LMD similar to the coefficient of determination is defined and used to measure the linear dependency of a set of LMs. In experiments, we find that BERT and eleven (11) BERT-like LMs are 91{\%} linearly dependent. This observation suggests that current state-of-the-art (SOTA) LMs are highly {\textquotedblleft}correlated{\textquotedblright}. To further advance the SOTA, we need more diverse and novel LMs that are less dependent on existing LMs.",
}
__index_level_0__: 27291
@inproceedings{zhang-etal-2022-syngec,
    title = "{S}yn{GEC}: Syntax-Enhanced Grammatical Error Correction with a Tailored {GEC}-Oriented Parser",
    author = "Zhang, Yue and Zhang, Bo and Li, Zhenghua and Bao, Zuyi and Li, Chen and Zhang, Min",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.162/",
    doi = "10.18653/v1/2022.emnlp-main.162",
    pages = "2518--2531",
    abstract = "This work proposes a syntax-enhanced grammatical error correction (GEC) approach named SynGEC that effectively incorporates dependency syntactic information into the encoder part of GEC models. The key challenge for this idea is that off-the-shelf parsers are unreliable when processing ungrammatical sentences. To confront this challenge, we propose to build a tailored GEC-oriented parser (GOPar) using parallel GEC training data as a pivot. First, we design an extended syntax representation scheme that allows us to represent both grammatical errors and syntax in a unified tree structure. Then, we obtain parse trees of the source incorrect sentences by projecting trees of the target correct sentences. Finally, we train GOPar with such projected trees. For GEC, we employ a graph convolutional network to encode source-side syntactic information produced by GOPar, and fuse it with the outputs of the Transformer encoder. Experiments on mainstream English and Chinese GEC datasets show that our proposed SynGEC approach consistently and substantially outperforms strong baselines and achieves competitive performance. Our code and data are all publicly available at https://github.com/HillZhang1999/SynGEC.",
}
__index_level_0__: 27292
@inproceedings{ousidhoum-etal-2022-varifocal,
    title = "Varifocal Question Generation for Fact-checking",
    author = "Ousidhoum, Nedjma and Yuan, Zhangdie and Vlachos, Andreas",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.163/",
    doi = "10.18653/v1/2022.emnlp-main.163",
    pages = "2532--2544",
    abstract = "Fact-checking requires retrieving evidence related to a claim under investigation. The task can be formulated as question generation based on a claim, followed by question answering. However, recent question generation approaches assume that the answer is known and typically contained in a passage given as input, whereas such passages are what is being sought when verifying a claim. In this paper, we present \textit{Varifocal}, a method that generates questions based on different focal points within a given claim, i.e. different spans of the claim and its metadata, such as its source and date. Our method outperforms previous work on a fact-checking question generation dataset on a wide range of automatic evaluation metrics. These results are corroborated by our manual evaluation, which indicates that our method generates more relevant and informative questions. We further demonstrate the potential of focal points in generating sets of clarification questions for product descriptions.",
}
__index_level_0__: 27293
@inproceedings{marchisio-etal-2022-bilingual,
    title = "Bilingual Lexicon Induction for Low-Resource Languages using Graph Matching via Optimal Transport",
    author = "Marchisio, Kelly and Saad-Eldin, Ali and Duh, Kevin and Priebe, Carey and Koehn, Philipp",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.164/",
    doi = "10.18653/v1/2022.emnlp-main.164",
    pages = "2545--2561",
    abstract = "Bilingual lexicons form a critical component of various natural language processing applications, including unsupervised and semisupervised machine translation and crosslingual information retrieval. In this work, we improve bilingual lexicon induction performance across 40 language pairs with a graph-matching method based on optimal transport. The method is especially strong with low amounts of supervision.",
}
__index_level_0__: 27294
@inproceedings{gururangan-etal-2022-whose,
    title = "Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection",
    author = "Gururangan, Suchin and Card, Dallas and Dreier, Sarah and Gade, Emily and Wang, Leroy and Wang, Zeyu and Zettlemoyer, Luke and Smith, Noah A.",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.165/",
    doi = "10.18653/v1/2022.emnlp-main.165",
    pages = "2562--2580",
    abstract = "Language models increasingly rely on massive web crawls for diverse text data. However, these sources are rife with undesirable content. As such, resources like Wikipedia, books, and news often serve as anchors for automatically selecting web text most suitable for language modeling, a process typically referred to as quality filtering. Using a new dataset of U.S. high school newspaper articles{---}written by students from across the country{---}we investigate whose language is preferred by the quality filter used for GPT-3. We find that newspapers from larger schools, located in wealthier, educated, and urban zones (ZIP codes) are more likely to be classified as high quality. We also show that this quality measurement is unaligned with other sensible metrics, such as factuality or literary acclaim. We argue that privileging any corpus as high quality entails a language ideology, and more care is needed to construct training corpora for language models, with better transparency and justification for the inclusion or exclusion of various texts.",
}
__index_level_0__: 27295
@inproceedings{xu-etal-2022-conreader,
    title = "{C}on{R}eader: Exploring Implicit Relations in Contracts for Contract Clause Extraction",
    author = "Xu, Weiwen and Deng, Yang and Lei, Wenqiang and Zhao, Wenlong and Chua, Tat-Seng and Lam, Wai",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.166/",
    doi = "10.18653/v1/2022.emnlp-main.166",
    pages = "2581--2594",
    abstract = "We study automatic Contract Clause Extraction (CCE) by modeling implicit relations in legal contracts. Existing CCE methods mostly treat contracts as plain text, creating a substantial barrier to understanding contracts of high complexity. In this work, we first comprehensively analyze the complexity issues of contracts and distill out three implicit relations commonly found in contracts, namely, 1) Long-range Context Relation that captures the correlations of distant clauses; 2) Term-Definition Relation that captures the relation between important terms and their corresponding definitions; and 3) Similar Clause Relation that captures the similarities between clauses of the same type. Then we propose a novel framework ConReader to exploit the above three relations for better contract understanding and improving CCE. Experimental results show that ConReader makes the prediction more interpretable and achieves a new state of the art on two CCE tasks in both conventional and zero-shot settings.",
}
__index_level_0__: 27296
@inproceedings{christopoulou-etal-2022-training,
    title = "Training Dynamics for Curriculum Learning: A Study on Monolingual and Cross-lingual {NLU}",
    author = "Christopoulou, Fenia and Lampouras, Gerasimos and Iacobacci, Ignacio",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.167/",
    doi = "10.18653/v1/2022.emnlp-main.167",
    pages = "2595--2611",
    abstract = "Curriculum Learning (CL) is a technique for training models by ranking examples in a typically increasing order of difficulty, with the aim of accelerating convergence and improving generalisability. Current approaches for Natural Language Understanding (NLU) tasks use CL to improve in-distribution data performance, often via heuristic-oriented or task-agnostic difficulties. In this work, instead, we employ CL for NLU by taking advantage of training dynamics as difficulty metrics, i.e., statistics that measure the behavior of the model at hand on specific task-data instances during training, and propose modifications of existing CL schedulers based on these statistics. Unlike existing works, we focus on evaluating models on in-distribution (ID), out-of-distribution (OOD), as well as zero-shot (ZS) cross-lingual transfer datasets. We show across several NLU tasks that CL with training dynamics can result in better performance, mostly on zero-shot cross-lingual transfer and OOD settings, with improvements of up to 8.5{\%} in certain cases. Overall, experiments indicate that training dynamics can lead to better performing models with smoother training compared to other difficulty metrics, while being 20{\%} faster on average. In addition, through analysis we shed light on the correlations of task-specific versus task-agnostic metrics.",
}
__index_level_0__: 27297
@inproceedings{chen-etal-2022-revisiting,
    title = "Revisiting Parameter-Efficient Tuning: Are We Really There Yet?",
    author = "Chen, Guanzheng and Liu, Fangyu and Meng, Zaiqiao and Liang, Shangsong",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.168/",
    doi = "10.18653/v1/2022.emnlp-main.168",
    pages = "2612--2626",
    abstract = "Parameter-Efficient Tuning (PETuning) methods have been deemed by many as the new paradigm for using pretrained language models (PLMs). By tuning just a fraction of the parameters compared to full-model finetuning, PETuning methods claim to have achieved performance on par with or even better than finetuning. In this work, we take a step back and re-examine these PETuning methods by conducting the first comprehensive investigation into their training and evaluation. We find that the problematic validation and testing practice in current studies, when accompanied by the unstable nature of PETuning methods, has led to unreliable conclusions. When compared under a truly fair evaluation protocol, PETuning cannot yield consistently competitive performance, while finetuning remains the best-performing method in medium- and high-resource settings. We delve deeper into the cause of the instability and observe that the number of trainable parameters and training iterations are two main factors: reducing trainable parameters and prolonging training iterations may lead to higher stability in PETuning methods.",
}
__index_level_0__: 27298
@inproceedings{zhang-etal-2022-transfer,
    title = "Transfer Learning from Semantic Role Labeling to Event Argument Extraction with Template-based Slot Querying",
    author = "Zhang, Zhisong and Strubell, Emma and Hovy, Eduard",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.169/",
    doi = "10.18653/v1/2022.emnlp-main.169",
    pages = "2627--2647",
    abstract = "In this work, we investigate transfer learning from semantic role labeling (SRL) to event argument extraction (EAE), considering their similar argument structures. We view the extraction task as a role querying problem, unifying various methods into a single framework. There are key discrepancies on role labels and distant arguments between semantic role and event argument annotations. To mitigate these discrepancies, we specify natural language-like queries to tackle the label mismatch problem and devise argument augmentation to recover distant arguments. We show that SRL annotations can serve as a valuable resource for EAE, and a template-based slot querying strategy is especially effective for facilitating the transfer. In extensive evaluations on two English EAE benchmarks, our proposed model obtains impressive zero-shot results by leveraging SRL annotations, reaching nearly 80{\%} of the fully-supervised scores. It further provides benefits in low-resource cases, where few EAE annotations are available. Moreover, we show that our approach generalizes to cross-domain and multilingual scenarios.",
}
__index_level_0__: 27299
@inproceedings{jiang-etal-2022-calibrating,
    title = "Calibrating Zero-shot Cross-lingual (Un-)structured Predictions",
    author = "Jiang, Zhengping and Liu, Anqi and Van Durme, Benjamin",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.170/",
    doi = "10.18653/v1/2022.emnlp-main.170",
    pages = "2648--2674",
    abstract = "We investigate model calibration in the setting of zero-shot cross-lingual transfer with large-scale pre-trained language models. The level of model calibration is an important metric for evaluating the trustworthiness of predictive models. There exists an essential need for model calibration when natural language models are deployed in critical tasks. We study different post-training calibration methods in structured and unstructured prediction tasks. We find that models trained with data from the source language become less calibrated when applied to the target language and that calibration errors increase with intrinsic task difficulty and relative sparsity of training data. Moreover, we observe a potential connection between the level of calibration error and an earlier proposed measure of the distance from English to other languages. Finally, our comparison demonstrates that among other methods Temperature Scaling (TS) generalizes well to distant languages, but TS fails to calibrate more complex confidence estimation in structured predictions compared to more expressive alternatives like Gaussian Process Calibration.",
}
__index_level_0__: 27300
@inproceedings{xu-etal-2022-prince,
    title = "{PRINCE}: Prefix-Masked Decoding for Knowledge Enhanced Sequence-to-Sequence Pre-Training",
    author = "Xu, Song and Li, Haoran and Yuan, Peng and Wu, Youzheng and He, Xiaodong",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.171/",
    doi = "10.18653/v1/2022.emnlp-main.171",
    pages = "2675--2681",
    abstract = "Pre-trained Language Models (PLMs) have shown effectiveness in various Natural Language Processing (NLP) tasks. Denoising autoencoder is one of the most successful pre-training frameworks, learning to recompose the original text given a noise-corrupted one. The existing studies mainly focus on injecting noises into the input. This paper introduces a simple yet effective pre-training paradigm, equipped with a knowledge-enhanced decoder that predicts the next entity token with noises in the prefix, explicitly strengthening the representation learning of entities that span over multiple input tokens. Specifically, when predicting the next token within an entity, we feed masks into the prefix in place of some of the previous ground-truth tokens that constitute the entity. Our model achieves new state-of-the-art results on two knowledge-driven data-to-text generation tasks with up to 2{\%} BLEU gains.",
}
__index_level_0__: 27301
@inproceedings{koh-etal-2022-far,
    title = "How Far are We from Robust Long Abstractive Summarization?",
    author = "Koh, Huan Yee and Ju, Jiaxin and Zhang, He and Liu, Ming and Pan, Shirui",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.172/",
    doi = "10.18653/v1/2022.emnlp-main.172",
    pages = "2682--2698",
    abstract = "Abstractive summarization has made tremendous progress in recent years. In this work, we perform fine-grained human annotations to evaluate long document abstractive summarization systems (i.e., models and metrics) with the aim of implementing them to generate reliable summaries. For long document abstractive models, we show that the constant striving for state-of-the-art ROUGE results can lead us to generate more relevant summaries but not factual ones. For long document evaluation metrics, human evaluation results show that ROUGE remains the best at evaluating the relevancy of a summary. It also reveals important limitations of factuality metrics in detecting different types of factual errors and the reasons behind the effectiveness of BARTScore. We then suggest promising directions in the endeavor of developing factual consistency metrics. Finally, we release our annotated long document dataset with the hope that it can contribute to the development of metrics across a broader range of summarization settings.",
}
__index_level_0__: 27302
@inproceedings{liu-etal-2022-measuring,
    title = "Measuring Context-Word Biases in Lexical Semantic Datasets",
    author = "Liu, Qianchu and McCarthy, Diana and Korhonen, Anna",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.173/",
    doi = "10.18653/v1/2022.emnlp-main.173",
    pages = "2699--2713",
    abstract = "State-of-the-art pretrained contextualized models (PCMs), e.g., BERT, use tasks such as WiC and WSD to evaluate their word-in-context representations. This inherently assumes that performance on these tasks reflects how well a model represents the coupled word and context semantics. We question this assumption by presenting the first quantitative analysis on the context-word interaction being tested in major contextual lexical semantic tasks. To achieve this, we run probing baselines on masked input, and propose measures to calculate and visualize the degree of context or word biases in existing datasets. The analysis was performed on both models and humans. Our findings demonstrate that models are usually not being tested for word-in-context semantics in the same way as humans are in these tasks, which helps us better understand the model-human gap. Specifically, for PCMs, most existing datasets fall into the extreme ends (the retrieval-based tasks exhibit strong target word bias while WiC-style tasks and WSD show strong context bias); in comparison, humans are less biased and achieve much better performance when both word and context are available than with masked input. We recommend our framework for understanding and controlling these biases for model interpretation and future task design.",
}
__index_level_0__: 27303
@inproceedings{wang-etal-2022-iteratively,
    title = "Iteratively Prompt Pre-trained Language Models for Chain of Thought",
    author = "Wang, Boshi and Deng, Xiang and Sun, Huan",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.174/",
    doi = "10.18653/v1/2022.emnlp-main.174",
    pages = "2714--2730",
    abstract = "While Pre-trained Language Models (PLMs) internalize a great amount of world knowledge, they have been shown incapable of recalling this knowledge to solve tasks requiring complex {\&} multi-step reasoning. Similar to how humans develop a {\textquotedblleft}chain of thought{\textquotedblright} for these tasks, how can we equip PLMs with such abilities? In this work, we explore an iterative prompting framework, a new prompting paradigm which progressively elicits relevant knowledge from PLMs for multi-step inference. We identify key limitations of existing prompting methods, namely that they are either restricted to queries with a single identifiable relation/predicate, or agnostic to input contexts, which makes it difficult to capture variabilities across different inference steps. We propose an iterative context-aware prompter, which addresses these limitations by learning to dynamically synthesize prompts conditioned on the current step's contexts. Experiments on three datasets involving multi-step reasoning show the effectiveness of the iterative scheme and the context-aware prompter design.",
}
__index_level_0__: 27304
@inproceedings{bogin-etal-2022-unobserved,
    title = "Unobserved Local Structures Make Compositional Generalization Hard",
    author = "Bogin, Ben and Gupta, Shivanshu and Berant, Jonathan",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.175/",
    doi = "10.18653/v1/2022.emnlp-main.175",
    pages = "2731--2747",
    abstract = "While recent work has shown that sequence-to-sequence models struggle to generalize to new compositions (termed compositional generalization), little is known about what makes compositional generalization hard on a particular test instance. In this work, we investigate the factors that make generalization to certain test instances challenging. We first substantiate that some examples are more difficult than others by showing that different models consistently fail or succeed on the same test instances. Then, we propose a criterion for the difficulty of an example: a test instance is hard if it contains a local structure that was not observed at training time. We formulate a simple decision rule based on this criterion and empirically show it predicts instance-level generalization well across 5 different semantic parsing datasets, substantially better than alternative decision rules. Last, we show local structures can be leveraged for creating difficult adversarial compositional splits and also to improve compositional generalization under limited training budgets by strategically selecting examples for the training set.",
}
__index_level_0__: 27305
inproceedings
wu-etal-2022-mitigating
Mitigating Data Sparsity for Short Text Topic Modeling by Topic-Semantic Contrastive Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.176/
Wu, Xiaobao and Luu, Anh Tuan and Dong, Xinshuai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2748--2760
To overcome the data sparsity issue in short text topic modeling, existing methods commonly rely on data augmentation or the data characteristic of short texts to introduce more word co-occurrence information. However, most of them do not make full use of the augmented data or the data characteristic: they insufficiently learn the relations among samples in data, leading to dissimilar topic distributions of semantically similar text pairs. To better address data sparsity, in this paper we propose a novel short text topic modeling framework, Topic-Semantic Contrastive Topic Model (TSCTM). To sufficiently model the relations among samples, we employ a new contrastive learning method with efficient positive and negative sampling strategies based on topic semantics. This contrastive learning method refines the representations, enriches the learning signals, and thus mitigates the sparsity issue. Extensive experimental results show that our TSCTM outperforms state-of-the-art baselines regardless of the data augmentation availability, producing high-quality topics and topic distributions.
null
null
10.18653/v1/2022.emnlp-main.176
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,306
inproceedings
li-etal-2022-back
Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.177/
Li, Yiyang and Zhao, Hai and Zhang, Zhuosheng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2761--2774
Multi-turn dialogue modeling, as a challenging branch of natural language understanding (NLU), aims to build representations for machines to understand human dialogues, which provides a solid foundation for multiple downstream tasks. Recent studies of dialogue modeling commonly employ pre-trained language models (PrLMs) to encode the dialogue history as successive tokens, which is insufficient in capturing the temporal characteristics of dialogues. Therefore, we propose Bidirectional Information Decoupling Network (BiDeN) as a universal dialogue encoder, which explicitly incorporates both the past and future contexts and can be generalized to a wide range of dialogue-related tasks. Experimental results on datasets of different downstream tasks demonstrate the universality and effectiveness of our BiDeN.
null
null
10.18653/v1/2022.emnlp-main.177
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,307
inproceedings
li-etal-2022-calibration
Calibration Meets Explanation: A Simple and Effective Approach for Model Confidence Estimates
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.178/
Li, Dongfang and Hu, Baotian and Chen, Qingcai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2775--2784
Calibration strengthens the trustworthiness of black-box models by producing more accurate confidence estimates on given examples. However, little is known about whether model explanations can help confidence calibration. Intuitively, humans look at important feature attributions and decide whether the model is trustworthy. Similarly, the explanations may tell us when the model might know and when it does not. Inspired by this, we propose a method named CME that leverages model explanations to make the model less confident with non-inductive attributions. The idea is that when the model is not highly confident, it is difficult to identify strong indications of any class, and the tokens accordingly do not have high attribution scores for any class and vice versa. We conduct extensive experiments on six datasets with two popular pre-trained language models in the in-domain and out-of-domain settings. The results show that CME improves calibration performance in all settings. The expected calibration errors are further reduced when combined with temperature scaling. Our findings highlight that model explanations can help calibrate posterior estimates.
null
null
10.18653/v1/2022.emnlp-main.178
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,308
inproceedings
schmidt-etal-2022-non
Non-Autoregressive Neural Machine Translation: A Call for Clarity
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.179/
Schmidt, Robin and Pires, Telmo and Peitz, Stephan and L{\"o}{\"o}f, Jonas
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2785--2799
Non-autoregressive approaches aim to improve the inference speed of translation models by only requiring a single forward pass to generate the output sequence instead of iteratively producing each predicted token. Consequently, their translation quality still tends to be inferior to their autoregressive counterparts due to several issues involving output token interdependence. In this work, we take a step back and revisit several techniques that have been proposed for improving non-autoregressive translation models and compare their combined translation quality and speed implications under third-party testing environments. We provide novel insights for establishing strong baselines using length prediction or CTC-based architecture variants and contribute standardized BLEU, chrF++, and TER scores using sacreBLEU on four translation tasks, which crucially have been missing as inconsistencies in the use of tokenized BLEU lead to deviations of up to 1.7 BLEU points. Our open-sourced code is integrated into fairseq for reproducibility.
null
null
10.18653/v1/2022.emnlp-main.179
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,309
inproceedings
gekhman-etal-2022-red
{RED}-{ACE}: Robust Error Detection for {ASR} using Confidence Embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.180/
Gekhman, Zorik and Zverinski, Dina and Mallinson, Jonathan and Beryozkin, Genady
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2800--2808
ASR Error Detection (AED) models aim to post-process the output of Automatic Speech Recognition (ASR) systems, in order to detect transcription errors. Modern approaches usually use text-based input, comprised solely of the ASR transcription hypothesis, disregarding additional signals from the ASR model. Instead, we utilize the ASR system's word-level confidence scores for improving AED performance. Specifically, we add an ASR Confidence Embedding (ACE) layer to the AED model's encoder, allowing us to jointly encode the confidence scores and the transcribed text into a contextualized representation. Our experiments show the benefits of ASR confidence scores for AED, their complementary effect over the textual signal, as well as the effectiveness and robustness of ACE for combining these signals. To foster further research, we publish a novel AED dataset consisting of ASR outputs on the LibriSpeech corpus with annotated transcription errors.
null
null
10.18653/v1/2022.emnlp-main.180
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,310
inproceedings
hu-etal-2022-fast
Fast-{R}2{D}2: A Pretrained Recursive Neural Network based on Pruned {CKY} for Grammar Induction and Text Representation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.181/
Hu, Xiang and Mi, Haitao and Li, Liang and de Melo, Gerard
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2809--2821
Chart-based models have shown great potential in unsupervised grammar induction, running recursively and hierarchically, but requiring O(n{\textthreesuperior}) time-complexity. The Recursive Transformer based on Differentiable Trees (R2D2) makes it possible to scale to large language model pretraining even with a complex tree encoder, by introducing a heuristic pruning method. However, its rule-based pruning process suffers from local optima and slow inference. In this paper, we propose a unified R2D2 method that overcomes these issues. We use a top-down unsupervised parser as a model-guided pruning method, which also enables parallel encoding during inference. Our parser casts parsing as a split point scoring task by first scoring all split points for a given sentence and then using the highest-scoring one to recursively split a span into two parts. The reverse order of the splits is considered as the order of pruning in the encoder. We optimize the unsupervised parser by minimizing the Kullback{--}Leibler distance between tree probabilities from the parser and the R2D2 model. Our experiments show that our Fast-R2D2 significantly improves the grammar induction quality and achieves competitive results in downstream tasks.
null
null
10.18653/v1/2022.emnlp-main.181
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,311
inproceedings
hui-etal-2022-localized
A Localized Geometric Method to Match Knowledge in Low-dimensional Hyperbolic Space
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.182/
Hui, Bo and Xia, Tian and Ku, Wei-Shinn
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2822--2832
Matching equivalent entities across knowledge graphs is a pivotal step for knowledge fusion. Previous approaches usually study the problem in Euclidean space. However, recent works have shown that hyperbolic space has a higher capacity than Euclidean space and hyperbolic embedding can represent the hierarchical structure in a knowledge graph. In this paper, we propose a localized geometric method to find equivalent entities in hyperbolic space. Specifically, we use a hyperbolic neural network to encode the lingual information of entities and the structure of both knowledge graphs into a low-dimensional hyperbolic space. To address the asymmetry of structure on different KGs and the localized nature of relations, we learn an instance-specific geometric mapping function based on rotation to match entity pairs. A contrastive loss function is used to train the model. The experiment verifies the power of low-dimensional hyperbolic space for entity matching and shows that our method outperforms the state of the art by a large margin.
null
null
10.18653/v1/2022.emnlp-main.182
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,312
inproceedings
madaan-etal-2022-memory
Memory-assisted prompt editing to improve {GPT}-3 after deployment
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.183/
Madaan, Aman and Tandon, Niket and Clark, Peter and Yang, Yiming
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2833--2861
Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. For example, GPT-3 would mistakenly interpret {\textquotedblleft}What word is similar to good?{\textquotedblright} to mean a homophone, while the user intended a synonym. Our goal is to effectively correct such errors via user interactions with the system but without retraining, which will be prohibitively costly. We pair GPT-3 with a growing memory of recorded cases where the model misunderstood the user's intents, along with user feedback for clarification. Such a memory allows our system to produce enhanced prompts for any new query based on the user feedback for error correction on similar cases in the past. On four tasks (two lexical tasks, two advanced ethical reasoning tasks), we show how a (simulated) user can interactively teach a deployed GPT-3, substantially increasing its accuracy over the queries with different kinds of misunderstandings by the GPT-3. Our approach is a step towards the low-cost utility enhancement for very large pre-trained LMs.
null
null
10.18653/v1/2022.emnlp-main.183
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,313
inproceedings
guo-etal-2022-lvp
{LVP}-{M}3: Language-aware Visual Prompt for Multilingual Multimodal Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.184/
Guo, Hongcheng and Liu, Jiaheng and Huang, Haoyang and Yang, Jian and Li, Zhoujun and Zhang, Dongdong and Cui, Zheng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2862--2872
Multimodal Machine Translation (MMT) focuses on enhancing text-only translation with visual features, which has attracted considerable attention from both natural language processing and computer vision communities. Recent advances still struggle to train a separate model for each language pair, which is costly and unaffordable when the number of languages increases in the real world. In other words, the multilingual multimodal machine translation (Multilingual MMT) task has not been investigated, which aims to handle the aforementioned issues by providing a shared semantic space for multiple languages. Besides, the image modality has no language boundaries, which is superior to bridging the semantic gap between languages. To this end, we first propose the Multilingual MMT task by establishing two new Multilingual MMT benchmark datasets covering seven languages. Then, an effective baseline LVP-M3 using visual prompts is proposed to support translations between different languages, which includes three stages (token encoding, language-aware visual prompt generation, and language translation). Extensive experimental results on our constructed benchmark datasets demonstrate the effectiveness of LVP-M3 method for Multilingual MMT.
null
null
10.18653/v1/2022.emnlp-main.184
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,314
inproceedings
wang-sun-2022-promptehr
{P}rompt{EHR}: Conditional Electronic Healthcare Records Generation with Prompt Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.185/
Wang, Zifeng and Sun, Jimeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2873--2885
Accessing longitudinal multimodal Electronic Healthcare Records (EHRs) is challenging due to privacy concerns, which hinders the use of ML for healthcare applications. Synthetic EHRs generation bypasses the need to share sensitive real patient records. However, existing methods generate single-modal EHRs by unconditional generation or by longitudinal inference, which suffers from low flexibility and produces unrealistic EHRs. In this work, we propose to formulate EHRs generation as a text-to-text translation task by language models (LMs), which allows highly flexible event imputation during generation. We also design prompt learning to control the generation conditioned by numerical and categorical demographic features. We evaluate synthetic EHRs quality by two perplexity measures accounting for their longitudinal pattern (longitudinal imputation perplexity, lpl) and the connections cross modalities (cross-modality imputation perplexity, mpl). Moreover, we utilize two adversaries: membership and attribute inference attacks for privacy-preserving evaluation. Experiments on MIMIC-III data demonstrate the superiority of our methods on realistic EHRs generation (53.1{\%} decrease of lpl and 45.3{\%} decrease of mpl on average compared to the best baselines) with low privacy risks. Software is available at https://github.com/RyanWangZf/PromptEHR.
null
null
10.18653/v1/2022.emnlp-main.185
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,315
inproceedings
jiang-etal-2022-rose
{ROSE}: Robust Selective Fine-tuning for Pre-trained Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.186/
Jiang, Lan and Zhou, Hao and Lin, Yankai and Li, Peng and Zhou, Jie and Jiang, Rui
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2886--2897
Even though the large-scale language models have achieved excellent performances, they suffer from various adversarial attacks. A large body of defense methods has been proposed. However, they are still limited due to redundant attack search spaces and the inability to defend against various types of attacks. In this work, we present a novel fine-tuning approach called \textbf{RO}bust \textbf{SE}lective fine-tuning (\textbf{ROSE}) to address this issue. ROSE conducts selective updates when adapting pre-trained models to downstream tasks, filtering out invaluable and unrobust updates of parameters. Specifically, we propose two strategies: the first-order and second-order ROSE for selecting target robust parameters. The experimental results show that ROSE achieves significant improvements in adversarial robustness on various downstream NLP tasks, and the ensemble method even surpasses both variants above. Furthermore, ROSE can be easily incorporated into existing fine-tuning methods to improve their adversarial robustness further. The empirical analysis confirms that ROSE eliminates unrobust spurious updates during fine-tuning, leading to solutions corresponding to flatter and wider optima than the conventional method. Code is available at \url{https://github.com/jiangllan/ROSE}.
null
null
10.18653/v1/2022.emnlp-main.186
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,316
inproceedings
li-etal-2022-coderetriever
{C}ode{R}etriever: A Large Scale Contrastive Pre-Training Method for Code Search
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.187/
Li, Xiaonan and Gong, Yeyun and Shen, Yelong and Qiu, Xipeng and Zhang, Hang and Yao, Bolun and Qi, Weizhen and Jiang, Daxin and Chen, Weizhu and Duan, Nan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2898--2910
In this paper, we propose the CodeRetriever model, which learns the function-level code semantic representations through large-scale code-text contrastive pre-training. We adopt two contrastive learning schemes in CodeRetriever: unimodal contrastive learning and bimodal contrastive learning. For unimodal contrastive learning, we design an unsupervised learning approach to build semantic-related code pairs based on the documentation and function name. For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build code-text pairs. Both contrastive objectives can fully leverage large-scale code corpus for pre-training. Extensive experimental results show that CodeRetriever achieves new state-of-the-art with significant improvement over existing code pre-trained models, on eleven domain/language-specific code search tasks with six programming languages in different code granularity (function-level, snippet-level and statement-level). These results demonstrate the effectiveness and robustness of CodeRetriever. The codes and resources are available at \url{https://github.com/microsoft/AR2/tree/main/CodeRetriever}.
null
null
10.18653/v1/2022.emnlp-main.187
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,317
inproceedings
ma-etal-2022-open-topic
Open-Topic False Information Detection on Social Networks with Contrastive Adversarial Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.188/
Ma, Guanghui and Hu, Chunming and Ge, Ling and Zhang, Hong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2911--2923
Current works about false information detection based on conversation graphs on social networks focus primarily on two research streams from the standpoint of topic distribution: in-topic and cross-topic techniques, which assume that the data topic distribution is identical or cross, respectively. This signifies that all test data topics are seen or unseen by the model. However, these assumptions are too harsh for actual social networks that contain both seen and unseen topics simultaneously, hence restricting their practical application. In light of this, this paper develops a novel open-topic scenario that is better suited to actual social networks. In this open-topic scenario, we empirically find that the existing models suffer from impairment in the detection performance for seen or unseen topic data, resulting in poor overall model performance. To address this issue, we propose a novel Contrastive Adversarial Learning Network, CALN, that employs an unsupervised topic clustering method to capture topic-specific features to enhance the model's performance for seen topics and an unsupervised adversarial learning method to align data representation distributions to enhance the model's generalisation to unseen topics. Experiments on two benchmark datasets and a variety of graph neural networks demonstrate the effectiveness of our approach.
null
null
10.18653/v1/2022.emnlp-main.188
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,318
inproceedings
zeng-etal-2022-mitigating
Mitigating Inconsistencies in Multimodal Sentiment Analysis under Uncertain Missing Modalities
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.189/
Zeng, Jiandian and Zhou, Jiantao and Liu, Tianyi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2924--2934
For the missing modality problem in Multimodal Sentiment Analysis (MSA), the inconsistency phenomenon occurs when the sentiment changes due to the absence of a modality. The absent modality that determines the overall semantic can be considered as a key missing modality. However, previous works all ignored the inconsistency phenomenon, simply discarding missing modalities or solely generating associated features from available modalities. The neglect of the key missing modality case may lead to incorrect semantic results. To tackle the issue, we propose an Ensemble-based Missing Modality Reconstruction (EMMR) network to detect and recover semantic features of the key missing modality. Specifically, we first learn joint representations with remaining modalities via a backbone encoder-decoder network. Then, based on the recovered features, we check the semantic consistency to determine whether the absent modality is crucial to the overall sentiment polarity. Once the inconsistency problem due to the key missing modality exists, we integrate several encoder-decoder approaches for better decision making. Extensive experiments and analyses are conducted on CMU-MOSI and IEMOCAP datasets, validating the superiority of the proposed method.
null
null
10.18653/v1/2022.emnlp-main.189
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,319
inproceedings
mao-etal-2022-convtrans
{C}onv{T}rans: Transforming Web Search Sessions for Conversational Dense Retrieval
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.190/
Mao, Kelong and Dou, Zhicheng and Qian, Hongjin and Mo, Fengran and Cheng, Xiaohua and Cao, Zhao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2935--2946
Conversational search provides users with a natural and convenient new search experience. Recently, conversational dense retrieval has shown to be a promising technique for realizing conversational search. However, as conversational search systems have not been widely deployed, it is hard to get large-scale real conversational search sessions and relevance labels to support the training of conversational dense retrieval. To tackle this data scarcity problem, previous methods focus on developing better few-shot learning approaches or generating pseudo relevance labels, but the data they use for training still heavily rely on manual generation. In this paper, we present ConvTrans, a data augmentation method that can automatically transform easily-accessible web search sessions into conversational search sessions to fundamentally alleviate the data scarcity problem for conversational dense retrieval. ConvTrans eliminates the gaps between these two types of sessions in terms of session quality and query form to achieve effective session transformation. Extensive evaluations on two widely used conversational search benchmarks, i.e., CAsT-19 and CAsT-20, demonstrate that the same model trained on the data generated by ConvTrans can achieve comparable retrieval performance as when trained on high-quality but expensive artificial conversational search data.
null
null
10.18653/v1/2022.emnlp-main.190
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,320
inproceedings
xi-etal-2022-musied
{MUSIED}: A Benchmark for Event Detection from Multi-Source Heterogeneous Informal Texts
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.191/
Xi, Xiangyu and Lv, Jianwei and Liu, Shuaipeng and Ye, Wei and Yang, Fan and Wan, Guanglu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2947--2964
Event detection (ED) identifies and classifies event triggers from unstructured texts, serving as a fundamental task for information extraction. Despite the remarkable progress achieved in the past several years, most research efforts focus on detecting events from formal texts (e.g., news articles, Wikipedia documents, financial announcements). Moreover, the texts in each dataset are either from a single source or multiple yet relatively homogeneous sources. With massive amounts of user-generated text accumulating on the Web and inside enterprises, identifying meaningful events in these informal texts, usually from multiple heterogeneous sources, has become a problem of significant practical value. As a pioneering exploration that expands event detection to the scenarios involving informal and heterogeneous texts, we propose a new large-scale Chinese event detection dataset based on user reviews, text conversations, and phone conversations in a leading e-commerce platform for food service. We carefully investigate the proposed dataset's textual informality and multi-domain heterogeneity characteristics by inspecting data samples quantitatively and qualitatively. Extensive experiments with state-of-the-art event detection methods verify the unique challenges posed by these characteristics, indicating that multi-domain informal event detection remains an open problem and requires further efforts. Our benchmark and code are released at https://github.com/myeclipse/MUSIED.
null
null
10.18653/v1/2022.emnlp-main.191
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,321
inproceedings
chen-etal-2022-reproducibility
Reproducibility Issues for {BERT}-based Evaluation Metrics
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.192/
Chen, Yanran and Belouadi, Jonas and Eger, Steffen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2965--2989
Reproducibility is of utmost concern in machine learning and natural language processing (NLP). In the field of natural language generation (especially machine translation), the seminal paper of Post (2018) has pointed out problems of reproducibility of the dominant metric, BLEU, at the time of publication. Nowadays, BERT-based evaluation metrics considerably outperform BLEU. In this paper, we ask whether results and claims from four recent BERT-based metrics can be reproduced. We find that reproduction of claims and results often fails because of (i) heavy undocumented preprocessing involved in the metrics, (ii) missing code and (iii) reporting weaker results for the baseline metrics. (iv) In one case, the problem stems from correlating not to human scores but to a wrong column in the csv file, inflating scores by 5 points. Motivated by the impact of preprocessing, we then conduct a second study where we examine its effects more closely (for one of the metrics). We find that preprocessing can have large effects, especially for highly inflectional languages. In this case, the effect of preprocessing may be larger than the effect of the aggregation mechanism (e.g., greedy alignment vs. Word Mover Distance).
null
null
10.18653/v1/2022.emnlp-main.192
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,322
inproceedings
chai-etal-2022-improving
Improving Multi-task Stance Detection with Multi-task Interaction Network
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.193/
Chai, Heyan and Tang, Siyu and Cui, Jinhao and Ding, Ye and Fang, Binxing and Liao, Qing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
2990--3000
Stance detection aims to identify people's standpoints expressed in the text towards a target, which can provide powerful information for various downstream tasks. Recent studies have proposed multi-task learning models that introduce sentiment information to boost stance detection. However, they neglect to explore capturing the fine-grained task-specific interaction between stance detection and sentiment tasks, thus degrading performance. To address this issue, this paper proposes a novel multi-task interaction network (MTIN) for improving the performance of stance detection and sentiment analysis tasks simultaneously. Specifically, we construct heterogeneous task-related graphs to automatically identify and adapt the roles that a word plays with respect to a specific task. Also, a multi-task interaction module is designed to capture the word-level interaction between tasks, so as to obtain richer task representations. Extensive experiments on two real-world datasets show that our proposed approach outperforms state-of-the-art methods in both stance detection and sentiment analysis tasks.
null
null
10.18653/v1/2022.emnlp-main.193
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,323
inproceedings
long-etal-2022-neural
Neural-based Mixture Probabilistic Query Embedding for Answering {FOL} queries on Knowledge Graphs
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.194/
Long, Xiao and Zhuang, Liansheng and Aodi, Li and Wang, Shafei and Li, Houqiang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3001--3013
Query embedding (QE){---}which aims to embed entities and first-order logical (FOL) queries in a vector space, has shown great power in answering FOL queries on knowledge graphs (KGs). Existing QE methods divide a complex query into a sequence of mini-queries according to its computation graph and perform logical operations on the answer sets of mini-queries to get answers. However, most of them assume that answer sets satisfy an individual distribution (e.g., Uniform, Beta, or Gaussian), which is often violated in real applications and limits their performance. In this paper, we propose a Neural-based Mixture Probabilistic Query Embedding Model (NMP-QEM) that encodes the answer set of each mini-query as a mixed Gaussian distribution with multiple means and covariance parameters, which can approximate any random distribution arbitrarily well in real KGs. Additionally, to overcome the difficulty in defining the closed solution of negation operation, we introduce neural-based logical operators of projection, intersection and negation for a mixed Gaussian distribution to answer all the FOL queries. Extensive experiments demonstrate that NMP-QEM significantly outperforms existing state-of-the-art methods on benchmark datasets. In NELL995, NMP-QEM achieves a 31{\%} relative improvement over the state-of-the-art.
null
null
10.18653/v1/2022.emnlp-main.194
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,324
inproceedings
cheng-etal-2022-improving
Improving Multi-turn Emotional Support Dialogue Generation with Lookahead Strategy Planning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.195/
Cheng, Yi and Liu, Wenge and Li, Wenjie and Wang, Jiashuo and Zhao, Ruihui and Liu, Bang and Liang, Xiaodan and Zheng, Yefeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3014--3026
Providing Emotional Support (ES) to soothe people in emotional distress is an essential capability in social interactions. Most existing research on building ES conversation systems only considered single-turn interactions with users, which was over-simplified. In comparison, multi-turn ES conversation systems can provide ES more effectively, but face several new technical challenges, including: (1) how to adopt appropriate support strategies to achieve the long-term dialogue goal of comforting the user's emotion; (2) how to dynamically model the user's state. In this paper, we propose a novel system MultiESC to address these issues. For strategy planning, drawing inspiration from the A* search algorithm, we propose lookahead heuristics to estimate the future user feedback after using particular strategies, which helps to select strategies that can lead to the best long-term effects. For user state modeling, MultiESC focuses on capturing users' subtle emotional expressions and understanding their emotion causes. Extensive experiments show that MultiESC significantly outperforms competitive baselines in both dialogue generation and strategy planning.
null
null
10.18653/v1/2022.emnlp-main.195
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,325
inproceedings
choubey-etal-2022-conformal
Conformal Predictor for Improving Zero-Shot Text Classification Efficiency
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.196/
Choubey, Prafulla Kumar and Bai, Yu and Wu, Chien-Sheng and Liu, Wenhao and Rajani, Nazneen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3027--3034
Pre-trained language models (PLMs) have been shown effective for zero-shot (0shot) text classification. 0shot models based on natural language inference (NLI) and next sentence prediction (NSP) employ cross-encoder architecture and infer by making a forward pass through the model for each label-text pair separately. This increases the computational cost to make inferences linearly in the number of labels. In this work, we improve the efficiency of such cross-encoder-based 0shot models by restricting the number of likely labels using another fast base classifier-based conformal predictor (CP) calibrated on samples labeled by the 0shot model. Since a CP generates prediction sets with coverage guarantees, it reduces the number of target labels without excluding the most probable label based on the 0shot model. We experiment with three intent and two topic classification datasets. With a suitable CP for each dataset, we reduce the average inference time for NLI- and NSP-based models by 25.6{\%} and 22.2{\%} respectively, without dropping performance below the predefined error rate of 1{\%}.
null
null
10.18653/v1/2022.emnlp-main.196
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,326
inproceedings
yi-etal-2022-effective
Effective and Efficient Query-aware Snippet Extraction for Web Search
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.197/
Yi, Jingwei and Wu, Fangzhao and Wu, Chuhan and Huang, Xiaolong and Jiao, Binxing and Sun, Guangzhong and Xie, Xing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3035--3046
Query-aware webpage snippet extraction is widely used in search engines to help users better understand the content of the returned webpages before clicking. The extracted snippet is expected to summarize the webpage in the context of the input query. Existing snippet extraction methods mainly rely on handcrafted features of overlapping words, which cannot capture deep semantic relationships between the query and webpages. Another idea is to extract the sentences which are most relevant to queries as snippets with existing text matching methods. However, these methods ignore the contextual information of webpages, which may be sub-optimal. In this paper, we propose an effective query-aware webpage snippet extraction method named DeepQSE. In DeepQSE, the concatenation of title, query and each candidate sentence serves as an input of query-aware sentence encoder, aiming to capture the fine-grained relevance between the query and sentences. Then, these query-aware sentence representations are modeled jointly through a document-aware relevance encoder to capture contextual information of the webpage. Since the query and each sentence are jointly modeled in DeepQSE, its online inference may be slow. Thus, we further propose an efficient version of DeepQSE, named Efficient-DeepQSE, which can significantly improve the inference speed of DeepQSE without affecting its performance. The core idea of Efficient-DeepQSE is to decompose the query-aware snippet extraction task into two stages, i.e., a coarse-grained candidate sentence selection stage where sentence representations can be cached, and a fine-grained relevance modeling stage. Experiments on two datasets validate the effectiveness and efficiency of our methods.
null
null
10.18653/v1/2022.emnlp-main.197
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,327
inproceedings
lee-etal-2022-need
You Only Need One Model for Open-domain Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.198/
Lee, Haejun and Kedia, Akhil and Lee, Jongwon and Paranjape, Ashwin and Manning, Christopher and Woo, Kyoung-Gu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3047--3060
Recent approaches to Open-domain Question Answering refer to an external knowledge base using a retriever model, optionally rerank passages with a separate reranker model and generate an answer using another reader model. Despite performing related tasks, the models have separate parameters and are weakly-coupled during training. We propose casting the retriever and the reranker as internal passage-wise attention mechanisms applied sequentially within the transformer architecture and feeding computed representations to the reader, with the hidden representations progressively refined at each stage. This allows us to use a single question answering model trained end-to-end, which is a more efficient use of model capacity and also leads to better gradient flow. We present a pre-training method to effectively train this architecture and evaluate our model on the Natural Questions and TriviaQA open datasets. For a fixed parameter budget, our model outperforms the previous state-of-the-art model by 1.0 and 0.7 exact match scores.
null
null
10.18653/v1/2022.emnlp-main.198
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,328
inproceedings
yuan-etal-2022-generative-entity
Generative Entity Typing with Curriculum Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.199/
Yuan, Siyu and Yang, Deqing and Liang, Jiaqing and Li, Zhixu and Liu, Jinxi and Huang, Jingyue and Xiao, Yanghua
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3061--3073
Entity typing aims to assign types to the entity mentions in given texts. The traditional classification-based entity typing paradigm has two unignorable drawbacks: 1) it fails to assign an entity to the types beyond the predefined type set, and 2) it can hardly handle few-shot and zero-shot situations where many long-tail types only have few or even no training instances. To overcome these drawbacks, we propose a novel generative entity typing (GET) paradigm: given a text with an entity mention, the multiple types for the role that the entity plays in the text are generated with a pre-trained language model (PLM). However, PLMs tend to generate coarse-grained types after fine-tuning upon the entity typing dataset. In addition, only the heterogeneous training data consisting of a small portion of human-annotated data and a large portion of auto-generated but low-quality data are provided for model training. To tackle these problems, we employ curriculum learning (CL) to train our GET model on heterogeneous data, where the curriculum could be self-adjusted with the self-paced learning according to its comprehension of the type granularity and data heterogeneity. Our extensive experiments upon the datasets of different languages and downstream tasks justify the superiority of our GET model over the state-of-the-art entity typing models. The code has been released on https://github.com/siyuyuan/GET.
null
null
10.18653/v1/2022.emnlp-main.199
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,329
inproceedings
he-tang-2022-setgner
{S}et{GNER}: General Named Entity Recognition as Entity Set Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.200/
He, Yuxin and Tang, Buzhou
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3074--3085
Recently, joint recognition of flat, nested and discontinuous entities has received increasing attention. Motivated by the observation that the target output of NER is essentially a set of sequences, we propose a novel entity set generation framework for general NER scenes in this paper. Different from sequence-to-sequence NER methods, our method does not force the entities to be generated in a predefined order and can get rid of the problem of error propagation and inefficient decoding. Distinguished from the set-prediction NER framework, our method treats each entity as a sequence and is capable of recognizing discontinuous mentions. Given an input sentence, the model first encodes the sentence in word-level and detects potential entity mentions based on the encoder's output, then reconstructs entity mentions from the detected entity heads in parallel. To let the encoder of our model capture better right-to-left semantic structure, we also propose an auxiliary Inverse Generation Training task. Extensive experiments show that our model (w/o. Inverse Generation Training) outperforms state-of-the-art generative NER models by a large margin on two discontinuous NER datasets, two nested NER datasets and one flat NER dataset. Besides, the auxiliary Inverse Generation Training task is found to further improve the model's performance on the five datasets.
null
null
10.18653/v1/2022.emnlp-main.200
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,330
inproceedings
liu-etal-2022-opinion
Opinion Summarization by Weak-Supervision from Mix-structured Data
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.201/
Liu, Yizhu and Jia, Qi and Zhu, Kenny
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3086--3096
Opinion summarization of multiple reviews suffers from the lack of reference summaries for training. Most previous approaches construct multiple reviews and their summary based on textual similarities between reviews, resulting in information mismatch between the review input and the summary. In this paper, we convert each review into a mix of structured and unstructured data, which we call opinion-aspect pairs (OAs) and implicit sentences (ISs). We propose a new method to synthesize training pairs of such mix-structured data as input and the textual summary as output, and design a summarization model with OA encoder and IS encoder. Experiments show that our approach outperforms previous methods on Yelp, Amazon and Rotten Tomatoes datasets.
null
null
10.18653/v1/2022.emnlp-main.201
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,331
inproceedings
li-etal-2022-multi-level
Multi-level Distillation of Semantic Knowledge for Pre-training Multilingual Language Model
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.202/
Li, Mingqi and Ding, Fei and Zhang, Dan and Cheng, Long and Hu, Hongxin and Luo, Feng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3097--3106
Pre-trained multilingual language models play an important role in cross-lingual natural language understanding tasks. However, existing methods did not focus on learning the semantic structure of representation, and thus could not optimize their performance. In this paper, we propose Multi-level Multilingual Knowledge Distillation (MMKD), a novel method for improving multilingual language models. Specifically, we employ a teacher-student framework to adopt rich semantic representation knowledge in English BERT. We propose token-, word-, sentence-, and structure-level alignment objectives to encourage multiple levels of consistency between source-target pairs and correlation similarity between teacher and student models. We conduct experiments on cross-lingual evaluation benchmarks including XNLI, PAWS-X, and XQuAD. Experimental results show that MMKD outperforms other baseline models of similar size on XNLI and XQuAD and obtains comparable performance on PAWS-X. Especially, MMKD obtains significant performance gains on low-resource languages.
null
null
10.18653/v1/2022.emnlp-main.202
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,332
inproceedings
ren-etal-2022-empowering
Empowering Dual-Encoder with Query Generator for Cross-Lingual Dense Retrieval
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.203/
Ren, Houxing and Shou, Linjun and Wu, Ning and Gong, Ming and Jiang, Daxin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3107--3121
In monolingual dense retrieval, lots of works focus on how to distill knowledge from cross-encoder re-ranker to dual-encoder retriever and these methods achieve better performance due to the effectiveness of cross-encoder re-ranker. However, we find that the performance of the cross-encoder re-ranker is heavily influenced by the number of training samples and the quality of negative samples, which is hard to obtain in the cross-lingual setting. In this paper, we propose to use a query generator as the teacher in the cross-lingual setting, which is less dependent on enough training samples and high-quality negative samples. In addition to traditional knowledge distillation, we further propose a novel enhancement method, which uses the query generator to help the dual-encoder align queries from different languages, but does not need any additional parallel sentences. The experimental results show that our method outperforms the state-of-the-art methods on two benchmark datasets.
null
null
10.18653/v1/2022.emnlp-main.203
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,333
inproceedings
wang-etal-2022-r2f
{R}2{F}: A General Retrieval, Reading and Fusion Framework for Document-level Natural Language Inference
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.204/
Wang, Hao and Cao, Yixin and Li, Yangguang and Huang, Zhen and Wang, Kun and Shao, Jing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3122--3134
Document-level natural language inference (DOCNLI) is a new challenging task in natural language processing, aiming at judging the entailment relationship between a pair of hypothesis and premise documents. Current datasets and baselines largely follow sentence-level settings, but fail to address the issues raised by longer documents. In this paper, we establish a general solution, named Retrieval, Reading and Fusion (R2F) framework, and a new setting, by analyzing the main challenges of DOCNLI: interpretability, long-range dependency, and cross-sentence inference. The basic idea of the framework is to simplify document-level task into a set of sentence-level tasks, and improve both performance and interpretability with the power of evidence. For each hypothesis sentence, the framework retrieves evidence sentences from the premise, and reads to estimate its credibility. Then the sentence-level results are fused to judge the relationship between the documents. For the setting, we contribute complementary evidence and entailment label annotation on hypothesis sentences, for interpretability study. Our experimental results show that R2F framework can obtain state-of-the-art performance and is robust for diverse evidence retrieval methods. Moreover, it can give more interpretable prediction results. Our model and code are released at https://github.com/phoenixsecularbird/R2F.
null
null
10.18653/v1/2022.emnlp-main.204
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,334
inproceedings
ghaddar-etal-2022-revisiting
Revisiting Pre-trained Language Models and their Evaluation for {A}rabic Natural Language Processing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.205/
Ghaddar, Abbas and Wu, Yimeng and Bagga, Sunyam and Rashid, Ahmad and Bibi, Khalil and Rezagholizadeh, Mehdi and Xing, Chao and Wang, Yasheng and Duan, Xinyu and Wang, Zhefeng and Huai, Baoxing and Jiang, Xin and Liu, Qun and Langlais, Phillippe
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3135--3151
There is a growing body of work in recent years to develop pre-trained language models (PLMs) for the Arabic language. This work addresses two major problems in existing Arabic PLMs that limit the progress of the Arabic NLU and NLG fields. First, existing Arabic PLMs are not well-explored and their pre-training can be improved significantly using a more methodical approach. Second, there is a lack of systematic and reproducible evaluation of these models in the literature. We revisit both the pre-training and evaluation of Arabic PLMs. In terms of pre-training, we explore the impact of the quality of the pretraining data, the size of the model, and the incorporation of character-level information on Arabic PLM. As a result, we release three new Arabic BERT-style models ( JABER, Char-JABER, and SABER), and two T5-style models (AT5S and AT5B). In terms of evaluation, we conduct a comprehensive empirical study to systematically evaluate the performance of existing state-of-the-art models on ALUE, a leaderboard-powered benchmark for Arabic NLU tasks, and on a subset of the Arabic generative tasks. We show that our models significantly outperform existing Arabic PLMs and achieve a new state-of-the-art performance on discriminative and generative Arabic NLU and NLG tasks. Our models and source code to reproduce results will be made available upon acceptance.
null
null
10.18653/v1/2022.emnlp-main.205
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,335
inproceedings
wang-etal-2022-kecp
{KECP}: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.206/
Wang, Jianing and Wang, Chengyu and Qiu, Minghui and Shi, Qiuhui and Wang, Hongbin and Huang, Jun and Gao, Ming
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3152--3163
Extractive Question Answering (EQA) is one of the most essential tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs). However, most existing approaches for MRC may perform poorly in the few-shot learning scenario. To solve this issue, we propose a novel framework named Knowledge Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to PLMs, we introduce a seminal paradigm for EQA that transforms the task into a non-autoregressive Masked Language Modeling (MLM) generation problem. Simultaneously, rich semantics from the external knowledge base (KB) and the passage context support enhancing the query's representations. In addition, to boost the performance of PLMs, we jointly train the model by the MLM and contrastive learning objectives. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in few-shot settings by a large margin.
null
null
10.18653/v1/2022.emnlp-main.206
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,336
inproceedings
wang-etal-2022-knowledge
Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.207/
Wang, Jianing and Huang, Wenkang and Qiu, Minghui and Shi, Qiuhui and Wang, Hongbin and Li, Xiang and Gao, Ming
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3164--3177
Knowledge-enhanced Pre-trained Language Model (PLM) has recently received significant attention, which aims to incorporate factual knowledge into PLMs. However, most existing methods modify the internal structures of fixed types of PLMs by stacking complicated modules, and introduce redundant and irrelevant factual knowledge from knowledge bases (KBs). In this paper, to address these problems, we introduce a seminal knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework KP-PLM. This framework can be flexibly combined with existing mainstream PLMs. Specifically, we first construct a knowledge sub-graph from KBs for each context. Then we design multiple continuous prompts rules and transform the knowledge sub-graph into natural language prompts. To further leverage the factual knowledge from these prompts, we propose two novel knowledge-aware self-supervised tasks including prompt relevance inspection and masked prompt modeling. Extensive experiments on multiple natural language understanding (NLU) tasks show the superiority of KP-PLM over other state-of-the-art methods in both full-resource and low-resource settings. Our source codes will be released upon the acceptance of the paper.
null
null
10.18653/v1/2022.emnlp-main.207
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,337
inproceedings
shen-etal-2022-evaluation
On the Evaluation Metrics for Paraphrase Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.208/
Shen, Lingfeng and Liu, Lemao and Jiang, Haiyun and Shi, Shuming
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3178--3190
In this paper we revisit automatic metrics for paraphrase evaluation and obtain two findings that disobey conventional wisdom: (1) Reference-free metrics achieve better performance than their reference-based counterparts. (2) Most commonly used metrics do not align well with human annotation. Underlying reasons behind the above findings are explored through additional experiments and in-depth analyses. Based on the experiments and analyses, we propose ParaScore, a new evaluation metric for paraphrase generation. It possesses the merits of reference-based and reference-free metrics and explicitly models lexical divergence. Based on our analysis and improvements, our proposed reference-based metric outperforms reference-free metrics. Experimental results demonstrate that ParaScore significantly outperforms existing metrics.
null
null
10.18653/v1/2022.emnlp-main.208
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,338
inproceedings
mai-etal-2022-curriculum
Curriculum Learning Meets Weakly Supervised Multimodal Correlation Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.209/
Mai, Sijie and Sun, Ya and Hu, Haifeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3191--3203
In the field of multimodal sentiment analysis (MSA), a few studies have leveraged the inherent modality correlation information stored in samples for self-supervised learning. However, they feed the training pairs in a random order without consideration of difficulty. Without human annotation, the generated training pairs of self-supervised learning often contain noise. If noisy or hard pairs are used for training at the easy stage, the model might be stuck in bad local optimum. In this paper, we inject curriculum learning into weakly supervised multimodal correlation learning. The weakly supervised correlation learning leverages the label information to generate scores for negative pairs to learn a more discriminative embedding space, where negative pairs are defined as two unimodal embeddings from different samples. To assist the correlation learning, we feed the training pairs to the model according to difficulty by the proposed curriculum learning, which consists of elaborately designed scoring and feeding functions. The scoring function computes the difficulty of pairs using pre-trained and current correlation predictors, where the pairs with large losses are defined as hard pairs. Notably, the hardest pairs are discarded in our algorithm, which are assumed as noisy pairs. Moreover, the feeding function takes the difference of correlation losses as feedback to determine the feeding actions ({\textquoteleft}stay', {\textquoteleft}step back', or {\textquoteleft}step forward'). The proposed method reaches state-of-the-art performance on MSA.
null
null
10.18653/v1/2022.emnlp-main.209
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,339
inproceedings
peng-etal-2022-rethinking
Rethinking Positional Encoding in Tree Transformer for Code Representation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.210/
Peng, Han and Li, Ge and Zhao, Yunfei and Jin, Zhi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3204--3214
Transformers are now widely used in code representation, and several recent works further develop tree Transformers to capture the syntactic structure in source code. Specifically, novel tree positional encodings have been proposed to incorporate inductive bias into the Transformer. In this work, we propose a novel tree Transformer that encodes node positions based on our new description method for tree structures. Technically, the local and global soft bias shown in previous works are both introduced as positional encodings of our Transformer model. Our model finally outperforms strong baselines on code summarization and completion tasks across two languages, demonstrating our model's effectiveness. Besides, extensive experiments and an ablation study show that combining both local and global paradigms is still helpful in improving model performance. We release our code at \url{https://github.com/AwdHanPeng/TreeTransformer}.
null
null
10.18653/v1/2022.emnlp-main.210
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,340
inproceedings
qi-etal-2022-rasat
{RASAT}: Integrating Relational Structures into Pretrained {S}eq2{S}eq Model for Text-to-{SQL}
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.211/
Qi, Jiexing and Tang, Jingyao and He, Ziwei and Wan, Xiangpeng and Cheng, Yu and Zhou, Chenghu and Wang, Xinbing and Zhang, Quanshi and Lin, Zhouhan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3215--3229
Relational structures such as schema linking and schema encoding have been validated as a key component to qualitatively translating natural language into SQL queries. However, introducing these structural relations comes at a price: they often result in a specialized model structure, which largely prohibits using large pretrained models in text-to-SQL. To address this problem, we propose RASAT: a Transformer seq2seq architecture augmented with relation-aware self-attention that could leverage a variety of relational structures while inheriting the pretrained parameters from the T5 model effectively. Our model can incorporate almost all types of existing relations in the literature, and in addition, we propose introducing co-reference relations for the multi-turn scenario. Experimental results on three widely used text-to-SQL datasets, covering both single-turn and multi-turn scenarios, have shown that RASAT could achieve competitive results in all three benchmarks, achieving state-of-the-art execution accuracy (75.5{\%} EX on Spider, 52.6{\%} IEX on SParC, and 37.4{\%} IEX on CoSQL).
null
null
10.18653/v1/2022.emnlp-main.211
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,341
inproceedings
zhai-etal-2022-com
{COM}-{MRC}: A {CO}ntext-Masked Machine Reading Comprehension Framework for Aspect Sentiment Triplet Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.212/
Zhai, Zepeng and Chen, Hao and Feng, Fangxiang and Li, Ruifan and Wang, Xiaojie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3230--3241
Aspect Sentiment Triplet Extraction (ASTE) aims to extract sentiment triplets from sentences, which was recently formalized as an effective machine reading comprehension (MRC) based framework. However, when facing multiple aspect terms, the MRC-based methods could fail due to the interference from other aspect terms. In this paper, we propose a novel \textit{COntext-Masked MRC} (COM-MRC) framework for ASTE. Our COM-MRC framework comprises three closely-related components: a context augmentation strategy, a discriminative model, and an inference method. Specifically, a context augmentation strategy is designed by enumerating all masked contexts for each aspect term. The discriminative model comprises four modules, i.e., aspect and opinion extraction modules, sentiment classification and aspect detection modules. In addition, a two-stage inference method first extracts all aspects and then identifies their opinions and sentiment through iteratively masking the aspects. Extensive experimental results on benchmark datasets show the effectiveness of our proposed COM-MRC framework, which outperforms state-of-the-art methods consistently.
null
null
10.18653/v1/2022.emnlp-main.212
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,342
inproceedings
zhong-etal-2022-cem
{CEM}: Machine-Human Chatting Handoff via Causal-Enhance Module
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.213/
Zhong, Shanshan and Qin, Jinghui and Huang, Zhongzhan and Li, Daifeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3242--3253
Aiming to ensure chatbot quality by predicting chatbot failure and enabling human-agent collaboration, Machine-Human Chatting Handoff (MHCH) has attracted lots of attention from both industry and academia in recent years. However, most existing methods mainly focus on the dialogue context or assist with global satisfaction prediction based on multi-task learning, which ignore the grounded relationships among the causal variables, like the user state and labor cost. These variables are significantly associated with handoff decisions, resulting in prediction bias and increased cost. Therefore, we propose the Causal-Enhance Module (CEM) by establishing the causal graph of MHCH based on these two variables, which is a simple yet effective module that can be easily plugged into existing MHCH methods. For the impact of users, we use the user state to correct the prediction bias according to the causal relationship of multi-task learning. For the labor cost, we train an auxiliary cost simulator to calculate unbiased labor cost through counterfactual learning so that a model becomes cost-aware. Extensive experiments conducted on four real-world benchmarks demonstrate the effectiveness of CEM in generally improving the performance of existing MHCH methods without any elaborated model crafting.
null
null
10.18653/v1/2022.emnlp-main.213
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,343
inproceedings
shi-etal-2022-nearest
Nearest Neighbor Zero-Shot Inference
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.214/
Shi, Weijia and Michael, Julian and Gururangan, Suchin and Zettlemoyer, Luke
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3254--3265
Retrieval-augmented language models (LMs) use non-parametric memory to substantially outperform their non-retrieval counterparts on perplexity-based evaluations, but it is an open question whether they achieve similar gains in few- and zero-shot end-task accuracy. We extensively study one such model, the k-nearest neighbor LM (kNN-LM), showing that the gains marginally transfer. The main challenge is to achieve coverage of the verbalizer tokens that define the different end-task class labels. To address this challenge, we also introduce kNN-Prompt, a simple and effective kNN-LM with automatically expanded fuzzy verbalizers (e.g. to expand {\textquotedblleft}terrible{\textquotedblright} to also include {\textquotedblleft}silly{\textquotedblright} and other task-specific synonyms for sentiment classification). Across nine diverse end-tasks, using kNN-Prompt with GPT-2 large yields significant performance boosts over strong zero-shot baselines (13.4{\%} absolute improvement over the base LM on average). We also show that other advantages of non-parametric augmentation hold for end tasks; kNN-Prompt is effective for domain adaptation with no further training, and gains increase with the size of the retrieval model.
null
null
10.18653/v1/2022.emnlp-main.214
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,344
inproceedings
gros-etal-2022-robots
Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.215/
Gros, David and Li, Yu and Yu, Zhou
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3266--3284
Dialog systems are often designed or trained to output human-like responses. However, some responses may be impossible for a machine to truthfully say (e.g. {\textquotedblleft}that movie made me cry{\textquotedblright}). Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human. We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources. Ratings are for two hypothetical machine embodiments: a futuristic humanoid robot and a digital assistant. We find that for some data-sources commonly used to train dialog systems, 20-30{\%} of utterances are not viewed as possible for a machine. Rating is marginally affected by machine embodiment. We explore qualitative and quantitative reasons for these ratings. Finally, we build classifiers and explore how modeling configuration might affect output permissibility, and discuss implications for building less falsely anthropomorphic dialog systems.
null
null
10.18653/v1/2022.emnlp-main.215
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,345
inproceedings
li-etal-2022-joint
A Joint Learning Framework for Restaurant Survival Prediction and Explanation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.216/
Li, Xin and Zhang, Xiaojie and JiaHao, Peng and Mao, Rui and Zhou, Mingyang and Xie, Xing and Liao, Hao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3285--3297
The bloom of the Internet and the recent breakthroughs in deep learning techniques open a new door to AI for E-commerce, with a trend of evolving from using a few financial factors such as liquidity and profitability to using more advanced AI techniques to process complex and multi-modal data. In this paper, we tackle the practical problem of restaurant survival prediction. We argue that traditional methods ignore two essential aspects, which are very helpful for the task: 1) modeling customer reviews and 2) jointly considering status prediction and result explanation. Thus, we propose a novel joint learning framework for explainable restaurant survival prediction based on the multi-modal data of user-restaurant interactions and users' textual reviews. Moreover, we design a graph neural network to capture the high-order interactions and design a co-attention mechanism to capture the most informative and meaningful signal from noisy textual reviews. Our results on two datasets show a significant and consistent improvement over the SOTA techniques (average 6.8{\%} improvement in prediction and 45.3{\%} improvement in explanation).
null
null
10.18653/v1/2022.emnlp-main.216
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,346
inproceedings
zhang-etal-2022-making
Making Pretrained Language Models Good Long-tailed Learners
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.217/
Zhang, Chen and Ren, Lei and Wang, Jingang and Wu, Wei and Song, Dawei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3298--3312
Prompt-tuning has shown appealing performance in few-shot classification by virtue of its capability in effectively exploiting pre-trained knowledge. This motivates us to check the hypothesis that prompt-tuning is also a promising choice for long-tailed classification, since the tail classes are intuitively few-shot ones. To achieve this aim, we conduct empirical studies to examine the hypothesis. The results demonstrate that prompt-tuning makes pretrained language models at least good long-tailed learners. For intuitions on why prompt-tuning can achieve good performance in long-tailed classification, we carry out in-depth analyses by progressively bridging the gap between prompt-tuning and commonly used finetuning. The summary is that the classifier structure and parameterization form the key to making good long-tailed learners, in comparison with the less important input structure. Finally, we verify the applicability of our finding to few-shot classification.
null
null
10.18653/v1/2022.emnlp-main.217
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,347
inproceedings
chen-etal-2022-unigeo
{U}ni{G}eo: Unifying Geometry Logical Reasoning via Reformulating Mathematical Expression
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.218/
Chen, Jiaqi and Li, Tong and Qin, Jinghui and Lu, Pan and Lin, Liang and Chen, Chongyu and Liang, Xiaodan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3313--3323
Geometry problem solving is a well-recognized testbed for evaluating the high-level multi-modal reasoning capability of deep models. In most existing works, two main geometry problems: calculation and proving, are usually treated as two specific tasks, hindering a deep model from unifying its reasoning capability on multiple math tasks. However, in essence, these two tasks have similar problem representations and overlapping math knowledge, which can improve the understanding and reasoning ability of a deep model on both tasks. Therefore, we construct a large-scale Unified Geometry problem benchmark, UniGeo, which contains 4,998 calculation problems and 9,543 proving problems. Each proving problem is annotated with a multi-step proof with reasons and mathematical expressions. The proof can be easily reformulated as a proving sequence that shares the same formats with the annotated program sequence for calculation problems. Naturally, we also present a unified multi-task Geometric Transformer framework, Geoformer, to tackle calculation and proving problems simultaneously in the form of sequence generation, which finally shows the reasoning ability can be improved on both tasks by unifying the formulation. Furthermore, we propose a Mathematical Expression Pretraining (MEP) method that aims to predict the mathematical expressions in the problem solution, thus improving the Geoformer model. Experiments on UniGeo demonstrate that our proposed Geoformer obtains state-of-the-art performance, outperforming the task-specific model NGS by over 5.6{\%} and 3.2{\%} in accuracy on calculation and proving problems, respectively.
null
null
10.18653/v1/2022.emnlp-main.218
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,348
inproceedings
yang-etal-2022-face
Face-Sensitive Image-to-Emotional-Text Cross-modal Translation for Multimodal Aspect-based Sentiment Analysis
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.219/
Yang, Hao and Zhao, Yanyan and Qin, Bing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3324--3335
Aspect-level multimodal sentiment analysis, which aims to identify the sentiment of the target aspect from multimodal data, recently has attracted extensive attention in the community of multimedia and natural language processing. Despite the recent success in textual aspect-based sentiment analysis, existing models have mainly focused on utilizing the object-level semantic information in the image while ignoring the explicit use of visual emotional cues, especially facial emotions. How to distill visual emotional cues and align them with the textual content remains a key challenge to solve the problem. In this work, we introduce a face-sensitive image-to-emotional-text translation (FITE) method, which focuses on capturing visual sentiment cues through facial expressions and selectively matching and fusing with the target aspect in textual modality. To the best of our knowledge, we are the first to explicitly utilize the emotional information from images in the multimodal aspect-based sentiment analysis task. Experimental results show that our method achieves state-of-the-art results on the Twitter-2015 and Twitter-2017 datasets. The improvement demonstrates the superiority of our model in capturing aspect-level sentiment in multimodal data with facial expressions.
null
null
10.18653/v1/2022.emnlp-main.219
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,349
inproceedings
zhang-etal-2022-fined
{F}ine{D}-Eval: Fine-grained Automatic Dialogue-Level Evaluation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.220/
Zhang, Chen and D{'}Haro, Luis Fernando and Zhang, Qiquan and Friedrichs, Thomas and Li, Haizhou
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3336--3355
Recent model-based reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment. However, they either perform turn-level evaluation or look at a single dialogue quality dimension. One would expect a good evaluation metric to assess multiple quality dimensions at the dialogue level. To this end, we are motivated to propose a multi-dimensional dialogue-level metric, which consists of three sub-metrics with each targeting a specific dimension. The sub-metrics are trained with novel self-supervised objectives and exhibit strong correlations with human judgment for their respective dimensions. Moreover, we explore two approaches to combine the sub-metrics: metric ensemble and multitask learning. Both approaches yield a holistic metric that significantly outperforms individual sub-metrics. Compared to the existing state-of-the-art metric, the combined metrics achieve around 16{\%} relative improvement on average across three high-quality dialogue-level evaluation benchmarks.
null
null
10.18653/v1/2022.emnlp-main.220
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,350
inproceedings
wu-zhao-2022-sentence
Sentence Representation Learning with Generative Objective rather than Contrastive Objective
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.221/
Wu, Bohong and Zhao, Hai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3356--3368
Though offering amazing contextualized token-level representations, current pre-trained language models pay less attention to accurately acquiring sentence-level representation during their self-supervised pre-training. However, contrastive objectives, which dominate current sentence representation learning, bring little linguistic interpretability and no performance guarantee on downstream semantic tasks. We instead propose a novel generative self-supervised learning objective based on phrase reconstruction. To overcome the drawbacks of previous generative methods, we carefully model intra-sentence structure by breaking down one sentence into pieces of important phrases. Empirical studies show that our generative learning achieves strong performance improvements and outperforms the current state-of-the-art contrastive methods not only on the STS benchmarks, but also on downstream semantic retrieval and reranking tasks. Our code is available at https://github.com/chengzhipanpan/PaSeR.
null
null
10.18653/v1/2022.emnlp-main.221
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,351
inproceedings
deng-etal-2022-rlprompt
{RLP}rompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.222/
Deng, Mingkai and Wang, Jianyu and Hsieh, Cheng-Ping and Wang, Yihan and Guo, Han and Shu, Tianmin and Song, Meng and Xing, Eric and Hu, Zhiting
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3369--3391
Prompting has shown impressive success in enabling large pre-trained language models (LMs) to perform diverse NLP tasks, especially with only few downstream data. Automatically finding the optimal prompt for each task, however, is challenging. Most existing work resorts to tuning *soft* prompts (e.g., embeddings) which fall short of interpretability, reusability across LMs, and applicability when gradients are not accessible. *Discrete* prompts, on the other hand, are difficult to optimize, and are often created by {\textquotedblleft}enumeration (e.g., paraphrasing)-then-selection{\textquotedblright} heuristics that do not explore the prompt space systematically. This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL). RLPrompt formulates a parameter-efficient policy network that generates the optimized discrete prompt after training with reward. To harness the complex and stochastic reward signals from the large LM environment, we incorporate effective reward stabilization that substantially enhances training efficiency. RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs), for both classification and generation tasks. Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing fine-tuning or prompting methods. Interestingly, the resulting optimized prompts are often ungrammatical gibberish text; and surprisingly, those gibberish prompts are transferrable between different LMs to retain significant performance, indicating that LM prompting may not follow human language patterns.
null
null
10.18653/v1/2022.emnlp-main.222
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,352
inproceedings
zhang-song-2022-discup
{D}is{C}up: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.223/
Zhang, Hanqing and Song, Dawei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3392--3406
Prompt learning with immensely large Causal Language Models (CLMs) has shown promise for attribute-controllable text generation (CTG). However, vanilla prompt tuning tends to imitate training corpus characteristics beyond the control attributes, resulting in poor generalization ability. Moreover, it is less able to capture the relationship between different attributes, further limiting the control performance. In this paper, we propose a new CTG approach, namely DisCup, which incorporates the attribute knowledge of a discriminator to optimize the control-prompts, steering a frozen CLM to produce attribute-specific texts. Specifically, the frozen CLM model, capable of producing multitudinous texts, is first used to generate the next-token candidates based on the context, so as to ensure the diversity of tokens to be predicted. Then, we leverage an attribute-discriminator to select desired/undesired tokens from those candidates, providing the inter-attribute knowledge. Finally, we bridge the above two traits by an unlikelihood objective for prompt-tuning. Extensive experimental results show that DisCup can achieve a new state-of-the-art control performance while maintaining efficient and high-quality text generation, only relying on around 10 virtual tokens.
null
null
10.18653/v1/2022.emnlp-main.223
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,353
inproceedings
he-etal-2022-cpl
{CPL}: Counterfactual Prompt Learning for Vision and Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.224/
He, Xuehai and Yang, Diji and Feng, Weixi and Fu, Tsu-Jui and Akula, Arjun and Jampani, Varun and Narayana, Pradyumna and Basu, Sugato and Wang, William Yang and Wang, Xin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3407--3418
Prompt tuning is a new few-shot transfer learning technique that only tunes the learnable prompt for pre-trained vision and language models such as CLIP. However, existing prompt tuning methods tend to learn spurious or entangled representations, which leads to poor generalization to unseen concepts. Towards non-spurious and efficient prompt learning from limited examples, this paper presents a novel Counterfactual Prompt Learning (CPL) method for vision and language models, which simultaneously employs counterfactual generation and contrastive learning in a joint optimization framework. Particularly, CPL constructs counterfactuals by identifying minimal non-spurious feature changes between semantically-similar positive and negative samples that cause concept change, and learns more generalizable prompt representations from both factual and counterfactual examples via contrastive learning. Extensive experiments demonstrate that CPL can obtain superior few-shot performance on different vision and language tasks than previous prompt tuning methods on CLIP. On image classification, we achieve 3.55{\%} average relative improvement on unseen classes across seven datasets; on image-text retrieval and visual question answering, we gain up to 4.09{\%} and 25.08{\%} relative improvements across three few-shot scenarios on unseen test sets respectively.
null
null
10.18653/v1/2022.emnlp-main.224
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,354
inproceedings
perez-etal-2022-red
Red Teaming Language Models with Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.225/
Perez, Ethan and Huang, Saffron and Song, Francis and Cai, Trevor and Ring, Roman and Aslanides, John and Glaese, Amelia and McAleese, Nat and Irving, Geoffrey
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3419--3448
Language Models (LMs) often cannot be deployed because of their potential to harm users in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using human annotators to hand-write test cases. However, human annotation is expensive, limiting the number and diversity of test cases. In this work, we automatically find cases where a target LM behaves in a harmful way, by generating test cases ({\textquotedblleft}red teaming{\textquotedblright}) using another LM. We evaluate the target LM's replies to generated test questions using a classifier trained to detect offensive content, uncovering tens of thousands of offensive replies in a 280B parameter LM chatbot. We explore several methods, from zero-shot generation to reinforcement learning, for generating test cases with varying levels of diversity and difficulty. Furthermore, we use prompt engineering to control LM-generated test cases to uncover a variety of other harms, automatically finding groups of people that the chatbot discusses in offensive ways, personal and hospital phone numbers generated as the chatbot's own contact info, leakage of private training data in generated text, and harms that occur over the course of a conversation. Overall, LM-based red teaming is one promising tool (among many needed) for finding and fixing diverse, undesirable LM behaviors before impacting users.
null
null
10.18653/v1/2022.emnlp-main.225
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,355
inproceedings
gao-etal-2022-caponimage
{C}ap{O}n{I}mage: Context-driven Dense-Captioning on Image
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.226/
Gao, Yiqi and Hou, Xinglin and Zhang, Yuanmeng and Ge, Tiezheng and Jiang, Yuning and Wang, Peng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3449--3465
Existing image captioning systems are dedicated to generating narrative captions for images, which are spatially detached from the image in presentation. However, texts can also be used as decorations on the image to highlight the key points and increase the attractiveness of images. In this work, we introduce a new task called captioning on image (CapOnImage), which aims to generate dense captions at different locations of the image based on contextual information. To fully exploit the surrounding visual context to generate the most suitable caption for each location, we propose a multi-modal pre-training model with multi-level pre-training tasks that progressively learn the correspondence between texts and image locations from easy to difficult. Since the model may generate redundant captions for nearby locations, we further enhance the location embedding with neighbor locations as context. For this new task, we also introduce a large-scale benchmark called CapOnImage2M, which contains 2.1 million product images, each with an average of 4.8 spatially localized captions. Compared with other image captioning model variants, our model achieves the best results in both captioning accuracy and diversity aspects.
null
null
10.18653/v1/2022.emnlp-main.226
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,356
inproceedings
wang-etal-2022-spanproto
{S}pan{P}roto: A Two-stage Span-based Prototypical Network for Few-shot Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.227/
Wang, Jianing and Wang, Chengyu and Tan, Chuanqi and Qiu, Minghui and Huang, Songfang and Huang, Jun and Gao, Ming
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3466--3476
Few-shot Named Entity Recognition (NER) aims to identify named entities with very little annotated data. Previous methods solve this problem based on token-wise classification, which ignores the information of entity boundaries, and inevitably the performance is affected by the massive non-entity tokens. To this end, we propose a seminal span-based prototypical network (SpanProto) that tackles few-shot NER via a two-stage approach, including span extraction and mention classification. In the span extraction stage, we transform the sequential tags into a global boundary matrix, enabling the model to focus on the explicit boundary information. For mention classification, we leverage prototypical learning to capture the semantic representations for each labeled span and make the model better adapt to novel-class entities. To further improve the model performance, we split out the false positives generated by the span extractor but not labeled in the current episode set, and then present a margin-based loss to separate them from each prototype region. Experiments over multiple benchmarks demonstrate that our model outperforms strong baselines by a large margin.
null
null
10.18653/v1/2022.emnlp-main.227
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,357
inproceedings
lucy-etal-2022-discovering
Discovering Differences in the Representation of People using Contextualized Semantic Axes
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.228/
Lucy, Li and Tadimeti, Divya and Bamman, David
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3477--3494
A common paradigm for identifying semantic differences across social and temporal contexts is the use of static word embeddings and their distances. In particular, past work has compared embeddings against {\textquotedblleft}semantic axes{\textquotedblright} that represent two opposing concepts. We extend this paradigm to BERT embeddings, and construct contextualized axes that mitigate the pitfall where antonyms have neighboring representations. We validate and demonstrate these axes on two people-centric datasets: occupations from Wikipedia, and multi-platform discussions in extremist, men`s communities over fourteen years. In both studies, contextualized semantic axes can characterize differences among instances of the same word type. In the latter study, we show that references to women and the contexts around them have become more detestable over time.
null
null
10.18653/v1/2022.emnlp-main.228
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,358
inproceedings
chen-etal-2022-generating
Generating Literal and Implied Subquestions to Fact-check Complex Claims
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.229/
Chen, Jifan and Sriram, Aniruddh and Choi, Eunsol and Durrett, Greg
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3495--3516
Verifying political claims is a challenging task, as politicians can use various tactics to subtly misrepresent the facts for their agenda. Existing automatic fact-checking systems fall short here, and their predictions like {\textquotedblleft}half-true{\textquotedblright} are not very useful in isolation, since it is unclear which parts of a claim are true and which are not. In this work, we focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim. We present CLAIMDECOMP, a dataset of decompositions for over 1000 claims. Given a claim and its verification paragraph written by fact-checkers, our trained annotators write subquestions covering both explicit propositions of the original claim and its implicit facets, such as asking about additional political context that changes our view of the claim`s veracity. We study whether state-of-the-art models can generate such subquestions, showing that these models generate reasonable questions to ask, but predicting the comprehensive set of subquestions from the original claim without evidence remains challenging. We further show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers, suggesting that they can be useful pieces of a fact-checking pipeline.
null
null
10.18653/v1/2022.emnlp-main.229
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,359
inproceedings
bremerman-etal-2022-machine
Machine Translation Robustness to Natural Asemantic Variation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.230/
Bremerman, Jacob and Ren, Xiang and May, Jonathan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3517--3532
Current Machine Translation (MT) models still struggle with more challenging input, such as noisy data and tail-end words and phrases. Several works have addressed this robustness issue by identifying specific categories of noise and variation then tuning models to perform better on them. An important yet under-studied category involves minor variations in nuance (non-typos) that preserve meaning w.r.t. the target language. We introduce and formalize this category as Natural Asemantic Variation (NAV) and investigate it in the context of MT robustness. We find that existing MT models fail when presented with NAV data, but we demonstrate strategies to improve performance on NAV by fine-tuning them with human-generated variations. We also show that NAV robustness can be transferred across languages and find that synthetic perturbations can achieve some but not all of the benefits of organic NAV data.
null
null
10.18653/v1/2022.emnlp-main.230
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,360
inproceedings
shi-etal-2022-natural
Natural Language to Code Translation with Execution
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.231/
Shi, Freda and Fried, Daniel and Ghazvininejad, Marjan and Zettlemoyer, Luke and Wang, Sida I.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3533--3546
Generative models of code, pretrained on large corpora of programs, have shown great success in translating natural language to code (Chen et al., 2021; Austin et al., 2021; Li et al., 2022, inter alia). While these models do not explicitly incorporate program semantics (i.e., execution results) during training, they are able to generate correct solutions for many problems. However, choosing a single correct program from a generated set for each problem remains challenging. In this work, we introduce execution result{--}based minimum Bayes risk decoding (MBR-EXEC) for program selection and show that it improves the few-shot performance of pretrained code models on natural-language-to-code tasks. We select output programs from a generated candidate set by marginalizing over program implementations that share the same semantics. Because exact equivalence is intractable, we execute each program on a small number of test inputs to approximate semantic equivalence. Across datasets, execution or simulated execution significantly outperforms the methods that do not involve program semantics. We find that MBR-EXEC consistently improves over all execution-unaware selection methods, suggesting it as an effective approach for natural language to code translation.
null
null
10.18653/v1/2022.emnlp-main.231
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,361
inproceedings
sultan-shahaf-2022-life
Life is a Circus and We are the Clowns: Automatically Finding Analogies between Situations and Processes
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.232/
Sultan, Oren and Shahaf, Dafna
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3547--3562
Analogy-making gives rise to reasoning, abstraction, flexible categorization and counterfactual inference {--} abilities lacking in even the best AI systems today. Much research has suggested that analogies are key to non-brittle systems that can adapt to new domains. Despite their importance, analogies received little attention in the NLP community, with most research focusing on simple word analogies. Work that tackled more complex analogies relied heavily on manually constructed, hard-to-scale input representations. In this work, we explore a more realistic, challenging setup: our input is a pair of natural language procedural texts, describing a situation or a process (e.g., how the heart works/how a pump works). Our goal is to automatically extract entities and their relations from the text and find a mapping between the different domains based on relational similarity (e.g., blood is mapped to water). We develop an interpretable, scalable algorithm and demonstrate that it identifies the correct mappings 87{\%} of the time for procedural texts and 94{\%} for stories from cognitive-psychology literature. We show it can extract analogies from a large dataset of procedural texts, achieving 79{\%} precision (analogy prevalence in data: 3{\%}). Lastly, we demonstrate that our algorithm is robust to paraphrasing the input texts.
null
null
10.18653/v1/2022.emnlp-main.232
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,362
inproceedings
blevins-zettlemoyer-2022-language
Language Contamination Helps Explains the Cross-lingual Capabilities of {E}nglish Pretrained Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.233/
Blevins, Terra and Zettlemoyer, Luke
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3563--3574
English pretrained language models, which make up the backbone of many modern NLP systems, require huge amounts of unlabeled training data. These models are generally presented as being trained only on English text but have been found to transfer surprisingly well to other languages. We investigate this phenomenon and find that common English pretraining corpora actually contain significant amounts of non-English text: even when less than 1{\%} of data is not English (well within the error rate of strong language classifiers), this leads to hundreds of millions of foreign language tokens in large-scale datasets. We then demonstrate that even these small percentages of non-English data facilitate cross-lingual transfer for models trained on them, with target language performance strongly correlated to the amount of in-language data seen during pretraining. In light of these findings, we argue that no model is truly monolingual when pretrained at scale, which should be considered when evaluating cross-lingual transfer.
null
null
10.18653/v1/2022.emnlp-main.233
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,363
inproceedings
blevins-etal-2022-analyzing
Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.234/
Blevins, Terra and Gonen, Hila and Zettlemoyer, Luke
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3575--3590
The emergent cross-lingual transfer seen in multilingual pretrained models has sparked significant interest in studying their behavior. However, because these analyses have focused on fully trained multilingual models, little is known about the dynamics of the multilingual pretraining process. We investigate when these models acquire their in-language and cross-lingual abilities by probing checkpoints taken from throughout XLM-R pretraining, using a suite of linguistic tasks. Our analysis shows that the model achieves high in-language performance early on, with lower-level linguistic skills acquired before more complex ones. In contrast, the point in pretraining when the model learns to transfer cross-lingually differs across language pairs. Interestingly, we also observe that, across many languages and tasks, the final model layer exhibits significant performance degradation over time, while linguistic knowledge propagates to lower layers of the network. Taken together, these insights highlight the complexity of multilingual pretraining and the resulting varied behavior for different languages over time.
null
null
10.18653/v1/2022.emnlp-main.234
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,364
inproceedings
cheng-etal-2022-neural
Neural Machine Translation with Contrastive Translation Memories
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.235/
Cheng, Xin and Gao, Shen and Liu, Lemao and Zhao, Dongyan and Yan, Rui
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3591--3601
Retrieval-augmented Neural Machine Translation models have been successful in many translation scenarios. Different from previous works that make use of mutually similar but redundant translation memories (TMs), we propose a new retrieval-augmented NMT to model contrastively retrieved translation memories that are holistically similar to the source sentence while individually contrastive to each other providing maximal information gain in three phases. First, in TM retrieval phase, we adopt contrastive retrieval algorithm to avoid redundancy and uninformativeness of similar translation pieces. Second, in memory encoding stage, given a set of TMs we propose a novel Hierarchical Group Attention module to gather both local context of each TM and global context of the whole TM set. Finally, in training phase, a Multi-TM contrastive learning objective is introduced to learn salient feature of each TM with respect to target sentence. Experimental results show that our framework obtains substantial improvements over strong baselines in the benchmark dataset.
null
null
10.18653/v1/2022.emnlp-main.235
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,365
inproceedings
zheng-etal-2022-distilling
Distilling Causal Effect from Miscellaneous Other-Class for Continual Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.236/
Zheng, Junhao and Liang, Zhanxian and Chen, Haibin and Ma, Qianli
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3602--3615
Continual Learning for Named Entity Recognition (CL-NER) aims to learn a growing number of entity types over time from a stream of data. However, simply learning Other-Class in the same way as new entity types amplifies the catastrophic forgetting and leads to a substantial performance drop. The main cause behind this is that Other-Class samples usually contain old entity types, and the old knowledge in these Other-Class samples is not preserved properly. Thanks to the causal inference, we identify that the forgetting is caused by the missing causal effect from the old data. To this end, we propose a unified causal framework to retrieve the causality from both new entity types and Other-Class. Furthermore, we apply curriculum learning to mitigate the impact of label noise and introduce a self-adaptive weight for balancing the causal effects between new entity types and Other-Class. Experimental results on three benchmark datasets show that our method outperforms the state-of-the-art method by a large margin. Moreover, our method can be combined with the existing state-of-the-art methods to improve the performance in CL-NER.
null
null
10.18653/v1/2022.emnlp-main.236
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,366
inproceedings
li-etal-2022-exploring-secrets
Exploring the Secrets Behind the Learning Difficulty of Meaning Representations for Semantic Parsing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.237/
Li, Zhenwen and Guo, Jiaqi and Liu, Qian and Lou, Jian-Guang and Xie, Tao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3616--3625
Previous research has shown that the design of Meaning Representation (MR) greatly influences the final model performance of a neural semantic parser. Therefore, designing a good MR is a long-term goal for semantic parsing. However, it is still an art as there is no quantitative indicator that can tell us which MR among a set of candidates may have the best final model performance. In practice, in order to select an MR design, researchers often have to go through the whole training-testing process for all design candidates, and the process often costs a lot. In this paper, we propose a data-aware metric called ISS (denoting incremental structural stability) of MRs, and demonstrate that ISS is highly correlated with the final performance. The finding shows that ISS can be used as an indicator for MR design to avoid the costly training-testing process.
null
null
10.18653/v1/2022.emnlp-main.237
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,367
inproceedings
mcinerney-etal-2022-thats
That`s the Wrong Lung! Evaluating and Improving the Interpretability of Unsupervised Multimodal Encoders for Medical Data
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.238/
McInerney, Jered and Young, Geoffrey and van de Meent, Jan-Willem and Wallace, Byron
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3626--3648
Pretraining multimodal models on Electronic Health Records (EHRs) provides a means of learning representations that can transfer to downstream tasks with minimal supervision. Recent multimodal models induce soft local alignments between image regions and sentences. This is of particular interest in the medical domain, where alignments might highlight regions in an image relevant to specific phenomena described in free-text. While past work has suggested that attention {\textquotedblleft}heatmaps{\textquotedblright} can be interpreted in this manner, there has been little evaluation of such alignments. We compare alignments from a state-of-the-art multimodal (image and text) model for EHR with human annotations that link image regions to sentences. Our main finding is that the text has an often weak or unintuitive influence on attention; alignments do not consistently reflect basic anatomical information. Moreover, synthetic modifications {---} such as substituting {\textquotedblleft}left{\textquotedblright} for {\textquotedblleft}right{\textquotedblright} {---} do not substantially influence highlights. Simple techniques such as allowing the model to opt out of attending to the image and few-shot finetuning show promise in terms of their ability to improve alignments with very little or no supervision. We make our code and checkpoints open-source.
null
null
10.18653/v1/2022.emnlp-main.238
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,368
inproceedings
kolonin-ramesh-2022-unsupervised
Unsupervised Tokenization Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.239/
Kolonin, Anton and Ramesh, Vignav
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3649--3664
In the presented study, we discover that the so-called {\textquotedblleft}transition freedom{\textquotedblright} metric appears superior for unsupervised tokenization purposes in comparison to statistical metrics such as mutual information and conditional probability, providing F-measure scores in range from 0.71 to 1.0 across explored multilingual corpora. We find that different languages require different offshoots of that metric (such as derivative, variance, and {\textquotedblleft}peak values{\textquotedblright}) for successful tokenization. Larger training corpora do not necessarily result in better tokenization quality, while compressing the models by eliminating statistically weak evidence tends to improve performance. The proposed unsupervised tokenization technique provides quality better than or comparable to lexicon-based ones, depending on the language.
null
null
10.18653/v1/2022.emnlp-main.239
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,369
inproceedings
wang-etal-2022-template
A Template-based Method for Constrained Neural Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.240/
Wang, Shuo and Li, Peng and Tan, Zhixing and Tu, Zhaopeng and Sun, Maosong and Liu, Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3665--3679
Machine translation systems are expected to cope with various types of constraints in many practical scenarios. While neural machine translation (NMT) has achieved strong performance in unconstrained cases, it is non-trivial to impose pre-specified constraints into the translation process of NMT models. Although many approaches have been proposed to address this issue, most existing methods can not satisfy the following three desiderata at the same time: (1) high translation quality, (2) high match accuracy, and (3) low latency. In this work, we propose a template-based method that can yield results with high translation quality and match accuracy and the inference speed of our method is comparable with unconstrained NMT models. Our basic idea is to rearrange the generation of constrained and unconstrained tokens through a template. Our method does not require any changes in the model architecture and the decoding algorithm. Experimental results show that the proposed template-based approach can outperform several representative baselines in both lexically and structurally constrained translation tasks.
null
null
10.18653/v1/2022.emnlp-main.240
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,370
inproceedings
zhang-etal-2022-pats
{PATS}: Sensitivity-aware Noisy Learning for Pretrained Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.241/
Zhang, Yupeng and Zhang, Hongzhi and Wang, Sirui and Wu, Wei and Li, Zhoujun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3680--3687
A wide range of NLP tasks benefit from the fine-tuning of pretrained language models (PLMs). However, a number of redundant parameters which contribute less to the downstream task are observed in a directly fine-tuned model. We consider the gap between pretraining and downstream tasks hinders the training of these redundant parameters, and results in a suboptimal performance of the overall model. In this paper, we present PATS (Perturbation According To Sensitivity), a noisy training mechanism which considers each parameter`s importance in the downstream task to help fine-tune PLMs. The main idea of PATS is to add bigger noise to parameters with lower sensitivity and vice versa, in order to activate more parameters' contributions to downstream tasks without affecting the sensitive ones much. Extensive experiments conducted on different tasks of the GLUE benchmark show PATS can consistently empower the fine-tuning of different sizes of PLMs, and the parameters in the well-performing models always have more concentrated distributions of sensitivities, which experimentally proves the effectiveness of our method.
null
null
10.18653/v1/2022.emnlp-main.241
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,371
inproceedings
lim-lauw-2022-towards
Towards Reinterpreting Neural Topic Models via Composite Activations
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.242/
Lim, Jia Peng and Lauw, Hady
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3688--3703
Most Neural Topic Models (NTM) use a variational auto-encoder framework producing K topics limited to the size of the encoder`s output. These topics are interpreted through the selection of the top activated words via the weights or reconstructed vector of the decoder that are directly connected to each neuron. In this paper, we present a model-free two-stage process to reinterpret NTM and derive further insights on the state of the trained model. Firstly, building on the original information from a trained NTM, we generate a pool of potential candidate {\textquotedblleft}composite topics{\textquotedblright} by exploiting possible co-occurrences within the original set of topics, which decouples the strict interpretation of topics from the original NTM. This is followed by a combinatorial formulation to select a final set of composite topics, which we evaluate for coherence and diversity on a large external corpus. Lastly, we employ a user study to derive further insights on the reinterpretation process.
null
null
10.18653/v1/2022.emnlp-main.242
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,372
inproceedings
yuan-etal-2022-shot
Few-shot Query-Focused Summarization with Prefix-Merging
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.243/
Yuan, Ruifeng and Wang, Zili and Cao, Ziqiang and Li, Wenjie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3704--3714
Query-focused summarization has been considered as an important extension for text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the problem of lacking high-quality large-scale datasets. In this paper, we investigate the idea that whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we are allowed to integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation for how prefix-merging works.
null
null
10.18653/v1/2022.emnlp-main.243
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,373
inproceedings
lai-etal-2022-cross
Cross-Align: Modeling Deep Cross-lingual Interactions for Word Alignment
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.244/
Lai, Siyu and Yang, Zhen and Meng, Fandong and Chen, Yufeng and Xu, Jinan and Zhou, Jie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3715--3725
Word alignment, which aims to extract lexicon translation equivalents between source and target sentences, serves as a fundamental tool for natural language processing. Recent studies in this area have yielded substantial improvements by generating alignments from contextualized embeddings of the pre-trained multilingual language models. However, we find that the existing approaches capture few interactions between the input sentence pairs, which degrades the word alignment quality severely, especially for the ambiguous words in the monolingual context. To remedy this problem, we propose Cross-Align to model deep interactions between the input sentence pairs, in which the source and target sentences are encoded separately with the shared self-attention modules in the shallow layers, while cross-lingual interactions are explicitly constructed by the cross-attention modules in the upper layers. Besides, to train our model effectively, we propose a two-stage training framework, where the model is trained with a simple Translation Language Modeling (TLM) objective in the first stage and then finetuned with a self-supervised alignment objective in the second stage. Experiments show that the proposed Cross-Align achieves the state-of-the-art (SOTA) performance on four out of five language pairs.
null
null
10.18653/v1/2022.emnlp-main.244
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,374
inproceedings
sun-etal-2022-bertscore
{BERTS}core is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.245/
Sun, Tianxiang and He, Junliang and Qiu, Xipeng and Huang, Xuanjing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3726--3739
Automatic evaluation metrics are crucial to the development of generative systems. In recent years, pre-trained language model (PLM) based metrics, such as BERTScore, have been commonly adopted in various generation tasks. However, it has been demonstrated that PLMs encode a range of stereotypical societal biases, leading to a concern about the fairness of PLMs as metrics. To that end, this work presents the first systematic study on the social bias in PLM-based metrics. We demonstrate that popular PLM-based metrics exhibit significantly higher social bias than traditional metrics on 6 sensitive attributes, namely race, gender, religion, physical appearance, age, and socioeconomic status. In-depth analysis suggests that choosing paradigms (matching, regression, or generation) of the metric has a greater impact on fairness than choosing PLMs. In addition, we develop debiasing adapters that are injected into PLM layers, mitigating bias in PLM-based metrics while retaining high performance for evaluating text generation.
null
null
10.18653/v1/2022.emnlp-main.245
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,375
inproceedings
wang-etal-2022-hpt
{HPT}: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.246/
Wang, Zihan and Wang, Peiyi and Liu, Tianyu and Lin, Binghuai and Cao, Yunbo and Sui, Zhifang and Wang, Houfeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3740--3751
Hierarchical text classification (HTC) is a challenging subtask of multi-label classification due to its complex label hierarchy. Recently, the pretrained language models (PLM) have been widely adopted in HTC through a fine-tuning paradigm. However, in this paradigm, there exists a huge gap between the classification tasks with sophisticated label hierarchy and the masked language model (MLM) pretraining tasks of PLMs and thus the potential of PLMs cannot be fully tapped. To bridge the gap, in this paper, we propose HPT, a Hierarchy-aware Prompt Tuning method to handle HTC from a multi-label MLM perspective. Specifically, we construct a dynamic virtual template and label words that take the form of soft prompts to fuse the label hierarchy knowledge and introduce a zero-bounded multi-label cross-entropy loss to harmonize the objectives of HTC and MLM. Extensive experiments show HPT achieves state-of-the-art performances on 3 popular HTC datasets and is adept at handling the imbalance and low resource situations. Our code is available at https://github.com/wzh9969/HPT.
null
null
10.18653/v1/2022.emnlp-main.246
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,376
inproceedings
sultan-etal-2022-overfit
Not to Overfit or Underfit the Source Domains? An Empirical Study of Domain Generalization in Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.247/
Sultan, Md Arafat and Sil, Avi and Florian, Radu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3752--3761
Machine learning models are prone to overfitting their training (source) domains, which is commonly believed to be the reason why they falter in novel target domains. Here we examine the contrasting view that multi-source domain generalization (DG) is first and foremost a problem of mitigating source domain underfitting: models not adequately learning the signal already present in their multi-domain training data. Experiments on a reading comprehension DG benchmark show that as a model learns its source domains better{---}using familiar methods such as knowledge distillation (KD) from a bigger model{---}its zero-shot out-of-domain utility improves at an even faster pace. Improved source domain learning also demonstrates superior out-of-domain generalization over three popular existing DG approaches that aim to limit overfitting. Our implementation of KD-based domain generalization is available via PrimeQA at: https://ibm.biz/domain-generalization-with-kd.
null
null
10.18653/v1/2022.emnlp-main.247
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,377
inproceedings
sap-etal-2022-neural
Neural Theory-of-Mind? On the Limits of Social Intelligence in Large {LM}s
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.248/
Sap, Maarten and Le Bras, Ronan and Fried, Daniel and Choi, Yejin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3762--3780
Social intelligence and Theory of Mind (ToM), i.e., the ability to reason about the different mental states, intents, and reactions of all people involved, allow humans to effectively navigate and understand everyday social interactions. As NLP systems are used in increasingly complex social situations, their ability to grasp social dynamics becomes crucial. In this work, we examine the open question of social intelligence and Theory of Mind in modern NLP systems from an empirical and theory-based perspective. We show that one of today's largest language models (GPT-3; Brown et al., 2020) lacks this kind of social intelligence out of the box, using two tasks: SocialIQa (Sap et al., 2019), which measures models' ability to understand intents and reactions of participants of social interactions, and ToMi (Le, Boureau, and Nickel, 2019), which measures whether models can infer mental states and realities of participants of situations. Our results show that models struggle substantially at these Theory of Mind tasks, with well-below-human accuracies of 55{\%} and 60{\%} on SocialIQa and ToMi, respectively. To conclude, we draw on theories from pragmatics to contextualize this shortcoming of large language models, by examining the limitations stemming from their data, neural architecture, and training paradigms. Challenging the prevalent narrative that only scale is needed, we posit that person-centric NLP approaches might be more effective towards neural Theory of Mind.
null
null
10.18653/v1/2022.emnlp-main.248
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,378
inproceedings
sachan-etal-2022-improving
Improving Passage Retrieval with Zero-Shot Question Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.249/
Sachan, Devendra and Lewis, Mike and Joshi, Mandar and Aghajanyan, Armen and Yih, Wen-tau and Pineau, Joelle and Zettlemoyer, Luke
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3781--3797
We propose a simple and effective re-ranking method for improving passage retrieval in open question answering. The re-ranker re-scores retrieved passages with a zero-shot question generation model, which uses a pre-trained language model to compute the probability of the input question conditioned on a retrieved passage. This approach can be applied on top of any retrieval method (e.g. neural or keyword-based), does not require any domain- or task-specific training (and therefore is expected to generalize better to data distribution shifts), and provides rich cross-attention between query and passage (i.e. it must explain every token in the question). When evaluated on a number of open-domain retrieval datasets, our re-ranker improves strong unsupervised retrieval models by 6{\%}-18{\%} absolute and strong supervised models by up to 12{\%} in terms of top-20 passage retrieval accuracy. We also obtain new state-of-the-art results on full open-domain question answering by simply adding the new re-ranker to existing models with no further changes.
null
null
10.18653/v1/2022.emnlp-main.249
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,379
inproceedings
hsu-etal-2022-summarizing
Summarizing Community-based Question-Answer Pairs
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.250/
Hsu, Ting-Yao and Suhara, Yoshi and Wang, Xiaolan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3798--3808
Community-based Question Answering (CQA), which allows users to acquire their desired information, has increasingly become an essential component of online services in various domains such as E-commerce, travel, and dining. However, an overwhelming number of CQA pairs makes it difficult for users without a particular intent to find useful information spread over CQA pairs. To help users quickly digest the key information, we propose the novel CQA summarization task, which aims to create a concise summary from CQA pairs. To this end, we first design a multi-stage data annotation process and create a benchmark dataset, COQASUM, based on the Amazon QA corpus. We then compare a collection of extractive and abstractive summarization methods and establish a strong baseline approach, DedupLED, for the CQA summarization task. Our experiment further confirms two key challenges for CQA summarization: sentence-type transfer and duplicate removal. Our data and code are publicly available.
null
null
10.18653/v1/2022.emnlp-main.250
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,380
inproceedings
stacey-etal-2022-logical
Logical Reasoning with Span-Level Predictions for Interpretable and Robust {NLI} Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.251/
Stacey, Joe and Minervini, Pasquale and Dubossarsky, Haim and Rei, Marek
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3809--3823
Current Natural Language Inference (NLI) models achieve impressive results, sometimes outperforming humans when evaluating on in-distribution test sets. However, as these models are known to learn from annotation artefacts and dataset biases, it is unclear to what extent the models are learning the task of NLI instead of learning from shallow heuristics in their training data. We address this issue by introducing a logical reasoning framework for NLI, creating highly transparent model decisions that are based on logical rules. Unlike prior work, we show that improved interpretability can be achieved without decreasing the predictive accuracy. We almost fully retain performance on SNLI, while also identifying the exact hypothesis spans that are responsible for each model prediction. Using the e-SNLI human explanations, we verify that our model makes sensible decisions at a span level, despite not using any span labels during training. We can further improve model performance and the span-level decisions by using the e-SNLI explanations during training. Finally, our model is more robust in a reduced data setting. When training with only 1,000 examples, out-of-distribution performance improves on the MNLI matched and mismatched validation sets by 13{\%} and 16{\%} relative to the baseline. Training with fewer observations yields further improvements, both in-distribution and out-of-distribution.
null
null
10.18653/v1/2022.emnlp-main.251
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,381
inproceedings
de-kock-vlachos-2022-disagree
How to disagree well: Investigating the dispute tactics used on {W}ikipedia
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.252/
De Kock, Christine and Stafford, Tom and Vlachos, Andreas
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3824--3837
Disagreements are frequently studied from the perspective of either detecting toxicity or analysing argument structure. We propose a framework of dispute tactics which unifies these two perspectives, as well as other dialogue acts which play a role in resolving disputes, such as asking questions and providing clarification. This framework includes a preferential ordering among rebuttal-type tactics, ranging from ad hominem attacks to refuting the central argument. Using this framework, we annotate 213 disagreements (3,865 utterances) from Wikipedia Talk pages. This allows us to investigate research questions around the tactics used in disagreements; for instance, we provide empirical validation of the approach to disagreement recommended by Wikipedia. We develop models for multilabel prediction of dispute tactics in an utterance, achieving the best performance with a transformer-based label powerset model. Adding an auxiliary task to incorporate the ordering of rebuttal tactics further yields a statistically significant increase. Finally, we show that these annotations can be used to provide useful additional signals to improve performance on the task of predicting escalation.
null
null
10.18653/v1/2022.emnlp-main.252
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,382
inproceedings
kim-skiena-2022-chapter
Chapter Ordering in Novels
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.253/
Kim, Allen and Skiena, Steve
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3838--3848
Understanding narrative flow and text coherence in long-form documents (novels) remains an open problem in NLP. To gain insight, we explore the task of chapter ordering, reconstructing the original order of chapters in a novel given a random permutation of the text. This can be seen as extending the well-known sentence ordering task to vastly larger documents: our task deals with over 9,000 novels with an average of twenty chapters each, versus standard sentence ordering datasets averaging only 5-8 sentences. We formulate the task of reconstructing order as a constraint solving problem, using minimum feedback arc set and traveling salesman problem optimization criteria, where the weights of the graph are generated based on models for character occurrences and chapter boundary detection, using relational chapter scores derived from RoBERTa. Our best methods yield a Spearman correlation of 0.59 on this novel and challenging task, substantially above baseline.
null
null
10.18653/v1/2022.emnlp-main.253
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,383
inproceedings
liu-etal-2022-open
Open-ended Knowledge Tracing for Computer Science Education
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.254/
Liu, Naiming and Wang, Zichao and Baraniuk, Richard and Lan, Andrew
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3849--3862
In educational applications, knowledge tracing refers to the problem of estimating students' time-varying concept/skill mastery level from their past responses to questions and predicting their future performance. One key limitation of most existing knowledge tracing methods is that they treat student responses to questions as binary-valued, i.e., whether they are correct or incorrect. Response correctness analysis/prediction is straightforward, but it ignores important information regarding mastery, especially for open-ended questions. In contrast, exact student responses can provide much more information. In this paper, we conduct the first exploration into open-ended knowledge tracing (OKT) by studying the new task of predicting students' exact open-ended responses to questions. Our work is grounded in the domain of computer science education with programming questions. We develop an initial solution to the OKT problem, a student knowledge-guided code generation approach that combines program synthesis methods using language models with student knowledge tracing methods. We also conduct a series of quantitative and qualitative experiments on a real-world student code dataset to validate and demonstrate the promise of OKT.
null
null
10.18653/v1/2022.emnlp-main.254
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,384
inproceedings
sen-etal-2022-logical
Logical Neural Networks for Knowledge Base Completion with Embeddings {\&} Rules
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.255/
Sen, Prithviraj and Carvalho, Breno William and Abdelaziz, Ibrahim and Kapanipathi, Pavan and Roukos, Salim and Gray, Alexander
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3863--3875
Knowledge base completion (KBC) has benefited greatly from learning explainable rules in a human-interpretable dialect such as first-order logic. Rule-based KBC has so far mainly focused on learning one of two types of rules: conjunction-of-disjunctions and disjunction-of-conjunctions. We qualitatively show, via examples, that one of these has an advantage over the other when it comes to achieving high-quality KBC. To the best of our knowledge, we are the first to propose learning both kinds of rules within a common framework. To this end, we propose to utilize logical neural networks (LNNs), a powerful neuro-symbolic AI framework that can express both kinds of rules and learn them end-to-end using gradient-based optimization. Our in-depth experiments show that our LNN-based approach to learning rules for KBC leads to roughly 10{\%} relative improvements, if not more, over SotA rule-based KBC methods. Moreover, by showing how to combine our proposed methods with knowledge graph embeddings, we further achieve an additional 7.5{\%} relative improvement.
null
null
10.18653/v1/2022.emnlp-main.255
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,385
inproceedings
wang-etal-2022-medclip
{M}ed{CLIP}: Contrastive Learning from Unpaired Medical Images and Text
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.256/
Wang, Zifeng and Wu, Zhenbang and Agarwal, Dinesh and Sun, Jimeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
3876--3887
Existing vision-text contrastive learning like CLIP aims to match the paired image and caption embeddings while pushing others apart, which improves representation transferability and supports zero-shot prediction. However, medical image-text datasets are orders of magnitude smaller than the general image-caption datasets available from the internet. Moreover, previous methods encounter many false negatives, i.e., images and reports from separate patients probably carry the same semantics but are wrongly treated as negatives. In this paper, we decouple images and texts for multimodal contrastive learning, thus scaling the usable training data combinatorially at low cost. We also propose to replace the InfoNCE loss with a semantic matching loss based on medical knowledge to eliminate false negatives in contrastive learning. We show that MedCLIP is a simple yet effective framework: it outperforms state-of-the-art methods on zero-shot prediction, supervised classification, and image-text retrieval. Surprisingly, we observe that with only 20K pre-training data, MedCLIP wins over the state-of-the-art method (using 200K data). The code is available at https://github.com/RyanWangZf/MedCLIP.
null
null
10.18653/v1/2022.emnlp-main.256
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,386