Dataset columns (name: dtype, value summary):

entry_type: stringclasses, 4 distinct values
citation_key: stringlengths, 10–110 characters
title: stringlengths, 6–276 characters
editor: stringclasses, 723 distinct values
month: stringclasses, 69 distinct values
year: stringdate, 1963-01-01 to 2022-01-01
address: stringclasses, 202 distinct values
publisher: stringclasses, 41 distinct values
url: stringlengths, 34–62 characters
author: stringlengths, 6–2.07k characters
booktitle: stringclasses, 861 distinct values
pages: stringlengths, 1–12 characters
abstract: stringlengths, 302–2.4k characters
journal: stringclasses, 5 distinct values
volume: stringclasses, 24 distinct values
doi: stringlengths, 20–39 characters
n: stringclasses, 3 distinct values
wer: stringclasses, 1 distinct value
uas: null
language: stringclasses, 3 distinct values
isbn: stringclasses, 34 distinct values
recall: null
number: stringclasses, 8 distinct values
a: null
b: null
c: null
k: null
f1: stringclasses, 4 distinct values
r: stringclasses, 2 distinct values
mci: stringclasses, 1 distinct value
p: stringclasses, 2 distinct values
sd: stringclasses, 1 distinct value
female: stringclasses, 0 distinct values
m: stringclasses, 0 distinct values
food: stringclasses, 1 distinct value
f: stringclasses, 1 distinct value
note: stringclasses, 20 distinct values
__index_level_0__: int64, 22k–106k
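The rows below follow this schema. As a minimal sketch of how such a split could be loaded and filtered with the Hugging Face `datasets` library, assuming the data is hosted on the Hub (the repository id below is a placeholder, not the actual dataset name):

```python
# Hedged sketch: load a dataset with the schema above and keep conference papers.
# "user/acl-anthology-bib" is a hypothetical repo id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")

# Columns follow the schema listed above; most metric-style columns
# (n, wer, uas, f1, ...) are null for ordinary bibliography rows.
print(ds.column_names)

# Keep only inproceedings entries that have an abstract.
papers = ds.filter(
    lambda row: row["entry_type"] == "inproceedings" and row["abstract"] is not None
)
print(len(papers), papers[0]["citation_key"])
```

Sample rows: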
entry_type: inproceedings
citation_key: zhang-etal-2022-ga
title: {GA}-{SAM}: Gradient-Strength based Adaptive Sharpness-Aware Minimization for Improved Generalization
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.257/
author: Zhang, Zhiyuan and Luo, Ruixuan and Su, Qi and Sun, Xu
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3888--3903
abstract:
Recently, Sharpness-Aware Minimization (SAM) algorithm has shown state-of-the-art generalization abilities in vision tasks. It demonstrates that flat minima tend to imply better generalization abilities. However, it has some difficulty implying SAM to some natural language tasks, especially to models with drastic gradient changes, such as RNNs. In this work, we analyze the relation between the flatness of the local minimum and its generalization ability from a novel and straightforward theoretical perspective. We propose that the shift of the training and test distributions can be equivalently seen as a virtual parameter corruption or perturbation, which can explain why flat minima that are robust against parameter corruptions or perturbations have better generalization performances. On its basis, we propose a Gradient-Strength based Adaptive Sharpness-Aware Minimization (GA-SAM) algorithm to help to learn algorithms find flat minima that generalize better. Results in various language benchmarks validate the effectiveness of the proposed GA-SAM algorithm on natural language tasks.
doi: 10.18653/v1/2022.emnlp-main.257
__index_level_0__: 27,387

entry_type: inproceedings
citation_key: yang-etal-2022-sparse
title: Sparse Teachers Can Be Dense with Knowledge
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.258/
author: Yang, Yi and Zhang, Chen and Song, Dawei
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3904--3915
abstract:
Recent advances in distilling pretrained language models have discovered that, besides the expressiveness of knowledge, the student-friendliness should be taken into consideration to realize a truly knowledgeable teacher. Based on a pilot study, we find that over-parameterized teachers can produce expressive yet student-unfriendly knowledge and are thus limited in overall knowledgeableness. To remove the parameters that result in student-unfriendliness, we propose a sparse teacher trick under the guidance of an overall knowledgeable score for each teacher parameter. The knowledgeable score is essentially an interpolation of the expressiveness and student-friendliness scores. The aim is to ensure that the expressive parameters are retained while the student-unfriendly ones are removed. Extensive experiments on the GLUE benchmark show that the proposed sparse teachers can be dense with knowledge and lead to students with compelling performance in comparison with a series of competitive baselines.
doi: 10.18653/v1/2022.emnlp-main.258
__index_level_0__: 27,388

entry_type: inproceedings
citation_key: sun-etal-2022-bbtv2
title: {BBT}v2: Towards a Gradient-Free Future with Large Language Models
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.259/
author: Sun, Tianxiang and He, Zhengfu and Qian, Hong and Zhou, Yunhua and Huang, Xuanjing and Qiu, Xipeng
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3916--3930
abstract:
Most downstream adaptation methods tune all or part of the parameters of pre-trained models (PTMs) through gradient descent, where the tuning cost increases linearly with the growth of the model size.By contrast, gradient-free methods only require the forward computation of the PTM to tune the prompt, retaining the benefits of efficient tuning and deployment.Though, past work on gradient-free tuning often introduces gradient descent to seek a good initialization of prompt and lacks versatility across tasks and PTMs.In this paper, we present BBTv2, an improved version of Black-Box Tuning, to drive PTMs for few-shot learning.We prepend continuous prompts to every layer of the PTM and propose a divide-and-conquer gradient-free algorithm to optimize the prompts at different layers alternately.Extensive experiments across various tasks and PTMs show that BBTv2 can achieve comparable performance to full model tuning and state-of-the-art parameter-efficient methods (e.g., Adapter, LoRA, BitFit, etc.) under few-shot settings while maintaining much fewer tunable parameters.
doi: 10.18653/v1/2022.emnlp-main.259
__index_level_0__: 27,389

entry_type: inproceedings
citation_key: zhang-etal-2022-passage
title: Passage-Mask: A Learnable Regularization Strategy for Retriever-Reader Models
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.260/
author: Zhang, Shujian and Gong, Chengyue and Liu, Xingchao
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3931--3943
abstract:
Retriever-reader models achieve competitive performance across many different NLP tasks such as open question answering and dialogue conversations. In this work, we notice these models easily overfit the top-rank retrieval passages and standard training fails to reason over the entire retrieval passages. We introduce a learnable passage mask mechanism which desensitizes the impact from the top-rank retrieval passages and prevents the model from overfitting. Controlling the gradient variance with fewer mask candidates and selecting the mask candidates with one-shot bi-level optimization, our learnable regularization strategy enforces the answer generation to focus on the entire retrieval passages. Experiments on different tasks across open question answering, dialogue conversation, and fact verification show that our method consistently outperforms its baselines. Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many NLP tasks.
doi: 10.18653/v1/2022.emnlp-main.260
__index_level_0__: 27,390

entry_type: inproceedings
citation_key: white-etal-2022-mixed
title: Mixed-effects transformers for hierarchical adaptation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.261/
author: White, Julia and Goodman, Noah and Hawkins, Robert
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3944--3954
abstract:
Language differs dramatically from context to context. To some degree, large language models like GPT-3 account for such variation by conditioning on strings of initial input text, or prompts. However, prompting can be ineffective when contexts are sparse, out-of-sample, or extra-textual. In this paper, we introduce the mixed-effects transformer (MET), a novel approach for learning hierarchically-structured prefixes{---} lightweight modules prepended to an input sequence{---} to account for structured variation in language use. Specifically, we show how the popular class of mixed-effects regression models may be extended to transformer-based architectures using a regularized prefix-tuning procedure with dropout. We evaluate this approach on several domain-adaptation benchmarks, finding that it learns contextual variation from minimal data while generalizing well to unseen contexts.
doi: 10.18653/v1/2022.emnlp-main.261
__index_level_0__: 27,391

entry_type: inproceedings
citation_key: zhao-etal-2022-measuring
title: On Measuring the Intrinsic Few-Shot Hardness of Datasets
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.262/
author: Zhao, Xinran and Murty, Shikhar and Manning, Christopher
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3955--3963
abstract:
While advances in pre-training have led to dramatic improvements in few-shot learning of NLP tasks, there is limited understanding of what drives successful few-shot adaptation in datasets. In particular, given a new dataset and a pre-trained model, what properties of the dataset make it few-shot learnable, and are these properties independent of the specific adaptation techniques used? We consider an extensive set of recent few-shot learning methods and show that their performance across a large number of datasets is highly correlated, showing that few-shot hardness may be intrinsic to datasets, for a given pre-trained model. To estimate intrinsic few-shot hardness, we then propose a simple and lightweight metric called Spread that captures the intuition that few-shot learning is made possible by exploiting feature-space invariances between training and test samples. Our metric better accounts for few-shot hardness compared to existing notions of hardness and is {\textasciitilde}8-100x faster to compute.
doi: 10.18653/v1/2022.emnlp-main.262
__index_level_0__: 27,392

entry_type: inproceedings
citation_key: xing-tsang-2022-group
title: Group is better than individual: Exploiting Label Topologies and Label Relations for Joint Multiple Intent Detection and Slot Filling
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.263/
author: Xing, Bowen and Tsang, Ivor
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3964--3975
abstract:
Recent joint multiple intent detection and slot filling models employ label embeddings to achieve the semantics-label interactions.However, they treat all labels and label embeddings as uncorrelated individuals, ignoring the dependencies among them. Besides, they conduct the decoding for the two tasks independently, without leveraging the correlations between them.Therefore, in this paper, we first construct a Heterogeneous Label Graph (HLG) containing two kinds of topologies: (1) statistical dependencies based on labels' co-occurrence patterns and hierarchies in slot labels; (2) rich relations among the label nodes.Then we propose a novel model termed ReLa-Net.It can capture beneficial correlations among the labels from HLG.The label correlations are leveraged to enhance semantic-label interactions. Moreover, we also propose the label-aware inter-dependent decoding mechanism to further exploit the label correlations for decoding. Experiment results show that our ReLa-Net significantly outperforms previous models.Remarkably, ReLa-Net surpasses the previous best model by over 20{\%} in terms of overall accuracy on MixATIS dataset.
doi: 10.18653/v1/2022.emnlp-main.263
__index_level_0__: 27,393

entry_type: inproceedings
citation_key: gu-etal-2022-empirical
title: An Empirical Study on Finding Spans
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.264/
author: Gu, Weiwei and Zheng, Boyuan and Chen, Yunmo and Chen, Tongfei and Van Durme, Benjamin
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3976--3983
abstract:
We present an empirical study on methods for span finding, the selection of consecutive tokens in text for some downstream tasks. We focus on approaches that can be employed in training end-to-end information extraction systems, and find there is no definitive solution without considering task properties, and provide our observations to help with future design choices: 1) a tagging approach often yields higher precision while span enumeration and boundary prediction provide higher recall; 2) span type information can benefit a boundary prediction approach; 3) additional contextualization does not help span finding in most cases.
doi: 10.18653/v1/2022.emnlp-main.264
__index_level_0__: 27,394

entry_type: inproceedings
citation_key: wang-etal-2022-mgdoc
title: {MGD}oc: Pre-training with Multi-granular Hierarchy for Document Image Understanding
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.265/
author: Wang, Zilong and Gu, Jiuxiang and Tensmeyer, Chris and Barmpalios, Nikolaos and Nenkova, Ani and Sun, Tong and Shang, Jingbo and Morariu, Vlad
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3984--3993
abstract:
Document images are a ubiquitous source of data where the text is organized in a complex hierarchical structure ranging from fine granularity (e.g., words), medium granularity (e.g., regions such as paragraphs or figures), to coarse granularity (e.g., the whole page). The spatial hierarchical relationships between content at different levels of granularity are crucial for document image understanding tasks. Existing methods learn features from either word-level or region-level but fail to consider both simultaneously. Word-level models are restricted by the fact that they originate from pure-text language models, which only encode the word-level context. In contrast, region-level models attempt to encode regions corresponding to paragraphs or text blocks into a single embedding, but they perform worse with additional word-level features. To deal with these issues, we propose MGDoc, a new multi-modal multi-granular pre-training framework that encodes page-level, region-level, and word-level information at the same time. MGDoc uses a unified text-visual encoder to obtain multi-modal features across different granularities, which makes it possible to project the multi-granular features into the same hyperspace. To model the region-word correlation, we design a cross-granular attention mechanism and specific pre-training tasks for our model to reinforce the model of learning the hierarchy between regions and words. Experiments demonstrate that our proposed model can learn better features that perform well across granularities and lead to improvements in downstream tasks.
doi: 10.18653/v1/2022.emnlp-main.265
__index_level_0__: 27,395

entry_type: inproceedings
citation_key: huang-etal-2022-understanding
title: Understanding Jargon: Combining Extraction and Generation for Definition Modeling
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.266/
author: Huang, Jie and Shao, Hanyin and Chang, Kevin Chen-Chuan and Xiong, Jinjun and Hwu, Wen-mei
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 3994--4004
abstract:
Can machines know what twin prime is? From the composition of this phrase, machines may guess twin prime is a certain kind of prime, but it is still difficult to deduce exactly what twin stands for without additional knowledge. Here, twin prime is a jargon - a specialized term used by experts in a particular field. Explaining jargon is challenging since it usually requires domain knowledge to understand. Recently, there is an increasing interest in extracting and generating definitions of words automatically. However, existing approaches, either extraction or generation, perform poorly on jargon. In this paper, we propose to combine extraction and generation for jargon definition modeling: first extract self- and correlative definitional information of target jargon from the Web and then generate the final definitions by incorporating the extracted definitional information. Our framework is remarkably simple but effective: experiments demonstrate our method can generate high-quality definitions for jargon and outperform state-of-the-art models significantly, e.g., BLEU score from 8.76 to 22.66 and human-annotated score from 2.34 to 4.04.
doi: 10.18653/v1/2022.emnlp-main.266
__index_level_0__: 27,396

entry_type: inproceedings
citation_key: kim-etal-2022-prosocialdialog
title: {P}rosocial{D}ialog: A Prosocial Backbone for Conversational Agents
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.267/
author: Kim, Hyunwoo and Yu, Youngjae and Jiang, Liwei and Lu, Ximing and Khashabi, Daniel and Kim, Gunhee and Choi, Yejin and Sap, Maarten
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4005--4029
abstract:
Most existing dialogue systems fail to respond properly to potentially unsafe user utterances by either ignoring or passively agreeing with them. To address this issue, we introduce ProsocialDialog, the first large-scale multi-turn dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.With this dataset, we introduce a dialogue safety detection module, Canary, capable of generating RoTs given conversational context, and a socially-informed dialogue agent, Prost. Empirical results show that Prost generates more socially acceptable dialogues compared to other state-of-the-art language and dialogue models in both in-domain and out-of-domain settings. Additionally, Canary effectively guides conversational agents and off-the-shelf language models to generate significantly more prosocial responses. Our work highlights the promise and importance of creating and steering conversational AI to be socially responsible.
doi: 10.18653/v1/2022.emnlp-main.267
__index_level_0__: 27,397

entry_type: inproceedings
citation_key: jiang-etal-2022-exploiting
title: Exploiting Global and Local Hierarchies for Hierarchical Text Classification
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.268/
author: Jiang, Ting and Wang, Deqing and Sun, Leilei and Chen, Zhongzhi and Zhuang, Fuzhen and Yang, Qinghong
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4030--4039
abstract:
Hierarchical text classification aims to leverage label hierarchy in multi-label text classification. Existing methods encode label hierarchy in a global view, where label hierarchy is treated as the static hierarchical structure containing all labels. Since global hierarchy is static and irrelevant to text samples, it makes these methods hard to exploit hierarchical information. Contrary to global hierarchy, local hierarchy as a structured labels hierarchy corresponding to each text sample. It is dynamic and relevant to text samples, which is ignored in previous methods. To exploit global and local hierarchies, we propose Hierarchy-guided BERT with Global and Local hierarchies (HBGL), which utilizes the large-scale parameters and prior language knowledge of BERT to model both global and local hierarchies. Moreover, HBGL avoids the intentional fusion of semantic and hierarchical modules by directly modeling semantic and hierarchical information with BERT. Compared with the state-of-the-art method HGCLR, our method achieves significant improvement on three benchmark datasets.
doi: 10.18653/v1/2022.emnlp-main.268
__index_level_0__: 27,398

entry_type: inproceedings
citation_key: wu-etal-2022-semantic
title: Semantic-aware Contrastive Learning for More Accurate Semantic Parsing
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.269/
author: Wu, Shan and Xin, Chunlei and Chen, Bo and Han, Xianpei and Sun, Le
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4040--4052
abstract:
Since the meaning representations are detailed and accurate annotations which express fine-grained sequence-level semtantics, it is usually hard to train discriminative semantic parsers via Maximum Likelihood Estimation (MLE) in an autoregressive fashion. In this paper, we propose a semantic-aware contrastive learning algorithm, which can learn to distinguish fine-grained meaning representations and take the overall sequence-level semantic into consideration. Specifically, a multi-level online sampling algorithm is proposed to sample confusing and diverse instances. Three semantic-aware similarity functions are designed to accurately measure the distance between meaning representations as a whole. And a ranked contrastive loss is proposed to pull the representations of the semantic-identical instances together and push negative instances away. Experiments on two standard datasets show that our approach achieves significant improvements over MLE baselines and gets state-of-the-art performances by simply applying semantic-aware contrastive learning on a vanilla Seq2Seq model.
doi: 10.18653/v1/2022.emnlp-main.269
__index_level_0__: 27,399

entry_type: inproceedings
citation_key: chen-etal-2022-scientific
title: Scientific Paper Extractive Summarization Enhanced by Citation Graphs
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.270/
author: Chen, Xiuying and Li, Mingzhe and Gao, Shen and Yan, Rui and Gao, Xin and Zhang, Xiangliang
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4053--4062
abstract:
In a citation graph, adjacent paper nodes share related scientific terms and topics. The graph thus conveys unique structure information of document-level relatedness that can be utilized in the paper summarization task, for exploring beyond the intra-document information.In this work, we focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings.We first propose a Multi-granularity Unsupervised Summarization model (MUS) as a simple and low-cost solution to the task.MUS finetunes a pre-trained encoder model on the citation graph by link prediction tasks.Then, the abstract sentences are extracted from the corresponding paper considering multi-granularity information.Preliminary results demonstrate that citation graph is helpful even in a simple unsupervised framework.Motivated by this, we next propose a Graph-based Supervised Summarizationmodel (GSS) to achieve more accurate results on the task when large-scale labeled data are available.Apart from employing the link prediction as an auxiliary task, GSS introduces a gated sentence encoder and a graph information fusion module to take advantage of the graph information to polish the sentence representation.Experiments on a public benchmark dataset show that MUS and GSS bring substantial improvements over the prior state-of-the-art model.
doi: 10.18653/v1/2022.emnlp-main.270
__index_level_0__: 27,400

entry_type: inproceedings
citation_key: nguyen-etal-2022-hardness
title: Hardness-guided domain adaptation to recognise biomedical named entities under low-resource scenarios
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.271/
author: Nguyen, Ngoc Dang and Du, Lan and Buntine, Wray and Chen, Changyou and Beare, Richard
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4063--4071
abstract:
Domain adaptation is an effective solution to data scarcity in low-resource scenarios. However, when applied to token-level tasks such as bioNER, domain adaptation methods often suffer from the challenging linguistic characteristics that clinical narratives possess, which leads to unsatsifactory performance. In this paper, we present a simple yet effective hardness-guided domain adaptation framework for bioNER tasks that can effectively leverage the domain hardness information to improve the adaptability of the learnt model in the low-resource scenarios. Experimental results on biomedical datasets show that our model can achieve significant performance improvement over the recently published state-of-the-art (SOTA) MetaNER model.
doi: 10.18653/v1/2022.emnlp-main.271
__index_level_0__: 27,401

entry_type: inproceedings
citation_key: dong-etal-2022-syntactic
title: Syntactic Multi-view Learning for Open Information Extraction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.272/
author: Dong, Kuicai and Sun, Aixin and Kim, Jung-Jae and Li, Xiaoli
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4072--4083
abstract:
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences. Traditional rule-based or statistical models were developed based on syntactic structure of sentence, identified by syntactic parsers. However, previous neural OpenIE models under-explored the useful syntactic information. In this paper, we model both constituency and dependency trees into word-level graphs, and enable neural OpenIE to learn from the syntactic structures. To better fuse heterogeneous information from the two graphs, we adopt multi-view learning to capture multiple relationships from them. Finally, the finetuned constituency and dependency representations are aggregated with sentential semantic representations for tuple generation. Experiments show that both constituency and dependency information, and the multi-view learning are effective.
doi: 10.18653/v1/2022.emnlp-main.272
__index_level_0__: 27,402

entry_type: inproceedings
citation_key: jiang-etal-2022-trips
title: {TRIPS}: Efficient Vision-and-Language Pre-training with Text-Relevant Image Patch Selection
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.273/
author: Jiang, Chaoya and Xu, Haiyang and Li, Chenliang and Yan, Ming and Ye, Wei and Zhang, Shikun and Bi, Bin and Huang, Songfang
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4084--4096
abstract:
Vision Transformers (ViTs) have been widely used in large-scale Vision and Language Pre-training (VLP) models. Though previous VLP works have proved the effectiveness of ViTs, they still suffer from computational efficiency brought by the long visual sequence. To tackle this problem, in this paper, we propose an efficient vision-and-language pre-training model with Text-Relevant Image Patch Selection, namely TRIPS, which reduces the visual sequence progressively with a text-guided patch-selection layer in the visual backbone for efficient training and inference. The patch-selection layer can dynamically compute text-dependent visual attention to identify the attentive image tokens with text guidance and fuse inattentive ones in an end-to-end manner. Meanwhile, TRIPS does not introduce extra parameters to ViTs. Experimental results on a variety of popular benchmark datasets demonstrate that TRIPS gain a speedup of 40{\%} over previous similar VLP models, yet with competitive or better downstream task performance.
doi: 10.18653/v1/2022.emnlp-main.273
__index_level_0__: 27,403

entry_type: inproceedings
citation_key: dai-etal-2022-cgodial
title: {CG}o{D}ial: A Large-Scale Benchmark for {C}hinese Goal-oriented Dialog Evaluation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.274/
author: Dai, Yinpei and He, Wanwei and Li, Bowen and Wu, Yuchuan and Cao, Zheng and An, Zhongqi and Sun, Jian and Li, Yongbin
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4097--4111
abstract:
Practical dialog systems need to deal with various knowledge sources, noisy user expressions, and the shortage of annotated data. To better solve the above problems, we propose CGoDial, a new challenging and comprehensive Chinese benchmark for multi-domain Goal-oriented Dialog evaluation. It contains 96,763 dialog sessions, and 574,949 dialog turns totally, covering three datasets with different knowledge sources: 1) a slot-based dialog (SBD) dataset with table-formed knowledge, 2) a flow-based dialog (FBD) dataset with tree-formed knowledge, and a retrieval-based dialog (RBD) dataset with candidate-formed knowledge. To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing. The proposed experimental settings include the combinations of training with either the entire training set or a few-shot training set, and testing with either the standard test set or a hard test subset, which can assess model capabilities in terms of general prediction, fast adaptability and reliable robustness.
doi: 10.18653/v1/2022.emnlp-main.274
__index_level_0__: 27,404

entry_type: inproceedings
citation_key: gao-etal-2022-kernel
title: Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.275/
author: Gao, SongYang and Dou, Shihan and Zhang, Qi and Huang, Xuanjing
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4112--4122
abstract:
Dataset bias has attracted increasing attention recently for its detrimental effect on the generalization ability of fine-tuned models. The current mainstream solution is designing an additional shallow model to pre-identify biased instances. However, such two-stage methods scale up the computational complexity of training process and obstruct valid feature information while mitigating bias.To address this issue, we utilize the representation normalization method which aims at disentangling the correlations between features of encoded sentences. We find it also promising in eliminating the bias problem by providing isotropic data distribution. We further propose Kernel-Whitening, a Nystrom kernel approximation method to achieve more thorough debiasing on nonlinear spurious correlations. Our framework is end-to-end with similar time consumption to fine-tuning. Experiments show that Kernel-Whitening significantly improves the performance of BERT on out-of-distribution datasets while maintaining in-distribution accuracy.
doi: 10.18653/v1/2022.emnlp-main.275
__index_level_0__: 27,405

entry_type: inproceedings
citation_key: wang-etal-2022-unified
title: A Unified Positive-Unlabeled Learning Framework for Document-Level Relation Extraction with Different Levels of Labeling
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.276/
author: Wang, Ye and Liu, Xinxin and Hu, Wenxin and Zhang, Tao
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4123--4135
abstract:
Document-level relation extraction (RE) aims to identify relations between entities across multiple sentences. Most previous methods focused on document-level RE under full supervision. However, in real-world scenario, it is expensive and difficult to completely label all relations in a document because the number of entity pairs in document-level RE grows quadratically with the number of entities. To solve the common incomplete labeling problem, we propose a unified positive-unlabeled learning framework - shift and squared ranking loss positive-unlabeled (SSR-PU) learning. We use positive-unlabeled (PU) learning on document-level RE for the first time. Considering that labeled data of a dataset may lead to prior shift of unlabeled data, we introduce a PU learning under prior shift of training data. Also, using none-class score as an adaptive threshold, we propose squared ranking loss and prove its Bayesian consistency with multi-label ranking metrics. Extensive experiments demonstrate that our method achieves an improvement of about 14 F1 points relative to the previous baseline with incomplete labeling. In addition, it outperforms previous state-of-the-art results under both fully supervised and extremely unlabeled settings as well.
doi: 10.18653/v1/2022.emnlp-main.276
__index_level_0__: 27,406

entry_type: inproceedings
citation_key: shridhar-etal-2022-automatic
title: Automatic Generation of Socratic Subquestions for Teaching Math Word Problems
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.277/
author: Shridhar, Kumar and Macina, Jakub and El-Assady, Mennatallah and Sinha, Tanmay and Kapur, Manu and Sachan, Mrinmaya
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4136--4149
abstract:
Socratic questioning is an educational method that allows students to discover answers to complex problems by asking them a series of thoughtful questions. Generation of didactically sound questions is challenging, requiring understanding of the reasoning process involved in the problem. We hypothesize that such questioning strategy can not only enhance the human performance, but also assist the math word problem (MWP) solvers.In this work, we explore the ability of large language models (LMs) in generating sequential questions for guiding math word problem-solving. We propose various guided question generation schemes based on input conditioning and reinforcement learning.On both automatic and human quality evaluations, we find that LMs constrained with desirable question properties generate superior questions and improve the overall performance of a math word problem solver. We conduct a preliminary user study to examine the potential value of such question generation models in the education domain. Results suggest that the difficulty level of problems plays an important role in determining whether questioning improves or hinders human performance. We discuss the future of using such questioning strategies in education.
doi: 10.18653/v1/2022.emnlp-main.277
__index_level_0__: 27,407

entry_type: inproceedings
citation_key: zhang-etal-2022-mixture
title: Mixture of Attention Heads: Selecting Attention Heads Per Token
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.278/
author: Zhang, Xiaofeng and Shen, Yikang and Huang, Zeyu and Zhou, Jie and Rong, Wenge and Xiong, Zhang
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4150--4162
abstract:
Mixture-of-Experts (MoE) networks have been proposed as an efficient way to scale up model capacity and implement conditional computing. However, the study of MoE components mostly focused on the feedforward layer in Transformer architecture. This paper proposes the Mixture of Attention Heads (MoA), a new architecture that combines multi-head attention with the MoE mechanism. MoA includes a set of attention heads that each has its own set of parameters. Given an input, a router dynamically selects a subset of k attention heads per token. This conditional computation schema allows MoA to achieve stronger performance than the standard multi-head attention layer. Furthermore, the sparsely gated MoA can easily scale up the number of attention heads and the number of parameters while preserving computational efficiency. Despite performance improvements, MoA also automatically differentiates heads' utilities, providing a new perspective to discuss the model`s interpretability. We conducted experiments on several important tasks, including Machine Translation and Masked Language Modeling. Experiments have shown promising results on several tasks against strong baselines that involve large and very deep models.
doi: 10.18653/v1/2022.emnlp-main.278
__index_level_0__: 27,408

entry_type: inproceedings
citation_key: kurtic-etal-2022-optimal
title: The Optimal {BERT} Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.279/
author: Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4163--4181
abstract:
In this paper, we consider the problem of sparsifying BERT models, which are a key building block for natural language processing, in order to reduce their storage and computational cost. We introduce the Optimal BERT Surgeon (oBERT), an efficient and accurate pruning method based on approximate second-order information, which we show to yield state-of-the-art results in both stages of language tasks: pre-training and fine-tuning. Specifically, oBERT extends existing work on second-order pruning by allowing for pruning weight blocks, and is the first such method that is applicable at BERT scale. Second, we investigate compounding compression approaches to obtain highly compressed but accurate models for deployment on edge devices. These models significantly push boundaries of the current state-of-the-art sparse BERT models with respect to all metrics: model size, inference speed and task accuracy. For example, relative to the dense BERT-base, we obtain 10x model size compression with {\ensuremath{<}} 1{\%} accuracy drop, 10x CPU-inference speedup with {\ensuremath{<}} 2{\%} accuracy drop, and 29x CPU-inference speedup with {\ensuremath{<}} 7.5{\%} accuracy drop. Our code, fully integrated with Transformers and SparseML, is available at https://github.com/neuralmagic/sparseml/tree/main/research/optimal{\_}BERT{\_}surgeon{\_}oBERT.
doi: 10.18653/v1/2022.emnlp-main.279
__index_level_0__: 27,409

entry_type: inproceedings
citation_key: yoon-etal-2022-information
title: Information-Theoretic Text Hallucination Reduction for Video-grounded Dialogue
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.280/
author: Yoon, Sunjae and Yoon, Eunseop and Yoon, Hee Suk and Kim, Junyeong and Yoo, Chang
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4182--4193
abstract:
Video-grounded Dialogue (VGD) aims to decode an answer sentence to a question regarding a given video and dialogue context. Despite the recent success of multi-modal reasoning to generate answer sentences, existing dialogue systems still suffer from a text hallucination problem, which denotes indiscriminate text-copying from input texts without an understanding of the question. This is due to learning spurious correlations from the fact that answer sentences in the dataset usually include the words of input texts, thus the VGD system excessively relies on copying words from input texts by hoping those words to overlap with ground-truth texts. Hence, we design Text Hallucination Mitigating (THAM) framework, which incorporates Text Hallucination Regularization (THR) loss derived from the proposed information-theoretic text hallucination measurement approach. Applying THAM with current dialogue systems validates the effectiveness on VGD benchmarks (i.e., AVSD@DSTC7 and AVSD@DSTC8) and shows enhanced interpretability.
doi: 10.18653/v1/2022.emnlp-main.280
__index_level_0__: 27,410

entry_type: inproceedings
citation_key: guo-etal-2022-dsm
title: {DSM}: Question Generation over Knowledge Base via Modeling Diverse Subgraphs with Meta-learner
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.281/
author: Guo, Shasha and Zhang, Jing and Wang, Yanling and Zhang, Qianyi and Li, Cuiping and Chen, Hong
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4194--4207
abstract:
Existing methods on knowledge base question generation (KBQG) learn a one-size-fits-all model by training together all subgraphs without distinguishing the diverse semantics of subgraphs. In this work, we show that making use of the past experience on semantically similar subgraphs can reduce the learning difficulty and promote the performance of KBQG models. To achieve this, we propose a novel approach to model diverse subgraphs with meta-learner (DSM). Specifically, we devise a graph contrastive learning-based retriever to identify semantically similar subgraphs, so that we can construct the semantics-aware learning tasks for the meta-learner to learn semantics-specific and semantics-agnostic knowledge on and across these tasks. Extensive experiments on two widely-adopted benchmarks for KBQG show that DSM derives new state-of-the-art performance and benefits the question answering tasks as a means of data augmentation.
doi: 10.18653/v1/2022.emnlp-main.281
__index_level_0__: 27,411

entry_type: inproceedings
citation_key: zhang-etal-2022-relu
title: {R}el{U}-Net: Syntax-aware Graph {U}-Net for Relational Triple Extraction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.282/
author: Zhang, Yunqi and Chen, Yubo and Huang, Yongfeng
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4208--4217
abstract:
Relational triple extraction is a critical task for natural language processing. Existing methods mainly focused on capturing semantic information, but suffered from ignoring the syntactic structure of the sentence, which is proved in the relation classification task to contain rich relational information. This is due to the absence of entity locations, which is the prerequisite for pruning noisy edges from the dependency tree, when extracting relational triples. In this paper, we propose a unified framework to tackle this challenge and incorporate syntactic information for relational triple extraction. First, we propose to automatically contract the dependency tree into a core relational topology and eliminate redundant information with graph pooling operations. Then, we propose a symmetrical expanding path with graph unpooling operations to fuse the contracted core syntactic interactions with the original sentence context. We also propose a bipartite graph matching objective function to capture the reflections between the core topology and golden relational facts. Since our model shares similar contracting and expanding paths with encoder-decoder models like U-Net, we name our model as Relation U-Net (RelU-Net). We conduct experiments on several datasets and the results prove the effectiveness of our method.
doi: 10.18653/v1/2022.emnlp-main.282
__index_level_0__: 27,412

entry_type: inproceedings
citation_key: bassignana-etal-2022-evidence
title: Evidence {\ensuremath{>}} Intuition: Transferability Estimation for Encoder Selection
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.283/
author: Bassignana, Elisa and M{\"u}ller-Eberstein, Max and Zhang, Mike and Plank, Barbara
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4218--4227
abstract:
With the increase in availability of large pre-trained language models (LMs) in Natural Language Processing (NLP), it becomes critical to assess their fit for a specific target task a priori{---}as fine-tuning the entire space of available LMs is computationally prohibitive and unsustainable. However, encoder transferability estimation has received little to no attention in NLP. In this paper, we propose to generate quantitative evidence to predict which LM, out of a pool of models, will perform best on a target task without having to fine-tune all candidates. We provide a comprehensive study on LM ranking for 10 NLP tasks spanning the two fundamental problem types of classification and structured prediction. We adopt the state-of-the-art Logarithm of Maximum Evidence (LogME) measure from Computer Vision (CV) and find that it positively correlates with final LM performance in 94{\%} of the setups.In the first study of its kind, we further compare transferability measures with the de facto standard of human practitioner ranking, finding that evidence from quantitative metrics is more robust than pure intuition and can help identify unexpected LM candidates.
doi: 10.18653/v1/2022.emnlp-main.283
__index_level_0__: 27,413

entry_type: inproceedings
citation_key: martins-etal-2022-chunk
title: Chunk-based Nearest Neighbor Machine Translation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.284/
author: Martins, Pedro Henrique and Marinho, Zita and Martins, Andr{\'e} F. T.
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4228--4245
abstract:
Semi-parametric models, which augment generation with retrieval, have led to impressive results in language modeling and machine translation, due to their ability to retrieve fine-grained information from a datastore of examples. One of the most prominent approaches, kNN-MT, exhibits strong domain adaptation capabilities by retrieving tokens from domain-specific datastores (Khandelwal et al., 2021). However, kNN-MT requires an expensive retrieval operation for every single generated token, leading to a very low decoding speed (around 8 times slower than a parametric model). In this paper, we introduce a chunk-based kNN-MT model which retrieves chunks of tokens from the datastore, instead of a single token. We propose several strategies for incorporating the retrieved chunks into the generation process, and for selecting the steps at which the model needs to search for neighbors in the datastore. Experiments on machine translation in two settings, static and {\textquotedblleft}on-the-fly{\textquotedblright} domain adaptation, show that the chunk-based kNN-MT model leads to significant speed-ups (up to 4 times) with only a small drop in translation quality.
doi: 10.18653/v1/2022.emnlp-main.284
__index_level_0__: 27,414

entry_type: inproceedings
citation_key: kedia-etal-2022-fie
title: {F}i{E}: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.285/
author: Kedia, Akhil and Zaidi, Mohd Abbas and Lee, Haejun
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4246--4260
abstract:
Generative models have recently started to outperform extractive models in Open Domain Question Answering, largely by leveraging their decoder to attend over multiple encoded passages and combining their information. However, generative models tend to be larger than extractive models due to the need for a decoder, run slower during inference due to auto-regressive decoder beam search, and their generated output often suffers from hallucinations. We propose to extend transformer encoders with the ability to fuse information from multiple passages, using global representation to provide cross-sample attention over all tokens across samples. Furthermore, we propose an alternative answer span probability calculation to better aggregate answer scores in the global space of all samples. Using our proposed method, we outperform the current state-of-the-art method by 2.5 Exact Match score on the Natural Question dataset while using only 25{\%} of parameters and 35{\%} of the latency during inference, and 4.4 Exact Match on WebQuestions dataset. When coupled with synthetic data augmentation, we outperform larger models on the TriviaQA dataset as well. The latency and parameter savings of our method make it particularly attractive for open-domain question answering, as these models are often compute-intensive.
doi: 10.18653/v1/2022.emnlp-main.285
__index_level_0__: 27,415

entry_type: inproceedings
citation_key: pan-etal-2022-inductive
title: Inductive Relation Prediction with Logical Reasoning Using Contrastive Representations
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.286/
author: Pan, Yudai and Liu, Jun and Zhang, Lingling and Zhao, Tianzhe and Lin, Qika and Hu, Xin and Wang, Qianying
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4261--4274
abstract:
Relation prediction in knowledge graphs (KGs) aims at predicting missing relations in incomplete triples, whereas the dominant embedding paradigm has a restriction on handling unseen entities during testing. In the real-world scenario, the inductive setting is more common because entities in the training process are finite. Previous methods capture an inductive ability by implicit logic in KGs. However, it would be challenging to preciously acquire entity-independent relational semantics of compositional logic rules and to deal with the deficient supervision of logic caused by the scarcity of relational semantics. To this end, we propose a novel graph convolutional network (GCN)-based model LogCo with logical reasoning by contrastive representations. LogCo firstly extracts enclosing subgraphs and relational paths between two entities to supply the entity-independence. Then a contrastive strategy for relational path instances and the subgraph is proposed for the issue of deficient supervision. The contrastive representations are learned for a joint training regime. Finally, prediction results and logic rules for reasoning are attained. Comprehensive experiments on twelve inductive datasets show that LogCo achieves outstanding performance comparing with state-of-the-art inductive relation prediction baselines.
doi: 10.18653/v1/2022.emnlp-main.286
__index_level_0__: 27,416

entry_type: inproceedings
citation_key: li-etal-2022-improving-chinese
title: Improving {C}hinese Spelling Check by Character Pronunciation Prediction: The Effects of Adaptivity and Granularity
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.287/
author: Li, Jiahao and Wang, Quan and Mao, Zhendong and Guo, Junbo and Yang, Yanyan and Zhang, Yongdong
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4275--4286
abstract:
Chinese spelling check (CSC) is a fundamental NLP task that detects and corrects spelling errors in Chinese texts. As most of these spelling errors are caused by phonetic similarity, effectively modeling the pronunciation of Chinese characters is a key factor for CSC. In this paper, we consider introducing an auxiliary task of Chinese pronunciation prediction (CPP) to improve CSC, and, for the first time, systematically discuss the adaptivity and granularity of this auxiliary task. We propose SCOPE which builds upon a shared encoder two parallel decoders, one for the primary CSC task and the other for a fine-grained auxiliary CPP task, with a novel adaptive weighting scheme to balance the two tasks. In addition, we design a delicate iterative correction strategy for further improvements during inference. Empirical evaluation shows that SCOPE achieves new state-of-the-art on three CSC benchmarks, demonstrating the effectiveness and superiority of the auxiliary CPP task. Comprehensive ablation studies further verify the positive effects of adaptivity and granularity of the task.
doi: 10.18653/v1/2022.emnlp-main.287
__index_level_0__: 27,417

entry_type: inproceedings
citation_key: currey-etal-2022-mt
title: {MT}-{G}en{E}val: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.288/
author: Currey, Anna and Nadejde, Maria and Pappagari, Raghavendra Reddy and Mayer, Mia and Lauly, Stanislas and Niu, Xing and Hsu, Benjamin and Dinu, Georgiana
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 4287--4299
abstract:
As generic machine translation (MT) quality has improved, the need for targeted benchmarks that explore fine-grained aspects of quality has increased. In particular, gender accuracy in translation can have implications in terms of output fluency, translation accuracy, and ethics. In this paper, we introduce MT-GenEval, a benchmark for evaluating gender accuracy in translation from English into eight widely-spoken languages. MT-GenEval complements existing benchmarks by providing realistic, gender-balanced, counterfactual data in eight language pairs where the gender of individuals is unambiguous in the input segment, including multi-sentence segments requiring inter-sentential gender agreement. Our data and code is publicly available under a CC BY SA 3.0 license.
null
null
10.18653/v1/2022.emnlp-main.288
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,418
inproceedings
chen-etal-2022-span
A Span-level Bidirectional Network for Aspect Sentiment Triplet Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.289/
Chen, Yuqi and Keming, Chen and Sun, Xian and Zhang, Zequn
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4300--4309
Aspect Sentiment Triplet Extraction (ASTE) is a new fine-grained sentiment analysis task that aims to extract triplets of aspect terms, sentiments, and opinion terms from review sentences. Recently, span-level models have achieved gratifying results on the ASTE task by taking advantage of the predictions of all possible spans. Since considering all possible spans significantly increases the number of potential aspect and opinion candidates, it is crucial and challenging to efficiently extract the triplet elements among them. In this paper, we present a span-level bidirectional network which utilizes all possible spans as input and extracts triplets from spans bidirectionally. Specifically, we devise both an aspect decoder and an opinion decoder to decode the span representations and extract triplets from the aspect-to-opinion and opinion-to-aspect directions. With these two decoders complementing each other, the whole network can extract triplets from spans more comprehensively. Moreover, considering that mutual exclusion cannot be guaranteed between the spans, we design a similar-span separation loss to facilitate the downstream task of distinguishing the correct span by expanding the KL divergence of similar spans during the training process; in the inference process, we adopt an inference strategy to remove conflicting triplets from the results based on their confidence scores. Experimental results show that our framework not only significantly outperforms state-of-the-art methods, but also achieves better performance in predicting triplets with multi-token entities and in extracting triplets from sentences containing multiple triplets.
null
null
10.18653/v1/2022.emnlp-main.289
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,419
inproceedings
ahuja-etal-2022-calibration
On the Calibration of Massively Multilingual Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.290/
Ahuja, Kabir and Sitaram, Sunayana and Dandapat, Sandipan and Choudhury, Monojit
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4310--4323
Massively Multilingual Language Models (MMLMs) have recently gained popularity due to their surprising effectiveness in cross-lingual transfer. While there has been much work in evaluating these models for their performance on a variety of tasks and languages, little attention has been paid to how well calibrated these models are with respect to the confidence in their predictions. We first investigate the calibration of MMLMs in the zero-shot setting and observe a clear case of miscalibration in low-resource languages or those which are typologically diverse from English. Next, we empirically show that calibration methods like temperature scaling and label smoothing do reasonably well in improving calibration in the zero-shot scenario. We also find that few-shot examples in the language can further help reduce calibration errors, often substantially. Overall, our work contributes towards building more reliable multilingual models by highlighting the issue of their miscalibration, understanding what language and model-specific factors influence it, and pointing out strategies for improving it.
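Temperature scaling, one of the calibration methods the abstract mentions, has a compact generic form, so a small sketch may help readers unfamiliar with it. The NumPy snippet below fits a single temperature on held-out logits and measures expected calibration error before and after; the random logits and 3-way label set are toy stand-ins, not MMLM outputs, and the snippet illustrates the standard technique rather than the authors' code.

import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    # Average |accuracy - confidence| over equal-width confidence bins.
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    ece = 0.0
    for lo in np.linspace(0.0, 1.0, n_bins, endpoint=False):
        hi = lo + 1.0 / n_bins
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return ece

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Pick the temperature that minimises held-out negative log-likelihood.
    nll = lambda T: -np.log(softmax(logits, T)[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(grid, key=nll)

rng = np.random.default_rng(0)
logits = rng.normal(size=(500, 3)) * 3.0          # toy, over-confident logits
labels = rng.integers(0, 3, size=500)
T = fit_temperature(logits, labels)
print(expected_calibration_error(softmax(logits), labels),
      expected_calibration_error(softmax(logits, T), labels))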
null
null
10.18653/v1/2022.emnlp-main.290
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,420
inproceedings
hu-etal-2022-momentum
Momentum Contrastive Pre-training for Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.291/
Hu, Minda and Li, Muzhi and Wang, Yasheng and King, Irwin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4324--4330
Existing pre-training methods for extractive Question Answering (QA) generate cloze-like queries different from natural questions in syntax structure, which could overfit pre-trained models to simple keyword matching. In order to address this problem, we propose a novel Momentum Contrastive pRe-training fOr queStion anSwering (MCROSS) method for extractive QA. Specifically, MCROSS introduces a momentum contrastive learning framework to align the answer probability between cloze-like and natural query-passage sample pairs. Hence, the pre-trained models can better transfer the knowledge learned in cloze-like samples to answering natural questions. Experimental results on three benchmarking QA datasets show that our method achieves noticeable improvement compared with all baselines in both supervised and zero-shot scenarios.
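For readers unfamiliar with momentum contrastive learning, the sketch below shows only the generic MoCo-style machinery such pre-training builds on: a momentum-updated key encoder and an in-batch InfoNCE loss. The linear encoders and random features are placeholders, and the cloze-like versus natural query alignment that is specific to MCROSS is not reproduced here.

import torch, torch.nn as nn, torch.nn.functional as F

dim, m, tau = 128, 0.999, 0.07
encoder_q = nn.Sequential(nn.Linear(768, dim))      # query encoder (trained by backprop)
encoder_k = nn.Sequential(nn.Linear(768, dim))      # key encoder (momentum-updated copy)
encoder_k.load_state_dict(encoder_q.state_dict())
for p in encoder_k.parameters():
    p.requires_grad = False

def momentum_update():
    # k <- m * k + (1 - m) * q, applied after each optimisation step
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1 - m)

def contrastive_loss(x_q, x_k):
    q = F.normalize(encoder_q(x_q), dim=1)
    with torch.no_grad():
        k = F.normalize(encoder_k(x_k), dim=1)
    logits = q @ k.t() / tau                         # positives sit on the diagonal
    return F.cross_entropy(logits, torch.arange(q.size(0)))

x_q, x_k = torch.randn(8, 768), torch.randn(8, 768)  # stand-ins for paired encodings
contrastive_loss(x_q, x_k).backward()
momentum_update()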
null
null
10.18653/v1/2022.emnlp-main.291
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,421
inproceedings
zeldes-etal-2022-second
A Second Wave of {UD} {H}ebrew Treebanking and Cross-Domain Parsing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.292/
Zeldes, Amir and Howell, Nick and Ordan, Noam and Ben Moshe, Yifat
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4331--4344
Foundational Hebrew NLP tasks such as segmentation, tagging and parsing have relied to date on various versions of the Hebrew Treebank (HTB, Sima`an et al. 2001). However, the data in HTB, a single-source newswire corpus, is now over 30 years old, and does not cover many aspects of contemporary Hebrew on the web. This paper presents a new, freely available UD treebank of Hebrew stratified from a range of topics selected from Hebrew Wikipedia. In addition to introducing the corpus and evaluating the quality of its annotations, we deploy automatic validation tools based on grew (Guillaume, 2021), and conduct the first cross-domain parsing experiments in Hebrew. We obtain new state-of-the-art (SOTA) results on UD NLP tasks, using a combination of the latest language modelling and some incremental improvements to existing transformer-based approaches. We also release a new version of the UD HTB matching annotation scheme updates from our new corpus.
null
null
10.18653/v1/2022.emnlp-main.292
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,422
inproceedings
friedman-etal-2022-finding
Finding Dataset Shortcuts with Grammar Induction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.293/
Friedman, Dan and Wettig, Alexander and Chen, Danqi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4345--4363
Many NLP datasets have been found to contain shortcuts: simple decision rules that achieve surprisingly high accuracy. However, it is difficult to discover shortcuts automatically. Prior work on automatic shortcut detection has focused on enumerating features like unigrams or bigrams, which can find only low-level shortcuts, or relied on post-hoc model interpretability methods like saliency maps, which reveal qualitative patterns without a clear statistical interpretation. In this work, we propose to use probabilistic grammars to characterize and discover shortcuts in NLP datasets. Specifically, we use a context-free grammar to model patterns in sentence classification datasets and use a synchronous context-free grammar to model datasets involving sentence pairs. The resulting grammars reveal interesting shortcut features in a number of datasets, including both simple and high-level features, and automatically identify groups of test examples on which conventional classifiers fail. Finally, we show that the features we discover can be used to generate diagnostic contrast examples and incorporated into standard robust optimization methods to improve worst-group accuracy.
null
null
10.18653/v1/2022.emnlp-main.293
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,423
inproceedings
yu-etal-2022-retrieval
Retrieval Augmentation for Commonsense Reasoning: A Unified Approach
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.294/
Yu, Wenhao and Zhu, Chenguang and Zhang, Zhihan and Wang, Shuohang and Zhang, Zhuosheng and Fang, Yuwei and Jiang, Meng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4364--4377
A common thread of retrieval-augmented methods in the existing literature focuses on retrieving encyclopedic knowledge, such as Wikipedia, which facilitates well-defined entity and relation spaces that can be modeled. However, applying such methods to commonsense reasoning tasks faces two unique challenges, i.e., the lack of a general large-scale corpus for retrieval and a corresponding effective commonsense retriever. In this paper, we systematically investigate how to leverage commonsense knowledge retrieval to improve commonsense reasoning tasks. We propose a unified framework of retrieval-augmented commonsense reasoning (called RACo), including a newly constructed commonsense corpus with over 20 million documents and novel strategies for training a commonsense retriever. We conduct experiments on four different commonsense reasoning tasks. Extensive evaluation results show that our proposed RACo can significantly outperform other knowledge-enhanced method counterparts, achieving new SoTA performance on the CommonGen and CREAK leaderboards.
null
null
10.18653/v1/2022.emnlp-main.294
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,424
inproceedings
bai-etal-2022-open
Open World Classification with Adaptive Negative Samples
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.295/
Bai, Ke and Wang, Guoyin and Li, Jiwei and Park, Sunghyun and Lee, Sungjin and Xu, Puyang and Henao, Ricardo and Carin, Lawrence
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4378--4392
Open world classification is a task in natural language processing with key practical relevance and impact. Since the open or unknown category data only manifests in the inference phase, finding a model with a suitable decision boundary accommodating for the identification of known classes and discrimination of the open category is challenging. The performance of existing models is limited by the lack of effective open category data during the training stage or the lack of a good mechanism to learn appropriate decision boundaries. We propose an approach based on Adaptive Negative Samples (ANS) designed to generate effective synthetic open category samples in the training stage and without requiring any prior knowledge or external datasets. Empirically, we find a significant advantage in using auxiliary one-versus-rest binary classifiers, which effectively utilize the generated negative samples and avoid the complex threshold-seeking stage in previous works. Extensive experiments on three benchmark datasets show that ANS achieves significant improvements over state-of-the-art methods.
null
null
10.18653/v1/2022.emnlp-main.295
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,425
inproceedings
yang-etal-2022-re3
Re3: Generating Longer Stories With Recursive Reprompting and Revision
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.296/
Yang, Kevin and Tian, Yuandong and Peng, Nanyun and Klein, Dan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4393--4479
We consider the problem of automatically generating longer stories of over two thousand words. Compared to prior work on shorter stories, long-range plot coherence and relevance are more central challenges here. We propose the Recursive Reprompting and Revision framework (Re3) to address these challenges by (a) prompting a general-purpose language model to construct a structured overarching plan, and (b) generating story passages by repeatedly injecting contextual information from both the plan and current story state into a language model prompt. We then revise by (c) reranking different continuations for plot coherence and premise relevance, and finally (d) editing the best continuation for factual consistency. Compared to similar-length stories generated directly from the same base model, human evaluators judged substantially more of Re3`s stories as having a coherent overarching plot (by 14{\%} absolute increase), and relevant to the given initial premise (by 20{\%}).
null
null
10.18653/v1/2022.emnlp-main.296
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,426
inproceedings
tran-etal-2022-joint
Does Joint Training Really Help Cascaded Speech Translation?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.297/
Tran, Viet Anh Khoa and Thulke, David and Gao, Yingbo and Herold, Christian and Ney, Hermann
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4480--4487
Currently, in speech translation, the straightforward approach - cascading a recognition system with a translation system - delivers state-of-the-art results. However, fundamental challenges such as error propagation from the automatic speech recognition system still remain. To mitigate these problems, recently, people turn their attention to direct data and propose various joint training methods. In this work, we seek to answer the question of whether joint training really helps cascaded speech translation. We review recent papers on the topic and also investigate a joint training criterion by marginalizing the transcription posterior probabilities. Our findings show that a strong cascaded baseline can diminish any improvements obtained using joint training, and we suggest alternatives to joint training. We hope this work can serve as a refresher of the current speech translation landscape, and motivate research in finding more efficient and creative ways to utilize the direct data for speech translation.
null
null
10.18653/v1/2022.emnlp-main.297
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,427
inproceedings
adelani-etal-2022-masakhaner
{M}asakha{NER} 2.0: {A}frica-centric Transfer Learning for Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.298/
Adelani, David Ifeoluwa and Neubig, Graham and Ruder, Sebastian and Rijhwani, Shruti and Beukman, Michael and Palen-Michel, Chester and Lignos, Constantine and Alabi, Jesujoba O. and Muhammad, Shamsuddeen H. and Nabende, Peter and Dione, Cheikh M. Bamba and Bukula, Andiswa and Mabuya, Rooweither and Dossou, Bonaventure F. P. and Sibanda, Blessing and Buzaaba, Happy and Mukiibi, Jonathan and Kalipe, Godson and Mbaye, Derguene and Taylor, Amelia and Kabore, Fatoumata and Emezue, Chris Chinenye and Aremu, Anuoluwapo and Ogayo, Perez and Gitau, Catherine and Munkoh-Buabeng, Edwin and Memdjokam Koagne, Victoire and Tapo, Allahsera Auguste and Macucwa, Tebogo and Marivate, Vukosi and Mboning, Elvis and Gwadabe, Tajuddeen and Adewumi, Tosin and Ahia, Orevaoghene and Nakatumba-Nabende, Joyce and Mokono, Neo L. and Ezeani, Ignatius and Chukwuneke, Chiamaka and Adeyemi, Mofetoluwa and Hacheme, Gilles Q. and Abdulmumim, Idris and Ogundepo, Odunayo and Yousuf, Oreen and Moteu Ngoli, Tatiana and Klakow, Dietrich
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4488--4508
African languages are spoken by over a billion people, but they are under-represented in NLP research and development. Multiple challenges exist, including the limited availability of annotated training and evaluation datasets as well as the lack of understanding of which settings, languages, and recently proposed methods like cross-lingual transfer will be effective. In this paper, we aim to move towards solutions for these challenges, focusing on the task of named entity recognition (NER). We present the creation of the largest to-date human-annotated NER dataset for 20 African languages. We study the behaviour of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, empirically demonstrating that the choice of source transfer language significantly affects performance. While much previous work defaults to using English as the source language, our results show that choosing the best transfer language improves zero-shot F1 scores by an average of 14{\%} over 20 languages as compared to using English.
null
null
10.18653/v1/2022.emnlp-main.298
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,428
inproceedings
benotti-blackburn-2022-ethics
Ethics consideration sections in natural language processing papers
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.299/
Benotti, Luciana and Blackburn, Patrick
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4509--4516
In this paper, we present the results of a manual classification of all ethical consideration sections for ACL 2021. We also compare how many papers had an ethics consideration section per track and per world region in ACL 2021. We classified papers according to the ethical issues covered (research benefits, potential harms, and vulnerable groups affected) and whether the paper was marked as requiring ethics review by at least one reviewer. Moreover, we discuss recurring obstacles we have observed (highlighting some interesting texts we found along the way) and conclude with three suggestions. We think that this paper may be useful for anyone who needs to write {---} or review {---} an ethics section and would like to get an overview of what others have done.
null
null
10.18653/v1/2022.emnlp-main.299
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,429
inproceedings
wu-etal-2022-continued
Continued Pretraining for Better Zero- and Few-Shot Promptability
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.300/
Wu, Zhaofeng and Logan IV, Robert L and Walsh, Pete and Bhagia, Akshita and Groeneveld, Dirk and Singh, Sameer and Beltagy, Iz
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4517--4531
Recently introduced language model prompting methods can achieve high accuracy in zero- and few-shot settings while requiring few to no learned task-specific parameters. Nevertheless, these methods still often trail behind full model finetuning. In this work, we investigate if a dedicated continued pretraining stage could improve {\textquotedblleft}promptability{\textquotedblright}, i.e., zero-shot performance with natural language prompts or few-shot performance with prompt tuning. We reveal settings where existing continued pretraining methods lack promptability. We also identify current methodological gaps, which we fill with thorough large-scale experiments. We demonstrate that a simple recipe, continued pretraining that incorporates a trainable prompt during multi-task learning, leads to improved promptability in both zero- and few-shot settings compared to existing methods, up to 31{\%} relative. On the other hand, we find that continued pretraining using MAML-style meta-learning, a method that directly optimizes few-shot promptability, yields subpar performance. We validate our findings with two prompt tuning methods, and, based on our results, we provide concrete recommendations to optimize promptability for different use cases.
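A minimal sketch of the soft-prompt (prompt-tuning) mechanism that the recipe keeps trainable during multi-task continued pretraining: a small matrix of prompt embeddings is prepended to the model's input embeddings while the backbone stays frozen. The prompt length and hidden size below are illustrative, not values from the paper.

import torch, torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len=20, hidden=768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_embeds):                     # (batch, seq, hidden)
        prompt = self.prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)  # (batch, prompt_len + seq, hidden)

embeds = torch.randn(4, 32, 768)                         # stand-in for token embeddings
print(SoftPrompt()(embeds).shape)                        # torch.Size([4, 52, 768])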
null
null
10.18653/v1/2022.emnlp-main.300
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,430
inproceedings
kuznia-etal-2022-less
Less is More: Summary of Long Instructions is Better for Program Synthesis
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.301/
Kuznia, Kirby and Mishra, Swaroop and Parmar, Mihir and Baral, Chitta
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4532--4552
Despite the success of large pre-trained language models (LMs) such as Codex, they show below-par performance on the larger and more complicated programming-related questions. We show that LMs benefit from the summarized version of complicated questions. Our findings show that superfluous information often present in problem descriptions, such as human characters, background stories, and names (which are included to help humans in understanding a task), does not help models in understanding a task. To this extent, we create a meta-dataset from the frequently used APPS dataset and the newly created CodeContests dataset for the program synthesis task. Our meta-dataset consists of human and synthesized summaries of the long and complicated programming questions. Experimental results on Codex show that our proposed approach outperforms the baseline by 8.13{\%} on the APPS dataset and 11.88{\%} on the CodeContests dataset on average in terms of strict accuracy. Our analysis shows that summaries significantly improve performance for introductory (9.86{\%}) and interview (11.48{\%}) related programming questions. However, it shows improvement by a small margin (about 2{\%}) for competitive programming questions, implying the scope for future research direction.
null
null
10.18653/v1/2022.emnlp-main.301
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,431
inproceedings
patel-etal-2022-question
Is a Question Decomposition Unit All We Need?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.302/
Patel, Pruthvi and Mishra, Swaroop and Parmar, Mihir and Baral, Chitta
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4553--4569
Large Language Models (LMs) have achieved state-of-the-art performance on many Natural Language Processing (NLP) benchmarks. With the growing number of new benchmarks, we build bigger and more complex LMs. However, building new LMs may not be an ideal option owing to the cost, time and environmental impact associated with it. We explore an alternative route: can we modify data by expressing it in terms of the model`s strengths, so that a question becomes easier for models to answer? We investigate if humans can decompose a hard question into a set of simpler questions that are relatively easier for models to solve. We analyze a range of datasets involving various forms of reasoning and find that it is indeed possible to significantly improve model performance (24{\%} for GPT3 and 29{\%} for RoBERTa-SQuAD along with a symbolic calculator) via decomposition. Our approach provides a viable option to involve people in NLP research in a meaningful way. Our findings indicate that Human-in-the-loop Question Decomposition (HQD) can potentially provide an alternate path to building large LMs.
null
null
10.18653/v1/2022.emnlp-main.302
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,432
inproceedings
ghazvininejad-etal-2022-discourse
Discourse-Aware Soft Prompting for Text Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.303/
Ghazvininejad, Marjan and Karpukhin, Vladimir and Gor, Vera and Celikyilmaz, Asli
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4570--4589
Current efficient fine-tuning methods (e.g., adapters, prefix-tuning, etc.) have optimized conditional text generation via training a small set of extra parameters of the neural language model, while freezing the rest for efficiency. While showing strong performance on some generation tasks, they don`t generalize across all generation tasks. We show that soft-prompt based conditional text generation can be improved with simple and efficient methods that simulate modeling the discourse structure of human written text. We investigate two design choices: First, we apply hierarchical blocking on the prefix parameters to simulate a higher-level discourse structure of human written text. Second, we apply attention sparsity on the prefix parameters at different layers of the network and learn sparse transformations on the softmax-function. We show that structured design of prefix parameters yields more coherent, faithful and relevant generations than the baseline prefix-tuning on all generation tasks.
null
null
10.18653/v1/2022.emnlp-main.303
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,433
inproceedings
sun-etal-2022-expunations
{E}x{PUN}ations: Augmenting Puns with Keywords and Explanations
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.304/
Sun, Jiao and Narayan-Chen, Anjali and Oraby, Shereen and Cervone, Alessandra and Chung, Tagyoung and Huang, Jing and Liu, Yang and Peng, Nanyun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4590--4605
The tasks of humor understanding and generation are challenging and subjective even for humans, requiring commonsense and real-world knowledge to master. Puns, in particular, add the challenge of fusing that knowledge with the ability to interpret lexical-semantic ambiguity. In this paper, we present the ExPUNations (ExPUN) dataset, in which we augment an existing dataset of puns with detailed crowdsourced annotations of keywords denoting the most distinctive words that make the text funny, pun explanations describing why the text is funny, and fine-grained funniness ratings. This is the first humor dataset with such extensive and fine-grained annotations specifically for puns. Based on these annotations, we propose two tasks: explanation generation to aid with pun classification and keyword-conditioned pun generation, to challenge the current state-of-the-art natural language understanding and generation models' ability to understand and generate humor. We showcase that the annotated keywords we collect are helpful for generating better novel humorous texts in human evaluation, and that our natural language explanations can be leveraged to improve both the accuracy and robustness of humor classifiers.
null
null
10.18653/v1/2022.emnlp-main.304
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,434
inproceedings
song-etal-2022-sling
{SLING}: {S}ino Linguistic Evaluation of Large Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.305/
Song, Yixiao and Krishna, Kalpesh and Bhatt, Rajesh and Iyyer, Mohit
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4606--4634
To understand what kinds of linguistic knowledge are encoded by pretrained Chinese language models (LMs), we introduce the benchmark of Sino LINGuistics (SLING), which consists of 38K minimal sentence pairs in Mandarin Chinese grouped into 9 high-level linguistic phenomena. Each pair demonstrates the acceptability contrast of a specific syntactic or semantic phenomenon (e.g., The keys are lost vs. The keys is lost), and an LM should assign lower perplexity to the acceptable sentence. In contrast to the CLiMP dataset (Xiang et al., 2021), which also contains Chinese minimal pairs and was created by translating the vocabulary of the English BLiMP dataset, the minimal pairs in SLING are derived primarily by applying syntactic and lexical transformations to naturally-occurring, linguist-annotated sentences from the Chinese Treebank 9.0, thus addressing severe issues in CLiMP`s data generation process. We test 18 publicly available pretrained monolingual (e.g., BERT-base-zh, CPM) and multi-lingual (e.g., mT5, XLM) language models on SLING. Our experiments show that the average accuracy for LMs is far below human performance (69.7{\%} vs. 97.1{\%}), while BERT-base-zh achieves the highest accuracy (84.8{\%}) of all tested LMs, even much larger ones. Additionally, we find that most LMs have a strong gender and number (singular/plural) bias, and they perform better on local phenomena than hierarchical ones.
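The minimal-pair protocol is easy to state in code: the LM should give the acceptable sentence of each pair a higher total log-probability (lower perplexity). The sketch below scores one English pair with GPT-2 from the transformers library purely to make the scoring rule concrete; SLING itself evaluates Chinese LMs on Chinese pairs.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)                 # loss = mean NLL over predicted tokens
    return -out.loss.item() * (ids.size(1) - 1)   # total log-probability of the sentence

good, bad = "The keys are lost.", "The keys is lost."
print(sentence_logprob(good) > sentence_logprob(bad))    # the acceptable variant should win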
null
null
10.18653/v1/2022.emnlp-main.305
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,435
inproceedings
sun-etal-2022-context
Context-Situated Pun Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.306/
Sun, Jiao and Narayan-Chen, Anjali and Oraby, Shereen and Gao, Shuyang and Chung, Tagyoung and Huang, Jing and Liu, Yang and Peng, Nanyun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4635--4648
Previous work on pun generation commonly begins with a given pun word (a pair of homophones for heterographic pun generation and a polyseme for homographic pun generation) and seeks to generate an appropriate pun. While this may enable efficient pun generation, we believe that a pun is most entertaining if it fits appropriately within a given context, e.g., a given situation or dialogue. In this work, we propose a new task, context-situated pun generation, where a specific context represented by a set of keywords is provided, and the task is to first identify suitable pun words that are appropriate for the context, then generate puns based on the context keywords and the identified pun words. We collect a new dataset, CUP (Context-sitUated Pun), containing 4.5k tuples of context words and pun pairs. Based on the new data and setup, we propose a pipeline system for context-situated pun generation, including a pun word retrieval module that identifies suitable pun words for a given context, and a pun generation module that generates puns from context keywords and pun words. Human evaluation shows that 69{\%} of our top retrieved pun words can be used to generate context-situated puns, and our generation module yields successful puns 31{\%} of the time given a plausible tuple of context words and pun pair, almost tripling the yield of a state-of-the-art pun generation model. With an end-to-end evaluation, our pipeline system with the top-1 retrieved pun pair for a given context can generate successful puns 40{\%} of the time, better than all other modeling variations but 32{\%} lower than the human success rate. This highlights the difficulty of the task, and encourages more research in this direction.
null
null
10.18653/v1/2022.emnlp-main.306
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,436
inproceedings
du-ji-2022-retrieval
Retrieval-Augmented Generative Question Answering for Event Argument Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.307/
Du, Xinya and Ji, Heng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4649--4666
Event argument extraction has long been studied as a sequential prediction problem with extractive-based methods, tackling each argument in isolation. Although recent work proposes generation-based methods to capture cross-argument dependency, they require generating and post-processing a complicated target sequence (template). Motivated by these observations and by recent pretrained language models' capabilities of learning from demonstrations, we propose a retrieval-augmented generative QA model (R-GQA) for event argument extraction. It retrieves the most similar QA pair and augments it as a prompt to the current example`s context, then decodes the arguments as answers. Our approach substantially outperforms prior methods across various settings (i.e. fully supervised, domain transfer, and few-shot learning). Finally, we propose a clustering-based sampling strategy (JointEnc) and conduct a thorough analysis of how different strategies influence the few-shot learning performances.
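As a rough illustration of the retrieve-then-prompt idea (not the paper's dense retriever), the sketch below picks the most similar annotated demonstration with a simple TF-IDF similarity and prepends it to the current example's prompt before decoding the answer. The demonstrations and field layout are invented for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

demos = [  # hypothetical annotated QA demonstrations
    {"context": "The merger was announced by Acme Corp.", "q": "Who announced the merger?", "a": "Acme Corp"},
    {"context": "The flood destroyed the bridge in May.", "q": "When was the bridge destroyed?", "a": "May"},
]
vec = TfidfVectorizer().fit([d["context"] for d in demos])

def build_prompt(context, question):
    sims = cosine_similarity(vec.transform([context]),
                             vec.transform([d["context"] for d in demos]))[0]
    d = demos[sims.argmax()]                      # most similar demonstration
    return (f"Context: {d['context']}\nQuestion: {d['q']}\nAnswer: {d['a']}\n\n"
            f"Context: {context}\nQuestion: {question}\nAnswer:")

print(build_prompt("The takeover was announced by Globex.", "Who announced the takeover?"))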
null
null
10.18653/v1/2022.emnlp-main.307
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,437
inproceedings
kreiss-etal-2022-concadia
Concadia: Towards Image-Based Text Generation with a Purpose
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.308/
Kreiss, Elisa and Fang, Fei and Goodman, Noah and Potts, Christopher
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4667--4684
Current deep learning models often achieve excellent results on benchmark image-to-text datasets but fail to generate texts that are useful in practice. We argue that to close this gap, it is vital to distinguish descriptions from captions based on their distinct communicative roles. Descriptions focus on visual features and are meant to replace an image (often to increase accessibility), whereas captions appear alongside an image to supply additional information. To motivate this distinction and help people put it into practice, we introduce the publicly available Wikipedia-based dataset Concadia consisting of 96,918 images with corresponding English-language descriptions, captions, and surrounding context. Using insights from Concadia, models trained on it, and a preregistered human-subjects experiment with human- and model-generated texts, we characterize the commonalities and differences between descriptions and captions. In addition, we show that, for generating both descriptions and captions, it is useful to augment image-to-text models with representations of the textual context in which the image appeared.
null
null
10.18653/v1/2022.emnlp-main.308
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,438
inproceedings
kreiss-etal-2022-context
Context Matters for Image Descriptions for Accessibility: Challenges for Referenceless Evaluation Metrics
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.309/
Kreiss, Elisa and Bennett, Cynthia and Hooshmand, Shayan and Zelikman, Eric and Ringel Morris, Meredith and Potts, Christopher
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4685--4697
Few images on the Web receive alt-text descriptions that would make them accessible to blind and low vision (BLV) users. Image-based NLG systems have progressed to the point where they can begin to address this persistent societal problem, but these systems will not be fully successful unless we evaluate them on metrics that guide their development correctly. Here, we argue against current referenceless metrics {--} those that don`t rely on human-generated ground-truth descriptions {--} on the grounds that they do not align with the needs of BLV users. The fundamental shortcoming of these metrics is that they do not take context into account, whereas contextual information is highly valued by BLV users. To substantiate these claims, we present a study with BLV participants who rated descriptions along a variety of dimensions. An in-depth analysis reveals that the lack of context-awareness makes current referenceless metrics inadequate for advancing image accessibility. As a proof-of-concept, we provide a contextual version of the referenceless metric CLIPScore which begins to address the disconnect to the BLV data.
null
null
10.18653/v1/2022.emnlp-main.309
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,439
inproceedings
huang-etal-2022-metalogic
{M}eta{L}ogic: Logical Reasoning Explanations with Fine-Grained Structure
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.310/
Huang, Yinya and Zhang, Hongming and Hong, Ruixin and Liang, Xiaodan and Zhang, Changshui and Yu, Dong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4698--4724
In this paper, we propose a comprehensive benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios. Current explanation datasets often employ synthetic data with simple reasoning structures. Therefore, it cannot express more complex reasoning processes, such as the rebuttal to a reasoning step and the degree of certainty of the evidence. To this end, we propose a comprehensive logical reasoning explanation form. Based on the multi-hop chain of reasoning, the explanation form includes three main components: (1) The condition of rebuttal that the reasoning node can be challenged; (2) Logical formulae that uncover the internal texture of reasoning nodes; (3) Reasoning strength indicated by degrees of certainty. The fine-grained structure conforms to the real logical reasoning scenario, better fitting the human cognitive process but, simultaneously, is more challenging for the current models. We evaluate the current best models' performance on this new explanation form. The experimental results show that generating reasoning graphs remains a challenging task for current models, even with the help of giant pre-trained language models.
null
null
10.18653/v1/2022.emnlp-main.310
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,440
inproceedings
qian-dou-2022-explicit
Explicit Query Rewriting for Conversational Dense Retrieval
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.311/
Qian, Hongjin and Dou, Zhicheng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4725--4737
In a conversational search scenario, a query might be context-dependent because some words are referred to previous expressions or omitted. Previous works tackle the issue by either reformulating the query into a self-contained query (query rewriting) or learning a contextualized query embedding from the query context (context modelling). In this paper, we propose a model CRDR that can perform query rewriting and context modelling in a unified framework in which the query rewriting`s supervision signals further enhance the context modelling. Instead of generating a new query, CRDR only performs necessary modifications on the original query, which improves both accuracy and efficiency of query rewriting. In the meantime, the query rewriting benefits the context modelling by explicitly highlighting relevant terms in the query context, which improves the quality of the learned contextualized query embedding. To verify the effectiveness of CRDR, we perform comprehensive experiments on TREC CAsT-19 and TREC CAsT-20 datasets, and the results show that our method outperforms all baseline models in terms of both quality of query rewriting and quality of context-aware ranking.
null
null
10.18653/v1/2022.emnlp-main.311
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,441
inproceedings
yin-shang-2022-efficient
Efficient Nearest Neighbor Emotion Classification with {BERT}-whitening
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.312/
Yin, Wenbiao and Shang, Lin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4738--4745
Retrieval-based methods have been proven effective in many NLP tasks. Previous methods use representations from the pre-trained model for similarity search directly. However, the sentence representations from a pre-trained model like BERT perform poorly in retrieving semantically similar sentences, resulting in poor performance of the retrieval-based methods. In this paper, we propose kNN-EC, a simple and efficient non-parametric emotion classification (EC) method using nearest neighbor retrieval. We use BERT-whitening to get better sentence semantics, ensuring that nearest neighbor retrieval works. Meanwhile, BERT-whitening can also reduce the memory storage of the datastore and accelerate retrieval speed, solving the efficiency problem of the previous methods. On average, kNN-EC improves the pre-trained model by 1.17 F1-macro on two emotion classification datasets.
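BERT-whitening has a closed form (center the sentence vectors, then rotate and rescale them with the inverse square root of their covariance), so the whole retrieve-and-vote pipeline fits in a short sketch. Random vectors stand in for BERT sentence embeddings below, and the datastore and label setup are illustrative only, not the authors' code.

import numpy as np
from collections import Counter

def whitening(X, k=None):
    # mu and W such that ((X - mu) @ W) has (approximately) identity covariance.
    mu = X.mean(axis=0, keepdims=True)
    U, S, _ = np.linalg.svd(np.cov((X - mu).T))
    W = U @ np.diag(1.0 / np.sqrt(S))
    return mu, (W[:, :k] if k is not None else W)   # optional dimensionality reduction

def knn_classify(query, store, labels, mu, W, k=5):
    q = (query - mu) @ W
    Z = (store - mu) @ W
    q, Z = q / np.linalg.norm(q), Z / np.linalg.norm(Z, axis=1, keepdims=True)
    top = np.argsort(-(Z @ q.ravel()))[:k]          # cosine nearest neighbours
    return Counter(labels[i] for i in top).most_common(1)[0][0]

rng = np.random.default_rng(0)
store = rng.normal(size=(200, 64))                  # stand-in sentence embeddings
labels = list(rng.integers(0, 4, size=200))         # stand-in emotion labels
mu, W = whitening(store)
print(knn_classify(store[3:4], store, labels, mu, W))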
null
null
10.18653/v1/2022.emnlp-main.312
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,442
inproceedings
xia-etal-2022-fastclass
{F}ast{C}lass: A Time-Efficient Approach to Weakly-Supervised Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.313/
Xia, Tingyu and Wang, Yue and Tian, Yuan and Chang, Yi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4746--4758
Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data. Recent research shows that keyword-driven methods can achieve state-of-the-art performance on various tasks. However, these methods not only rely on carefully-crafted class descriptions to obtain class-specific keywords but also require a substantial amount of unlabeled data and take a long time to train. This paper proposes FastClass, an efficient weakly-supervised classification approach. It uses dense text representation to retrieve class-relevant documents from an external unlabeled corpus and selects an optimal subset to train a classifier. Compared to keyword-driven methods, our approach is less reliant on initial class descriptions as it no longer needs to expand each class description into a set of class-specific keywords. Experiments on a wide range of classification tasks show that the proposed approach frequently outperforms keyword-driven models in terms of classification accuracy and often enjoys orders-of-magnitude faster training speed.
null
null
10.18653/v1/2022.emnlp-main.313
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,443
inproceedings
lin-etal-2022-neural
Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via Compositional Uncertainty Quantification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.314/
Lin, Zi and Liu, Jeremiah and Shang, Jingbo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4759--4776
Pre-trained seq2seq models excel at graph semantic parsing with rich annotated data, but generalize worse to out-of-distribution (OOD) and long-tail examples. In comparison, symbolic parsers under-perform on population-level metrics, but exhibit unique strength in OOD and tail generalization. In this work, we study a compositionality-aware approach to neural-symbolic inference informed by model confidence, performing fine-grained neural-symbolic reasoning at subgraph level (i.e., nodes and edges) and precisely targeting subgraph components with high uncertainty in the neural parser. As a result, the method combines the distinct strength of the neural and symbolic approaches in capturing different aspects of the graph prediction, leading to well-rounded generalization performance both across domains and in the tail. We empirically investigate the approach in the English Resource Grammar (ERG) parsing problem on a diverse suite of standard in-domain and seven OOD corpora. Our approach leads to 35.26{\%} and 35.60{\%} error reduction in aggregated SMATCH score over neural and symbolic approaches respectively, and 14{\%} absolute accuracy gain in key tail linguistic categories over the neural model, outperforming prior state-of-the-art methods that do not account for compositionality or uncertainty.
null
null
10.18653/v1/2022.emnlp-main.314
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,444
inproceedings
xia-etal-2022-speaker
A Speaker-Aware Co-Attention Framework for Medical Dialogue Information Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.315/
Xia, Yuan and Shi, Zhenhui and Zhou, Jingbo and Xu, Jiayu and Lu, Chao and Yang, Yehui and Wang, Lei and Huang, Haifeng and Zhang, Xia and Liu, Junwei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4777--4786
With the development of medical digitization, the extraction and structuring of Electronic Medical Records (EMRs) have become challenging but fundamental tasks. How to accurately and automatically extract structured information from medical dialogues is especially difficult because the information needs to be inferred from complex interactions between the doctor and the patient. To this end, in this paper, we propose a speaker-aware co-attention framework for medical dialogue information extraction. To better utilize the pre-trained language representation model to perceive the semantics of the utterance and the candidate item, we develop a speaker-aware dialogue encoder with multi-task learning, which takes the speaker`s identity into account. To deal with complex interactions between different utterances and the correlations between utterances and candidate items, we propose a co-attention fusion network to aggregate the utterance information. We evaluate our framework on the public medical dialogue extraction datasets to demonstrate the superiority of our method, which can outperform the state-of-the-art methods by a large margin. Codes will be publicly available upon acceptance.
null
null
10.18653/v1/2022.emnlp-main.315
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,445
inproceedings
wu-etal-2022-towards
Towards Interactivity and Interpretability: A Rationale-based Legal Judgment Prediction Framework
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.316/
Wu, Yiquan and Liu, Yifei and Lu, Weiming and Zhang, Yating and Feng, Jun and Sun, Changlong and Wu, Fei and Kuang, Kun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4787--4799
Legal judgment prediction (LJP) is a fundamental task in legal AI, which aims to assist the judge to hear the case and determine the judgment. The legal judgment usually consists of the law article, charge, and term of penalty. In the real trial scenario, the judge usually makes the decision step-by-step: first concludes the rationale according to the case`s facts and then determines the judgment. Recently, many models have been proposed and made tremendous progress in LJP, but most of them adopt an end-to-end manner that cannot be manually intervened by the judge for practical use. Moreover, existing models lack interpretability due to the neglect of rationale in the prediction process. Following the judge`s real trial logic, in this paper, we propose a novel Rationale-based Legal Judgment Prediction (RLJP) framework. In the RLJP framework, the LJP process is split into two steps. In the first phase, the model generates the rationales according to the fact description. Then it predicts the judgment based on the fact and the generated rationales. Extensive experiments on a real-world dataset show RLJP achieves the best results compared to the state-of-the-art models. Meanwhile, the proposed framework provides good interactivity and interpretability which enables practical use.
null
null
10.18653/v1/2022.emnlp-main.316
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,446
inproceedings
zhu-etal-2022-relclip
{R}el{CLIP}: Adapting Language-Image Pretraining for Visual Relationship Detection via Relational Contrastive Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.317/
Zhu, Yi and Zhu, Zhaoqing and Lin, Bingqian and Liang, Xiaodan and Zhao, Feng and Liu, Jianzhuang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4800--4810
Conventional visual relationship detection models only use the numeric ids of relation labels for training, but ignore the semantic correlation between the labels, which leads to severe training biases and harms the generalization ability of representations. In this paper, we introduce compact language information of relation labels for regularizing the representation learning of visual relations. Specifically, we propose a simple yet effective visual Relationship prediction framework that transfers natural language knowledge learned from Contrastive Language-Image Pre-training (CLIP) models to enhance the relationship prediction, termed RelCLIP. Benefiting from the powerful visual-semantic alignment ability of CLIP at image level, we introduce a novel Relational Contrastive Learning (RCL) approach which explores relation-level visual-semantic alignment via learning to match cross-modal relational embeddings. By collaboratively learning the semantic coherence and discrepancy from relation triplets, the model can generate more discriminative and robust representations. Experimental results on the Visual Genome dataset show that RelCLIP achieves significant improvements over strong baselines under full (provide accurate labels) and distant supervision (provide noise labels), demonstrating its powerful generalization ability in learning relationship representations. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/RelCLIP.
null
null
10.18653/v1/2022.emnlp-main.317
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,447
inproceedings
zheng-etal-2022-candidate
Candidate Soups: Fusing Candidate Results Improves Translation Quality for Non-Autoregressive Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.318/
Zheng, Huanran and Zhu, Wei and Wang, Pengfei and Wang, Xiaoling
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4811--4823
Non-autoregressive translation (NAT) model achieves a much faster inference speed than the autoregressive translation (AT) model because it can simultaneously predict all tokens during inference. However, its translation quality suffers from degradation compared to AT. And existing NAT methods only focus on improving the NAT model`s performance but do not fully utilize it. In this paper, we propose a simple but effective method called {\textquotedblleft}Candidate Soups,{\textquotedblright} which can obtain high-quality translations while maintaining the inference speed of NAT models. Unlike previous approaches that pick the individual result and discard the remainders, Candidate Soups (CDS) can fully use the valuable information in the different candidate translations through model uncertainty. Extensive experiments on two benchmarks (WMT`14 EN{--}DE and WMT`16 EN{--}RO) demonstrate the effectiveness and generality of our proposed method, which can significantly improve the translation quality of various base models. More notably, our best variant outperforms the AT model on three translation tasks with 7.6{\texttimes} speedup.
null
null
10.18653/v1/2022.emnlp-main.318
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,448
inproceedings
xu-etal-2022-evaluating
Evaluating Parameter Efficient Learning for Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.319/
Xu, Peng and Patwary, Mostofa and Prabhumoye, Shrimai and Adams, Virginia and Prenger, Ryan and Ping, Wei and Lee, Nayeon and Shoeybi, Mohammad and Catanzaro, Bryan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4824--4833
Parameter efficient learning methods (PERMs) have recently gained significant attention as they provide an efficient way for pre-trained language models (PLMs) to adapt to a downstream task. However, these conclusions are mostly drawn from in-domain evaluations over the full training set. In this paper, we present comparisons between PERMs and finetuning from three new perspectives: (1) the effect of sample and model size on in-domain evaluations, (2) generalization to unseen domains and new datasets, and (3) the faithfulness of generations. Our results show that for in-domain settings (a) there is a cross point of sample size for which PERMs will perform better than finetuning when training with fewer samples, and (b) larger PLMs have larger cross points. For cross-domain and cross-dataset cases, we show that (a) Adapter (Houlsby et al., 2019) performs the best amongst all the PERMs studied here, and (b) it outperforms finetuning if the task dataset is below a certain size. We also compare the faithfulness of generations and show that PERMs can achieve a better faithfulness score than finetuning, especially for small training sets, by as much as 6{\%}. Finally, we apply Adapter to MT-NLG 530b (Smith et al., 2022) and achieve new state-of-the-art results on Xsum (Narayan et al., 2018) for all ROUGE scores (ROUGE-1 49.17, ROUGE-2 27.20, ROUGE-L 40.98).
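Since Adapter is singled out above as the strongest of the PERMs compared, a minimal sketch of the standard bottleneck adapter design (Houlsby et al., 2019) may be useful: a down-projection, nonlinearity, and zero-initialized up-projection added residually to the hidden states of a frozen backbone. The hidden and bottleneck sizes below are illustrative.

import torch, torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)     # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, h):                  # h: hidden states from a frozen backbone layer
        return h + self.up(self.act(self.down(h)))

h = torch.randn(4, 16, 768)                # stand-in transformer hidden states
print(Adapter()(h).shape)                  # torch.Size([4, 16, 768])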
null
null
10.18653/v1/2022.emnlp-main.319
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,449
inproceedings
yuan-etal-2022-mcqueen
{M}c{Q}ueen: a Benchmark for Multimodal Conversational Query Rewrite
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.320/
Yuan, Yifei and Shi, Chen and Wang, Runze and Chen, Liyi and Jiang, Feijun and You, Yuan and Lam, Wai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4834--4844
The task of query rewrite aims to convert an in-context query to its fully-specified version, where ellipsis and coreference are completed and referred back to according to the history context. Although much progress has been made, less effort has been devoted to real-scenario conversations that involve drawing information from more than one modality. In this paper, we propose the task of multimodal conversational query rewrite (McQR), which performs query rewrite under the multimodal visual conversation setting. We collect a large-scale dataset named McQueen based on manual annotation, which contains 15k visual conversations and over 80k queries, each associated with a fully-specified rewrite version. In addition, for entities appearing in the rewrite, we provide the corresponding image box annotation. We then use the McQueen dataset to benchmark a state-of-the-art method for effectively tackling the McQR task, which is based on a multimodal pre-trained model with a pointer generator. Extensive experiments are performed to demonstrate the effectiveness of our model on this task.
null
null
10.18653/v1/2022.emnlp-main.320
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,450
inproceedings
han-shareghi-2022-self
Self-supervised Graph Masking Pre-training for Graph-to-Text Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.321/
Han, Jiuzhou and Shareghi, Ehsan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4845--4853
Large-scale pre-trained language models (PLMs) have advanced Graph-to-Text (G2T) generation by processing the linearised version of a graph. However, the linearisation is known to ignore the structural information. Additionally, PLMs are typically pre-trained on free text, which introduces domain mismatch between pre-training and downstream G2T generation tasks. To address these shortcomings, we propose graph masking pre-training strategies that neither require supervision signals nor adjust the architecture of the underlying pre-trained encoder-decoder model. When used with a pre-trained T5, our approach achieves new state-of-the-art results on the WebNLG+2020 and EventNarrative G2T generation datasets. Our method also proves to be very effective in the low-resource setting.
null
null
10.18653/v1/2022.emnlp-main.321
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,451
inproceedings
yang-ma-2022-improving
Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.322/
Yang, Chenghao and Ma, Xuezhe
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4854--4859
Fine-tuning over large pretrained language models (PLMs) has established many state-of-the-art results. Despite its superior performance, such fine-tuning can be unstable, resulting in significant variance in performance and potential risks for practical applications. Previous works have attributed such instability to the catastrophic forgetting problem in the top layers of PLMs, which indicates iteratively fine-tuning layers in a top-down manner is a promising solution. In this paper, we first point out that this method does not always work out due to the different convergence speeds of different layers/modules. Inspired by this observation, we propose a simple component-wise gradient norm clipping method to adjust the convergence speed for different components. Experiment results demonstrate that our method achieves consistent improvements in terms of generalization performance, convergence speed, and training stability. The codebase can be found at https://github.com/yangalan123/FineTuningStability.
null
null
10.18653/v1/2022.emnlp-main.322
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,452
inproceedings
mattern-etal-2022-differentially
Differentially Private Language Models for Secure Data Sharing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.323/
Mattern, Justus and Jin, Zhijing and Weggenmann, Benjamin and Schoelkopf, Bernhard and Sachan, Mrinmaya
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4860--4873
To protect the privacy of individuals whose data is being shared, it is of high importance to develop methods allowing researchers and companies to release textual data while providing formal privacy guarantees to its originators. In the field of NLP, substantial efforts have been directed at building mechanisms following the framework of local differential privacy, thereby anonymizing individual text samples before releasing them. In practice, these approaches are often dissatisfying in terms of the quality of their output language due to the strong noise required for local differential privacy. In this paper, we approach the problem at hand using global differential privacy, particularly by training a generative language model in a differentially private manner and consequently sampling data from it. Using natural language prompts and a new prompt-mismatch loss, we are able to create highly accurate and fluent textual datasets taking on specific desired attributes such as sentiment or topic and resembling statistical properties of the training data. We perform thorough experiments indicating that our synthetic datasets do not leak information from our original data and are of high language quality and highly suitable for training models for further analysis on real-world data. Notably, we also demonstrate that training classifiers on private synthetic data outperforms directly training classifiers with DP-SGD.
null
null
10.18653/v1/2022.emnlp-main.323
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,453
inproceedings
madaan-etal-2022-conditional
Conditional set generation using Seq2seq models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.324/
Madaan, Aman and Rajagopal, Dheeraj and Tandon, Niket and Yang, Yiming and Bosselut, Antoine
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4874--4896
Conditional set generation learns a mapping from an input sequence of tokens to a set. Several NLP tasks, such as entity typing and dialogue emotion tagging, are instances of set generation. Seq2Seq models are a popular choice for modeling set generation, but they treat a set as a sequence and do not fully leverage its key properties, namely order-invariance and cardinality. We propose a novel algorithm for effectively sampling informative orders over the combinatorial space of label orders. Further, we jointly model the set cardinality and output by listing the set size as the first element and taking advantage of the autoregressive factorization used by Seq2Seq models. Our method is a model-independent data augmentation approach that endows any Seq2Seq model with the signals of order-invariance and cardinality. Training a Seq2Seq model on this new augmented data (without any additional annotations) yields an average relative improvement of 20{\%} on four benchmark datasets across models spanning BART-base, T5-11B, and GPT-3. We will release all code and data upon acceptance.
null
null
10.18653/v1/2022.emnlp-main.324
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,454
inproceedings
wang-etal-2022-analyzing
Analyzing and Evaluating Faithfulness in Dialogue Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.325/
Wang, Bin and Zhang, Chen and Zhang, Yan and Chen, Yiming and Li, Haizhou
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4897--4908
Dialogue summarization is abstractive in nature, making it suffer from factual errors. The factual correctness of summaries has the highest priority before practical applications. Many efforts have been made to improve faithfulness in text summarization. However, there is a lack of systematic study on dialogue summarization systems. In this work, we first perform a fine-grained human analysis of the faithfulness of dialogue summaries and observe that over 35{\%} of generated summaries are faithfully inconsistent with respect to the source dialogues. Furthermore, we present a new model-level faithfulness evaluation method. It examines generation models with multi-choice questions created by rule-based transformations. Experimental results show that our evaluation schema is a strong proxy for the factual correctness of summarization models. The human-annotated faithfulness samples and the evaluation toolkit are released to facilitate future research toward faithful dialogue summarization.
null
null
10.18653/v1/2022.emnlp-main.325
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,455
inproceedings
kasai-etal-2022-twist
Twist Decoding: Diverse Generators Guide Each Other
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.326/
Kasai, Jungo and Sakaguchi, Keisuke and Le Bras, Ronan and Peng, Hao and Lu, Ximing and Radev, Dragomir and Choi, Yejin and Smith, Noah A.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4909--4923
Many language generation models are now available for a wide range of generation tasks, including machine translation and summarization. Combining such diverse models may lead to further progress, but ensembling generation models is challenging during inference: conventional ensembling methods (e.g., shallow fusion) require that the models share vocabulary/tokenization schemes. We introduce Twist decoding, a simple and general text generation algorithm that benefits from diverse models at inference time. Our method does not assume the vocabulary, tokenization or even generation order is shared. Our extensive evaluations on machine translation and scientific paper summarization demonstrate that Twist decoding substantially outperforms each model decoded in isolation over various scenarios, including cases where domain-specific and general-purpose models are both available. Twist decoding also consistently outperforms the popular reranking heuristic where output candidates from one model are rescored by another. We hope that our work will encourage researchers and practitioners to examine generation models collectively, not just independently, and to seek out models with complementary strengths to the currently available models.
null
null
10.18653/v1/2022.emnlp-main.326
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,456
inproceedings
li-etal-2022-exploring-representation
Exploring Representation-level Augmentation for Code Search
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.327/
Li, Haochen and Miao, Chunyan and Leung, Cyril and Huang, Yanxian and Huang, Yuan and Zhang, Hongyu and Wang, Yanlin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4924--4936
Code search, which aims at retrieving the most relevant code fragment for a given natural language query, is a common activity in software development practice. Recently, contrastive learning has been widely used in code search research, where many data augmentation approaches for source code (e.g., semantic-preserving program transformation) are proposed to learn better representations. However, these augmentations are at the raw-data level, which requires additional code analysis in the preprocessing stage and additional training cost in the training stage. In this paper, we explore augmentation methods that augment data (both code and query) at the representation level, which does not require additional data processing or training, and based on this we propose a general format of representation-level augmentation that unifies existing methods. Then, we propose three new augmentation methods (linear extrapolation, binary interpolation, and Gaussian scaling) based on the general format. Furthermore, we theoretically analyze the advantages of the proposed augmentation methods over traditional contrastive learning methods on code search. We experimentally evaluate the proposed representation-level augmentation methods with state-of-the-art code search models on a large-scale public dataset consisting of six programming languages. The experimental results show that our approach can consistently boost the performance of the studied code search models.
null
null
10.18653/v1/2022.emnlp-main.327
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,457
inproceedings
yu-etal-2022-learning
Learning Semantic Textual Similarity via Topic-informed Discrete Latent Variables
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.328/
Yu, Erxin and Du, Lan and Jin, Yuan and Wei, Zhepei and Chang, Yi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4937--4948
Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their comparable performance to the continuous counterparts in representation learning, while being more interpretable in their predictions. In this paper, we develop a topic-informed discrete latent variable model for semantic textual similarity, which learns a shared latent space for sentence-pair representation via vector quantization. Compared with previous models limited to local semantic contexts, our model can explore richer semantic information via topic modeling. We further boost the performance of semantic similarity by injecting the quantized representation into a transformer-based language model with a well-designed semantic-driven attention mechanism. We demonstrate, through extensive experiments across various English language datasets, that our model is able to surpass several strong neural baselines in semantic textual similarity tasks.
null
null
10.18653/v1/2022.emnlp-main.328
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,458
inproceedings
wang-etal-2022-strudel
{STRUDEL}: Structured Dialogue Summarization for Dialogue Comprehension
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.329/
Wang, Borui and Feng, Chengcheng and Nair, Arjun and Mao, Madelyn and Desai, Jai and Celikyilmaz, Asli and Li, Haoran and Mehdad, Yashar and Radev, Dragomir
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4949--4958
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing, but no previous work has explored the possibility of whether abstractive dialogue summarization can also be used as a means to boost an NLP system's performance on other important dialogue comprehension tasks. In this paper, we propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization (STRUDEL) - that can help pre-trained language models to better understand dialogues and improve their performance on important dialogue comprehension tasks. In contrast to the holistic approach taken by the traditional free-form abstractive summarization task for dialogues, STRUDEL aims to decompose and imitate the hierarchical, systematic and structured mental process that we human beings usually go through when understanding and analyzing dialogues, and thus has the advantage of being more focused, specific and instructive for dialogue comprehension models to learn from. We further introduce a new STRUDEL dialogue comprehension modeling framework that integrates STRUDEL into a dialogue reasoning module over transformer encoder language models to improve their dialogue comprehension ability. In our empirical experiments on two important downstream dialogue comprehension tasks - dialogue question answering and dialogue response prediction - we demonstrate that our STRUDEL dialogue comprehension models can significantly improve the dialogue comprehension performance of transformer encoder language models.
null
null
10.18653/v1/2022.emnlp-main.329
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,459
inproceedings
zhang-etal-2022-competency
Competency-Aware Neural Machine Translation: Can Machine Translation Know its Own Translation Quality?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.330/
Zhang, Pei and Yang, Baosong and Wei, Hao-Ran and Liu, Dayiheng and Fan, Kai and Si, Luo and Xie, Jun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4959--4970
Neural machine translation (NMT) is often criticized for failures that happen without awareness. The lack of competency awareness makes NMT untrustworthy. This is in sharp contrast to human translators who give feedback or conduct further investigations whenever they are in doubt about predictions. To fill this gap, we propose a novel competency-aware NMT by extending conventional NMT with a self-estimator, offering abilities to translate a source sentence and estimate its competency. The self-estimator encodes the information of the decoding procedure and then examines whether it can reconstruct the original semantics of the source sentence. Experimental results on four translation tasks demonstrate that the proposed method not only carries out translation tasks intact but also delivers outstanding performance on quality estimation. Without depending on any reference or annotated data typically required by state-of-the-art metric and quality estimation methods, our model yields an even higher correlation with human quality judgments than a variety of aforementioned methods, such as BLEURT, COMET, and BERTScore. Quantitative and qualitative analyses show better robustness of competency awareness in our model.
null
null
10.18653/v1/2022.emnlp-main.330
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,460
inproceedings
gu-etal-2022-pasta
{PASTA}: Table-Operations Aware Fact Verification via Sentence-Table Cloze Pre-training
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.331/
Gu, Zihui and Fan, Ju and Tang, Nan and Nakov, Preslav and Zhao, Xiaoman and Du, Xiaoyong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4971--4983
Fact verification has attracted a lot of attention recently, e.g., in journalism, marketing, and policymaking, as misinformation and disinformation can sway one's opinion and affect one's actions. While fact-checking is a hard task in general, in many cases, false statements can be easily debunked based on analytics over tables with reliable information. Hence, table-based fact verification has recently emerged as an important and growing research area. Yet, progress has been limited due to the lack of datasets that can be used to pre-train language models (LMs) to be aware of common table operations, such as aggregating a column or comparing tuples. To bridge this gap, this paper introduces PASTA for table-based fact verification via pre-training with synthesized sentence{--}table cloze questions. In particular, we design six types of common sentence{--}table cloze tasks, including Filter, Aggregation, Superlative, Comparative, Ordinal, and Unique, based on which we synthesize a large corpus consisting of 1.2 million sentence{--}table pairs from WikiTables. PASTA uses a recent pre-trained LM, DeBERTaV3, and further pre-trains it on our corpus. Our experimental results show that PASTA achieves new state-of-the-art (SOTA) performance on two table-based fact verification datasets, TabFact and SEM-TAB-FACTS. In particular, on the complex set of TabFact, which contains multiple operations, PASTA largely outperforms previous SOTA by 4.7{\%} (85.6{\%} vs. 80.9{\%}), and the gap between PASTA and human performance on the small test set is narrowed to just 1.5{\%} (90.6{\%} vs. 92.1{\%}).
null
null
10.18653/v1/2022.emnlp-main.331
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,461
inproceedings
fan-etal-2022-sentiment
Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.332/
Fan, Shuai and Lin, Chen and Li, Haonan and Lin, Zhenghao and Su, Jinsong and Zhang, Hang and Gong, Yeyun and Guo, JIan and Duan, Nan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4984--4994
Most existing pre-trained language representation models (PLMs) are sub-optimal in sentiment analysis tasks, as they capture sentiment information at the word level while under-considering sentence-level information. In this paper, we propose SentiWSP, a novel Sentiment-aware pre-trained language model with combined Word-level and Sentence-level Pre-training tasks. The word-level pre-training task detects replaced sentiment words, via a generator-discriminator framework, to enhance the PLM's knowledge about sentiment words. The sentence-level pre-training task further strengthens the discriminator via a contrastive learning framework, with similar sentences as negative samples, to encode sentiments in a sentence. Extensive experimental results show that SentiWSP achieves new state-of-the-art performance on various sentence-level and aspect-level sentiment classification benchmarks. We have made our code and model publicly available at https://github.com/XMUDM/SentiWSP.
null
null
10.18653/v1/2022.emnlp-main.332
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,462
inproceedings
liu-etal-2022-towards-multi-modal
Towards Multi-Modal Sarcasm Detection via Hierarchical Congruity Modeling with Knowledge Enhancement
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.333/
Liu, Hui and Wang, Wenya and Li, Haoliang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
4995--5006
Sarcasm is a linguistic phenomenon indicating a discrepancy between literal meanings and implied intentions. Due to its sophisticated nature, it is usually difficult to detect from the text itself. As a result, multi-modal sarcasm detection has received more and more attention in both academia and industry. However, most existing techniques only modeled the atomic-level inconsistencies between the text input and its accompanying image, ignoring more complex compositions for both modalities. Moreover, they neglected the rich information contained in external knowledge, e.g., image captions. In this paper, we propose a novel hierarchical framework for sarcasm detection by exploring both the atomic-level congruity based on multi-head cross attentions and the composition-level congruity based on graph neural networks, where a post with low congruity can be identified as sarcasm. In addition, we exploit the effect of various knowledge resources for sarcasm detection. Evaluation results on a public multi-modal sarcasm detection dataset based on Twitter demonstrate the superiority of our proposed model.
null
null
10.18653/v1/2022.emnlp-main.333
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,463
inproceedings
zhou-etal-2022-efficiently
Efficiently Tuned Parameters Are Task Embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.334/
Zhou, Wangchunshu and Xu, Canwen and McAuley, Julian
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5007--5014
Intermediate-task transfer can benefit a wide range of NLP tasks with properly selected source datasets. However, it is computationally infeasible to experiment with all intermediate transfer combinations, making choosing a useful source task a challenging problem. In this paper, we anticipate that task-specific parameters updated in parameter-efficient tuning methods are likely to encode task-specific information. Therefore, such parameters can be predictive of inter-task transferability. Thus, we propose to exploit these efficiently tuned parameters as off-the-shelf task embeddings for the efficient selection of source datasets for intermediate-task transfer. We experiment with 11 text classification tasks and 11 question answering tasks. Experimental results show that our approach consistently outperforms existing inter-task transferability prediction methods while being conceptually simple and computationally efficient. Our analysis also reveals that the ability of efficiently tuned parameters for transferability prediction is disentangled from their in-task performance. This allows us to use parameters from early checkpoints as task embeddings to further improve efficiency.
null
null
10.18653/v1/2022.emnlp-main.334
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,464
inproceedings
peng-etal-2022-copen
{COPEN}: Probing Conceptual Knowledge in Pre-trained Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.335/
Peng, Hao and Wang, Xiaozhi and Hu, Shengding and Jin, Hailong and Hou, Lei and Li, Juanzi and Liu, Zhiyuan and Liu, Qun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5015--5035
Conceptual knowledge is fundamental to human cognition and knowledge bases. However, existing knowledge probing works only focus on evaluating factual knowledge of pre-trained language models (PLMs) and ignore conceptual knowledge. Since conceptual knowledge often appears as implicit commonsense behind texts, designing probes for conceptual knowledge is hard. Inspired by knowledge representation schemata, we comprehensively evaluate conceptual knowledge of PLMs by designing three tasks to probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts, respectively. For the tasks, we collect and annotate 24k data instances covering 393 concepts, which is COPEN, a COnceptual knowledge Probing bENchmark. Extensive experiments on different sizes and types of PLMs show that existing PLMs systematically lack conceptual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing human-like cognition in PLMs. COPEN and our codes are publicly released at https://github.com/THU-KEG/COPEN.
null
null
10.18653/v1/2022.emnlp-main.335
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,465
inproceedings
nie-etal-2022-capturing
Capturing Global Structural Information in Long Document Question Answering with Compressive Graph Selector Network
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.336/
Nie, Yuxiang and Huang, Heyan and Wei, Wei and Mao, Xian-Ling
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5036--5047
Long document question answering is a challenging task due to its demands for complex reasoning over long text. Previous works usually take long documents as non-structured flat texts or only consider the local structure in long documents. However, these methods usually ignore the global structure of the long document, which is essential for long-range understanding. To tackle this problem, we propose Compressive Graph Selector Network (CGSN) to capture the global structure in a compressive and iterative manner. The proposed model mainly focuses on the evidence selection phase of long document question answering. Specifically, it consists of three modules: local graph network, global graph network and evidence memory network. Firstly, the local graph network builds the graph structure of the chunked segment in token, sentence, paragraph and segment levels to capture the short-term dependency of the text. Secondly, the global graph network selectively receives the information of each level from the local graph, compresses them into the global graph nodes and applies graph attention to the global graph nodes to build the long-range reasoning over the entire text in an iterative way. Thirdly, the evidence memory network is designed to alleviate the redundancy problem in the evidence selection by saving the selected result in the previous steps. Extensive experiments show that the proposed model outperforms previous methods on two datasets.
null
null
10.18653/v1/2022.emnlp-main.336
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,466
inproceedings
yao-koller-2022-structural
Structural generalization is hard for sequence-to-sequence models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.337/
Yao, Yuekun and Koller, Alexander
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5048--5062
Sequence-to-sequence (seq2seq) models have been successful across many NLP tasks, including ones that require predicting linguistic structure. However, recent work on compositional generalization has shown that seq2seq models achieve very low accuracy in generalizing to linguistic structures that were not seen in training. We present new evidence that this is a general limitation of seq2seq models that is present not just in semantic parsing, but also in syntactic parsing and in text-to-text tasks, and that this limitation can often be overcome by neurosymbolic models that have linguistic knowledge built in. We further report on some experiments that give initial answers on the reasons for these limitations.
null
null
10.18653/v1/2022.emnlp-main.337
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,467
inproceedings
liu-etal-2022-contrastive
Contrastive Learning enhanced Author-Style Headline Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.338/
Liu, Hui and Guo, Weidong and Chen, Yige and Li, Xiangyang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5063--5072
Headline generation is a task of generating an appropriate headline for a given article, which can be further used for machine-aided writing or enhancing the click-through ratio. Current works only use the article itself in the generation, but have not taken the writing style of headlines into consideration. In this paper, we propose a novel Seq2Seq model called CLH3G (Contrastive Learning enhanced Historical Headlines based Headline Generation) which can use the historical headlines of the articles that the author wrote in the past to improve the headline generation of current articles. By taking historical headlines into account, we can integrate the stylistic features of the author into our model, and generate a headline not only appropriate for the article, but also consistent with the author's style. In order to efficiently learn the stylistic features of the author, we further introduce a contrastive learning based auxiliary task for the encoder of our model. Besides, we propose two methods to use the learned stylistic features to guide both the pointer and the decoder during the generation. Experimental results show that historical headlines of the same user can improve the headline generation significantly, and both the contrastive learning module and the two style features fusion methods can further boost the performance.
null
null
10.18653/v1/2022.emnlp-main.338
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,468
inproceedings
li-etal-2022-multi-granularity
Multi-Granularity Optimization for Non-Autoregressive Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.339/
Li, Yafu and Cui, Leyang and Yin, Yongjing and Zhang, Yue
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5073--5084
Despite low latency, non-autoregressive machine translation (NAT) suffers severe performance deterioration due to the naive independence assumption. This assumption is further strengthened by cross-entropy loss, which encourages a strict match between the hypothesis and the reference token by token. To alleviate this issue, we propose multi-granularity optimization for NAT, which collects model behaviours on translation segments of various granularities and integrates feedback for backpropagation. Experiments on four WMT benchmarks show that the proposed method significantly outperforms the baseline models trained with cross-entropy loss, and achieves the best performance on WMT'16 En{\ensuremath{\Leftrightarrow}}Ro and highly competitive results on WMT'14 En{\ensuremath{\Leftrightarrow}}De for fully non-autoregressive translation.
null
null
10.18653/v1/2022.emnlp-main.339
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,469
inproceedings
wang-etal-2022-super
Super-{N}atural{I}nstructions: Generalization via Declarative Instructions on 1600+ {NLP} Tasks
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.340/
Wang, Yizhong and Mishra, Swaroop and Alipoormolabashi, Pegah and Kordi, Yeganeh and Mirzaei, Amirreza and Naik, Atharva and Ashok, Arjun and Dhanasekaran, Arut Selvan and Arunkumar, Anjana and Stap, David and Pathak, Eshaan and Karamanolakis, Giannis and Lai, Haizhi and Purohit, Ishan and Mondal, Ishani and Anderson, Jacob and Kuznia, Kirby and Doshi, Krima and Pal, Kuntal Kumar and Patel, Maitreya and Moradshahi, Mehrad and Parmar, Mihir and Purohit, Mirali and Varshney, Neeraj and Kaza, Phani Rohitha and Verma, Pulkit and Puri, Ravsehaj Singh and Karia, Rushang and Doshi, Savan and Sampat, Shailaja Keyur and Mishra, Siddhartha and Reddy A, Sujan and Patro, Sumanta and Dixit, Tanay and Shen, Xudong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5085--5109
How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce Super-NaturalInstructions, a benchmark of 1,616 diverse NLP tasks and their expert-written instructions. Our collection covers 76 distinct task types, including but not limited to classification, extraction, infilling, sequence tagging, text rewriting, and text composition. This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions{---}training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones. Furthermore, we build Tk-Instruct, a transformer model trained to follow a variety of in-context instructions (plain language task definitions or k-shot examples). Our experiments show that Tk-Instruct outperforms existing instruction-following models such as InstructGPT by over 9{\%} on our benchmark despite being an order of magnitude smaller. We further analyze generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances per task, and model sizes. We hope our dataset and model facilitate future progress towards more general-purpose NLP models.
null
null
10.18653/v1/2022.emnlp-main.340
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,470
inproceedings
liu-etal-2022-metafill
{M}eta{F}ill: Text Infilling for Meta-Path Generation on Heterogeneous Information Networks
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.341/
Liu, Zequn and Duan, Kefei and Yang, Junwei and Xu, Hanwen and Zhang, Ming and Wang, Sheng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5110--5122
Heterogeneous information network (HIN) is essential to study complicated networks containing multiple edge types and node types. Meta-path, a sequence of node types and edge types, is the core technique to embed HINs. Since manually curating meta-paths is time-consuming, there is a pressing need to develop automated meta-path generation approaches. Existing meta-path generation approaches cannot fully exploit the rich textual information in HINs, such as node names and edge type names. To address this problem, we propose MetaFill, a text-infilling-based approach for meta-path generation. The key idea of MetaFill is to formulate meta-path identification problem as a word sequence infilling problem, which can be advanced by pretrained language models (PLMs). We observed the superior performance of MetaFill against existing meta-path generation methods and graph embedding methods that do not leverage meta-paths in both link prediction and node classification on two real-world HIN datasets. We further demonstrated how MetaFill can accurately classify edges in the zero-shot setting, where existing approaches cannot generate any meta-paths. MetaFill exploits PLMs to generate meta-paths for graph embedding, opening up new avenues for language model applications in graph analysis.
null
null
10.18653/v1/2022.emnlp-main.341
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,471
inproceedings
zhang-etal-2022-drlk
{DRLK}: Dynamic Hierarchical Reasoning with Language Model and Knowledge Graph for Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.342/
Zhang, Miao and Dai, Rufeng and Dong, Ming and He, Tingting
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5123--5133
In recent years, Graph Neural Network (GNN) approaches with enhanced knowledge graphs (KG) perform well in question answering (QA) tasks. One critical challenge is how to effectively utilize interactions between the QA context and KG. However, existing work only adopts the identical QA context representation to interact with multiple layers of KG, which results in a restricted interaction. In this paper, we propose DRLK (Dynamic Hierarchical Reasoning with Language Model and Knowledge Graphs), a novel model that utilizes dynamic hierarchical interactions between the QA context and KG for reasoning. DRLK extracts dynamic hierarchical features in the QA context, and performs inter-layer and intra-layer interactions on each iteration, allowing the KG representation to be grounded with the hierarchical features of the QA context. We conduct extensive experiments on four benchmark datasets in medical QA and commonsense reasoning. The experimental results demonstrate that DRLK achieves state-of-the-art performances on two benchmark datasets and performs competitively on the others.
null
null
10.18653/v1/2022.emnlp-main.342
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,472
inproceedings
bao-etal-2022-aeg
{AEG}: Argumentative Essay Generation via A Dual-Decoder Model with Content Planning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.343/
Bao, Jianzhu and Wang, Yasheng and Li, Yitong and Mi, Fei and Xu, Ruifeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5134--5148
Argument generation is an important but challenging task in computational argumentation. Existing studies have mainly focused on generating individual short arguments, while research on generating long and coherent argumentative essays is still under-explored. In this paper, we propose a new task, Argumentative Essay Generation (AEG). Given a writing prompt, the goal of AEG is to automatically generate an argumentative essay with strong persuasiveness. We construct a large-scale dataset, ArgEssay, for this new task and establish a strong model based on a dual-decoder Transformer architecture. Our proposed model contains two decoders, a planning decoder (PD) and a writing decoder (WD), where PD is used to generate a sequence for essay content planning and WD incorporates the planning information to write an essay. Further, we pre-train this model on a large news dataset to enhance the plan-and-write paradigm. Automatic and human evaluation results show that our model can generate more coherent and persuasive essays with higher diversity and less repetition compared to several baselines.
null
null
10.18653/v1/2022.emnlp-main.343
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,473
inproceedings
kim-etal-2022-botstalk
{B}ots{T}alk: Machine-sourced Framework for Automatic Curation of Large-scale Multi-skill Dialogue Datasets
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.344/
Kim, Minju and Kim, Chaehyeong and Song, Yong Ho and Hwang, Seung-won and Yeo, Jinyoung
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5149--5170
To build open-domain chatbots that are able to use diverse communicative skills, we propose a novel framework BotsTalk, where multiple agents grounded to the specific target skills participate in a conversation to automatically annotate multi-skill dialogues. We further present Blended Skill BotsTalk (BSBT), a large-scale multi-skill dialogue dataset comprising 300K conversations. Through extensive experiments, we demonstrate that our dataset can be effective for multi-skill dialogue systems which require an understanding of skill blending as well as skill grounding. Our code and data are available at https://github.com/convei-lab/BotsTalk.
null
null
10.18653/v1/2022.emnlp-main.344
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,474
inproceedings
ma-etal-2022-wider
Wider {\&} Closer: Mixture of Short-channel Distillers for Zero-shot Cross-lingual Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.345/
Ma, Jun-Yu and Chen, Beiduo and Gu, Jia-Chen and Ling, Zhenhua and Guo, Wu and Liu, Quan and Chen, Zhigang and Liu, Cong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5171--5183
Zero-shot cross-lingual named entity recognition (NER) aims at transferring knowledge from annotated and rich-resource data in source languages to unlabeled and lean-resource data in target languages. Existing mainstream methods based on the teacher-student distillation framework ignore the rich and complementary information lying in the intermediate layers of pre-trained language models, and domain-invariant information is easily lost during transfer. In this study, a mixture of short-channel distillers (MSD) method is proposed to fully interact the rich hierarchical information in the teacher model and to transfer knowledge to the student model sufficiently and efficiently. Concretely, a multi-channel distillation framework is designed for sufficient information transfer by aggregating multiple distillers as a mixture. Besides, an unsupervised method adopting parallel domain adaptation is proposed to shorten the channels between the teacher and student models to preserve domain-invariant features. Experiments on four datasets across nine languages demonstrate that the proposed method achieves new state-of-the-art performance on zero-shot cross-lingual NER and shows great generalization and compatibility across languages and fields.
null
null
10.18653/v1/2022.emnlp-main.345
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,475
inproceedings
wu-etal-2022-efficient
An Efficient Memory-Augmented Transformer for Knowledge-Intensive {NLP} Tasks
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.346/
Wu, Yuxiang and Zhao, Yu and Hu, Baotian and Minervini, Pasquale and Stenetorp, Pontus and Riedel, Sebastian
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5184--5196
Access to external knowledge is essential for many natural language processing tasks, such as question answering and dialogue. Existing methods often rely on a parametric model that stores knowledge in its parameters, or use a retrieval-augmented model that has access to an external knowledge source. Parametric and retrieval-augmented models have complementary strengths in terms of computational efficiency and predictive accuracy. To combine the strength of both approaches, we propose the Efficient Memory-Augmented Transformer (EMAT) {--} it encodes external knowledge into a key-value memory and exploits the fast maximum inner product search for memory querying. We also introduce pre-training tasks that allow EMAT to encode informative key-value representations, and to learn an implicit strategy to integrate multiple memory slots into the transformer. Experiments on various knowledge-intensive tasks such as question answering and dialogue datasets show that, simply augmenting parametric models (T5-base) using our method produces more accurate results (e.g., 25.8 {\textrightarrow} 44.3 EM on NQ) while retaining a high throughput (e.g., 1000 queries/s on NQ). Compared to retrieval-augmented models, EMAT runs substantially faster across the board and produces more accurate results on WoW and ELI5.
null
null
10.18653/v1/2022.emnlp-main.346
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,476
inproceedings
song-etal-2022-supervised
Supervised Prototypical Contrastive Learning for Emotion Recognition in Conversation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.347/
Song, Xiaohui and Huang, Longtao and Xue, Hui and Hu, Songlin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5197--5206
Capturing emotions within a conversation plays an essential role in modern dialogue systems. However, the weak correlation between emotions and semantics brings many challenges to emotion recognition in conversation (ERC). Even for semantically similar utterances, the emotion may vary drastically depending on the context or speaker. In this paper, we propose a Supervised Prototypical Contrastive Learning (SPCL) loss for the ERC task. Leveraging the Prototypical Network, SPCL targets the imbalanced classification problem through contrastive learning and does not require a large batch size. Meanwhile, we design a difficulty measure function based on the distance between classes and introduce curriculum learning to alleviate the impact of extreme samples. We achieve state-of-the-art results on three widely used benchmarks. Further, we conduct analytical experiments to demonstrate the effectiveness of our proposed SPCL and curriculum learning strategy.
null
null
10.18653/v1/2022.emnlp-main.347
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,477
inproceedings
mikhailov-etal-2022-rucola
{R}u{C}o{LA}: {R}ussian Corpus of Linguistic Acceptability
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.348/
Mikhailov, Vladislav and Shamardina, Tatiana and Ryabinin, Max and Pestova, Alena and Smurov, Ivan and Artemova, Ekaterina
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5207--5227
Linguistic acceptability (LA) attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers. However, the application scope of LA in languages other than English is limited due to the lack of high-quality resources. To this end, we introduce the Russian Corpus of Linguistic Acceptability (RuCoLA), built from the ground up under the well-established binary LA approach. RuCoLA consists of 9.8k in-domain sentences from linguistic publications and 3.6k out-of-domain sentences produced by generative models. The out-of-domain set is created to facilitate the practical use of acceptability for improving language generation. Our paper describes the data collection protocol and presents a fine-grained analysis of acceptability classification experiments with a range of baseline approaches. In particular, we demonstrate that the most widely used language models still fall behind humans by a large margin, especially when detecting morphological and semantic errors. We release RuCoLA, the code of experiments, and a public leaderboard to assess the linguistic competence of language models for Russian.
null
null
10.18653/v1/2022.emnlp-main.348
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,478
inproceedings
xiao-etal-2022-complex
Complex Hyperbolic Knowledge Graph Embeddings with Fast {F}ourier Transform
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.349/
Xiao, Huiru and Liu, Xin and Song, Yangqiu and Wong, Ginny and See, Simon
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5228--5239
The choice of geometric space for knowledge graph (KG) embeddings can have significant effects on the performance of KG completion tasks. The hyperbolic geometry has been shown to capture the hierarchical patterns due to its tree-like metrics, which addressed the limitations of the Euclidean embedding models. Recent explorations of the complex hyperbolic geometry further improved the hyperbolic embeddings for capturing a variety of hierarchical structures. However, the performance of the hyperbolic KG embedding models for non-transitive relations is still unpromising, while the complex hyperbolic embeddings do not deal with multi-relations. This paper aims to utilize the representation capacity of the complex hyperbolic geometry in multi-relational KG embeddings. To apply the geometric transformations which account for different relations and the attention mechanism in the complex hyperbolic space, we propose to use the fast Fourier transform (FFT) as the conversion between the real and complex hyperbolic space. Constructing the attention-based transformations in the complex space is very challenging, while the proposed Fourier transform-based complex hyperbolic approaches provide a simple and effective solution. Experimental results show that our methods outperform the baselines, including the Euclidean and the real hyperbolic embedding models.
null
null
10.18653/v1/2022.emnlp-main.349
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,479
inproceedings
dou-etal-2022-towards
Towards Knowledge-Intensive Text-to-{SQL} Semantic Parsing with Formulaic Knowledge
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.350/
Dou, Longxu and Gao, Yan and Liu, Xuqi and Pan, Mingyang and Wang, Dingzirui and Che, Wanxiang and Zhan, Dechen and Kan, Min-Yen and Lou, Jian-Guang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5240--5253
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by representing formulaic knowledge rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2{\%} improvement overall on KnowSQL.
null
null
10.18653/v1/2022.emnlp-main.350
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,480
inproceedings
sogaard-2022-ban
Should We Ban {E}nglish {NLP} for a Year?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.351/
S{\o}gaard, Anders
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5254--5260
Around two thirds of NLP research at top venues is devoted exclusively to developing technology for speakers of English, most speech data comes from young urban speakers, and most texts used to train language models come from male writers. These biases feed into consumer technologies to widen existing inequality gaps, not only within, but also across, societies. Many have argued that it is almost impossible to mitigate inequality amplification. I argue that, on the contrary, it is quite simple to do so, and that counter-measures would have little-to-no negative impact, except for, perhaps, in the very short term.
null
null
10.18653/v1/2022.emnlp-main.351
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,481
inproceedings
lee-etal-2022-littlebird
{L}ittle{B}ird: Efficient Faster {\&} Longer Transformer for Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.352/
Lee, Minchul and Han, Kijong and Shin, Myeong Cheol
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5261--5277
BERT has shown a lot of success in a wide variety of NLP tasks, but it has a limitation in dealing with long inputs due to its attention mechanism. Longformer, ETC, and BigBird addressed this issue and effectively solved the quadratic dependency problem. However, we find that these models are not sufficient, and propose LittleBird, a novel model based on BigBird with improved speed and memory footprint while maintaining accuracy. In particular, we devise a more flexible and efficient position representation method based on Attention with Linear Biases (ALiBi). We also show that replacing BigBird's method of representing global information with pack and unpack attention is more effective. The proposed model can work on long inputs even after being pre-trained on short inputs, and can be trained efficiently by reusing an existing pre-trained language model for short inputs. This is a significant benefit for low-resource languages, where large amounts of long text data are difficult to obtain. As a result, our experiments show that LittleBird works very well across a variety of languages, achieving high performance on question answering tasks, particularly on KorQuAD2.0, a Korean question answering dataset for long paragraphs. (An illustrative code sketch follows this entry.)
null
null
10.18653/v1/2022.emnlp-main.352
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,482
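The position representation mentioned in the LittleBird abstract builds on Attention with Linear Biases (ALiBi). For reference, the sketch below shows the standard ALiBi recipe of adding a head-specific, distance-proportional bias to attention scores; it is a generic illustration (here with a symmetric distance penalty rather than the original causal form), not LittleBird's modified scheme.

import numpy as np

def alibi_slopes(n_heads):
    """Head-specific slopes forming a geometric sequence, as in the ALiBi paper."""
    return np.array([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])

def alibi_scores(q, k, slope):
    """Scaled dot-product attention scores with a linear distance penalty."""
    seq_len, d_k = q.shape
    scores = q @ k.T / np.sqrt(d_k)                 # (seq_len, seq_len)
    pos = np.arange(seq_len)
    distance = np.abs(pos[None, :] - pos[:, None])  # |i - j|
    return scores - slope * distance                # penalty grows with distance

q = np.random.randn(6, 16)
k = np.random.randn(6, 16)
print(alibi_scores(q, k, alibi_slopes(8)[0]).shape)  # (6, 6)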
inproceedings
yang-etal-2022-wets
{W}e{TS}: A Benchmark for Translation Suggestion
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.353/
Yang, Zhen and Meng, Fandong and Zhang, Yingxue and Li, Ernan and Zhou, Jie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5278--5290
Translation suggestion (TS), which provides alternatives for specific words or phrases given the entire document generated by machine translation (MT), has been proven to play a significant role in post-editing (PE). There are two main pitfalls in existing research along this line. First, most conventional works focus only on the overall performance of PE but ignore the exact performance of TS, which makes progress on PE sluggish and less explainable. Second, as no publicly available golden dataset exists to support in-depth research on TS, almost all previous works conduct experiments on in-house datasets or noisy datasets built automatically, which makes their experiments hard to reproduce and compare. To break these limitations and spur research in TS, we create a benchmark dataset, called \textit{WeTS}, which is a golden corpus annotated by expert translators on four translation directions. Apart from the golden corpus, we also propose several methods to generate synthetic corpora that can be used to substantially improve performance through pre-training. As for the model, we propose a segment-aware self-attention based Transformer for TS. Experimental results show that our approach achieves the best results on all four directions, including English-to-German, German-to-English, Chinese-to-English, and English-to-Chinese.
null
null
10.18653/v1/2022.emnlp-main.353
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,483
inproceedings
wang-etal-2022-discrete
Discrete Cross-Modal Alignment Enables Zero-Shot Speech Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.354/
Wang, Chen and Liu, Yuchen and Chen, Boxing and Zhang, Jiajun and Luo, Wei and Huang, Zhongqiang and Zong, Chengqing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5291--5302
End-to-end Speech Translation (ST) aims to translate source language speech into target language text without generating intermediate transcriptions. However, the training of end-to-end methods relies on parallel ST data, which are difficult and expensive to obtain. Fortunately, supervised data for automatic speech recognition (ASR) and machine translation (MT) are usually more accessible, making zero-shot speech translation a potential direction. Existing zero-shot methods fail to align the two modalities of speech and text into a shared semantic space, resulting in much worse performance compared to supervised ST methods. In order to enable zero-shot ST, we propose a novel Discrete Cross-Modal Alignment (DCMA) method that employs a shared discrete vocabulary space to accommodate and match both modalities of speech and text. Specifically, we introduce a vector quantization module to discretize the continuous representations of speech and text into a finite set of virtual tokens, and use ASR data to map corresponding speech and text to the same virtual token in a shared codebook. This way, source language speech can be embedded in the same semantic space as the source language text, which can then be transformed into target language text with an MT module. Experiments on multiple language pairs demonstrate that our zero-shot ST method significantly improves over the SOTA, and even performs on par with strong supervised ST baselines. (An illustrative code sketch follows this entry.)
null
null
10.18653/v1/2022.emnlp-main.354
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,484
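The vector-quantization step described in the DCMA abstract can be pictured with a small sketch: continuous speech or text representations are snapped to their nearest entry in a shared codebook, so both modalities are expressed over the same finite set of virtual tokens. This is a generic nearest-neighbour quantizer with a VQ-VAE-style commitment loss, not the paper's trained module; all shapes are hypothetical.

import numpy as np

def quantize(z, codebook):
    """Map each continuous vector in z (batch, dim) to its nearest codebook entry (K, dim)."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (batch, K)
    indices = dists.argmin(axis=1)                 # discrete "virtual token" ids
    quantized = codebook[indices]
    commitment_loss = ((z - quantized) ** 2).mean()
    return indices, quantized, commitment_loss

shared_codebook = np.random.randn(256, 32)   # shared discrete vocabulary
speech_repr = np.random.randn(4, 32)         # e.g. pooled speech features
text_repr = np.random.randn(4, 32)           # e.g. source-text features
s_idx, _, _ = quantize(speech_repr, shared_codebook)
t_idx, _, _ = quantize(text_repr, shared_codebook)
# Training would push paired speech and text toward the same indices.
print(s_idx, t_idx)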
inproceedings
qiu-cohen-2022-abstractive
Abstractive Summarization Guided by Latent Hierarchical Document Structure
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.355/
Qiu, Yifu and Cohen, Shay B.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5303--5317
Sequential abstractive neural summarizers often do not use the underlying structure in the input article or the dependencies between input sentences. This structure is essential for integrating and consolidating information from different parts of the text. To address this shortcoming, we propose a hierarchy-aware graph neural network (HierGNN) which captures such dependencies through three main steps: 1) learning a hierarchical document structure through a latent structure tree learned by a sparse matrix-tree computation; 2) propagating sentence information over this structure using a novel message-passing node propagation mechanism to identify salient information; 3) using graph-level attention to concentrate the decoder on salient information. Experiments confirm that HierGNN improves strong sequence models such as BART, with a 0.55 and 0.75 margin in average ROUGE-1/2/L for CNN/DM and XSum. Further human evaluation demonstrates that summaries produced by our model are more relevant and less redundant than the baselines into which HierGNN is incorporated. We also find that HierGNN synthesizes summaries more by fusing multiple source sentences than by compressing a single source sentence, and that it processes long inputs more effectively. (An illustrative code sketch follows this entry.)
null
null
10.18653/v1/2022.emnlp-main.355
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,485
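Step 2 of the HierGNN abstract, propagating sentence information over the learned latent tree, can be pictured with a generic message-passing update: given a soft adjacency matrix over sentences (for example, edge marginals from the matrix-tree computation) and sentence embeddings, each node aggregates its neighbours' features. This is a bare-bones illustration with hypothetical shapes, not HierGNN's actual propagation rule.

import numpy as np

def propagate(adj, h, w):
    """One round of message passing: adj (n, n) soft edges, h (n, d) sentence states, w (d, d) projection."""
    messages = adj @ h                          # aggregate neighbour features
    return np.maximum(messages @ w + h, 0.0)    # residual update followed by ReLU

n_sents, dim = 5, 64
adj = np.random.rand(n_sents, n_sents)
adj = adj / adj.sum(axis=1, keepdims=True)      # row-normalise the soft edges
h = np.random.randn(n_sents, dim)
w = np.random.randn(dim, dim) * 0.01
print(propagate(adj, h, w).shape)               # (5, 64)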
inproceedings
mao-etal-2022-explainable
Explainable Question Answering based on Semantic Graph by Global Differentiable Learning and Dynamic Adaptive Reasoning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.356/
Mao, Jianguo and Jiang, Wenbin and Wang, Xiangdong and Liu, Hong and Xia, Yu and Lyu, Yajuan and She, QiaoQiao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5318--5325
Multi-hop Question Answering is an agent task for testing reasoning ability. With the development of pre-trained models, implicit reasoning ability has improved markedly and can even surpass human performance. However, the black-box nature of these models hinders the construction of explainable intelligent systems. Several researchers have explored explainable neural-symbolic reasoning methods based on question decomposition techniques. Non-differentiable symbolic operations and error propagation in the reasoning process lead to poor performance. To alleviate this, we propose a simple yet effective Global Differentiable Learning strategy to explore optimal reasoning paths in the latent probability space, so that the model learns to solve intermediate reasoning steps without expert annotations. We further design a Dynamic Adaptive Reasoner to enhance generalization to unseen questions. Our method achieves a 17{\%} improvement in F1-score over BreakRC and shows better interpretability. We take a step forward in building interpretable reasoning methods.
null
null
10.18653/v1/2022.emnlp-main.356
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,486