Dataset schema (column: type, value summary):

entry_type           string, 4 distinct values
citation_key         string, length 10-110
title                string, length 6-276
editor               string, 723 distinct values
month                string, 69 distinct values
year                 date, 1963-01-01 to 2022-01-01
address              string, 202 distinct values
publisher            string, 41 distinct values
url                  string, length 34-62
author               string, length 6-2.07k
booktitle            string, 861 distinct values
pages                string, length 1-12
abstract             string, length 302-2.4k
journal              string, 5 distinct values
volume               string, 24 distinct values
doi                  string, length 20-39
n                    string, 3 distinct values
wer                  string, 1 distinct value
uas                  always null
language             string, 3 distinct values
isbn                 string, 34 distinct values
recall               always null
number               string, 8 distinct values
a                    always null
b                    always null
c                    always null
k                    always null
f1                   string, 4 distinct values
r                    string, 2 distinct values
mci                  string, 1 distinct value
p                    string, 2 distinct values
sd                   string, 1 distinct value
female               string, 0 distinct values (empty)
m                    string, 0 distinct values (empty)
food                 string, 1 distinct value
f                    string, 1 distinct value
note                 string, 20 distinct values
__index_level_0__    int64, range 22k-106k
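If this table is exported as a Hugging Face dataset, the schema above can be inspected programmatically along the following lines. This is a minimal sketch: the repository id `your-namespace/acl-bibtex` is a placeholder rather than the dataset's real Hub path, and the split name is assumed.

from datasets import load_dataset

# Placeholder repository id -- substitute the dataset's actual Hub path.
ds = load_dataset("your-namespace/acl-bibtex", split="train")

print(ds.column_names)   # entry_type, citation_key, title, ..., __index_level_0__

# Most rows populate only the core bibliographic fields; the sparse metric-like
# columns (wer, uas, f1, recall, ...) are usually None.
row = ds[0]
print({k: v for k, v in row.items() if v is not None})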
entry_type: inproceedings
citation_key: qiu-etal-2022-dureader
title: {D}u{R}eader-Retrieval: A Large-scale {C}hinese Benchmark for Passage Retrieval from Web Search Engine
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.357/
author: Qiu, Yifu and Li, Hongyu and Qu, Yingqi and Chen, Ying and She, QiaoQiao and Liu, Jing and Wu, Hua and Wang, Haifeng
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5326--5338
abstract:
In this paper, we present DuReader-retrieval, a large-scale Chinese dataset for passage retrieval. DuReader-retrieval contains more than 90K queries and over 8M unique passages from a commercial search engine. To alleviate the shortcomings of other datasets and ensure the quality of our benchmark, we (1) reduce the false negatives in development and test sets by manually annotating results pooled from multiple retrievers, and (2) remove the training queries that are semantically similar to the development and testing queries. Additionally, we provide two out-of-domain testing sets for cross-domain evaluation, as well as a set of human translated queries for cross-lingual retrieval evaluation. The experiments demonstrate that DuReader-retrieval is challenging and a number of problems remain unsolved, such as the salient phrase mismatch and the syntactic mismatch between queries and paragraphs. These experiments also show that dense retrievers do not generalize well across domains, and cross-lingual retrieval is essentially challenging. DuReader-retrieval is publicly available at https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval.
doi: 10.18653/v1/2022.emnlp-main.357
__index_level_0__: 27,487 (all other fields in this row are null)
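Since each row mirrors a BibTeX entry (entry_type, citation_key, and the field values), a small helper can rebuild an entry string from a row. This is an illustrative sketch, not part of any official tooling for this dataset; the example dict is a trimmed copy of the DuReader-Retrieval row above.

def row_to_bibtex(row: dict) -> str:
    """Rebuild a BibTeX entry string from one dataset row (illustrative helper)."""
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    fields = [
        "  {} = {{{}}},".format(name, value)
        for name, value in row.items()
        if value is not None and name not in skip
    ]
    return "@{}{{{},\n{}\n}}".format(row["entry_type"], row["citation_key"], "\n".join(fields))

# Minimal example built from the row above (abstract omitted for brevity).
example = {
    "entry_type": "inproceedings",
    "citation_key": "qiu-etal-2022-dureader",
    "title": "DuReader-Retrieval: A Large-scale Chinese Benchmark for Passage Retrieval from Web Search Engine",
    "year": "2022",
    "pages": "5326--5338",
    "doi": "10.18653/v1/2022.emnlp-main.357",
    "journal": None,  # null columns are simply skipped
}
print(row_to_bibtex(example))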
entry_type: inproceedings
citation_key: liu-etal-2022-pair
title: Pair-Based Joint Encoding with Relational Graph Convolutional Networks for Emotion-Cause Pair Extraction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.358/
author: Liu, Junlong and Shang, Xichen and Ma, Qianli
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5339--5351
abstract:
Emotion-cause pair extraction (ECPE) aims to extract emotion clauses and corresponding cause clauses, which have recently received growing attention. Previous methods sequentially encode features with a specified order. They first encode the emotion and cause features for clause extraction and then combine them for pair extraction. This leads to an imbalance in inter-task feature interaction where features extracted later have no direct contact with the former. To address this issue, we propose a novel **P**air-**B**ased **J**oint **E**ncoding (**PBJE**) network, which generates pairs and clauses features simultaneously in a joint feature encoding manner to model the causal relationship in clauses. PBJE can balance the information flow among emotion clauses, cause clauses and pairs. From a multi-relational perspective, we construct a heterogeneous undirected graph and apply the Relational Graph Convolutional Network (RGCN) to capture the multiplex relationship between clauses and the relationship between pairs and clauses. Experimental results show that PBJE achieves state-of-the-art performance on the Chinese benchmark corpus.
doi: 10.18653/v1/2022.emnlp-main.358
__index_level_0__: 27,488 (all other fields in this row are null)
entry_type: inproceedings
citation_key: tang-etal-2022-affective
title: Affective Knowledge Enhanced Multiple-Graph Fusion Networks for Aspect-based Sentiment Analysis
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.359/
author: Tang, Siyu and Chai, Heyan and Yao, Ziyi and Ding, Ye and Gao, Cuiyun and Fang, Binxing and Liao, Qing
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5352--5362
abstract:
Aspect-based sentiment analysis aims to identify sentiment polarity of social media users toward different aspects. Most recent methods adopt the aspect-centric latent tree to connect aspects and their corresponding opinion words, thinking that would facilitate establishing the relationship between aspects and opinion words. However, these methods ignore the roles of syntax dependency relation labels and affective semantic information in determining the sentiment polarity, resulting in the wrong prediction. In this paper, we propose a novel multi-graph fusion network (MGFN) based on latent graph to leverage the richer syntax dependency relation label information and affective semantic information of words. Specifically, we construct a novel syntax-aware latent graph (SaLG) to fully leverage the syntax dependency relation label information to facilitate the learning of sentiment representations. Subsequently, a multi-graph fusion module is proposed to fuse semantic information of surrounding contexts of aspects adaptively. Furthermore, we design an affective refinement strategy to guide the MGFN to capture significant affective clues. Extensive experiments on three datasets demonstrate that our MGFN model outperforms all state-of-the-art methods and verify the effectiveness of our model.
doi: 10.18653/v1/2022.emnlp-main.359
__index_level_0__: 27,489 (all other fields in this row are null)
entry_type: inproceedings
citation_key: kumar-etal-2022-indicnlg
title: {I}ndic{NLG} Benchmark: Multilingual Datasets for Diverse {NLG} Tasks in {I}ndic Languages
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.360/
author: Kumar, Aman and Shrotriya, Himani and Sahu, Prachi and Mishra, Amogh and Dabre, Raj and Puduppully, Ratish and Kunchukuttan, Anoop and Khapra, Mitesh M. and Kumar, Pratyush
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5363--5394
abstract:
Natural Language Generation (NLG) for non-English languages is hampered by the scarcity of datasets in these languages. We present the IndicNLG Benchmark, a collection of datasets for benchmarking NLG for 11 Indic languages. We focus on five diverse tasks, namely, biography generation using Wikipedia infoboxes, news headline generation, sentence summarization, paraphrase generation, and question generation. We describe the created datasets and use them to benchmark the performance of several monolingual and multilingual baselines that leverage pre-trained sequence-to-sequence models. Our results exhibit the strong performance of multilingual language-specific pre-trained models, and the utility of models trained on our dataset for other related NLG tasks. Our dataset creation methods can be easily applied to modest-resource languages as they involve simple steps such as scraping news articles and Wikipedia infoboxes, light cleaning, and pivoting through machine translation data. To the best of our knowledge, the IndicNLG Benchmark is the first NLG benchmark for Indic languages and the most diverse multilingual NLG dataset, with approximately 8M examples across 5 tasks and 11 languages. The datasets and models will be publicly available.
doi: 10.18653/v1/2022.emnlp-main.360
__index_level_0__: 27,490 (all other fields in this row are null)
entry_type: inproceedings
citation_key: batheja-bhattacharyya-2022-improving
title: Improving Machine Translation with Phrase Pair Injection and Corpus Filtering
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.361/
author: Batheja, Akshay and Bhattacharyya, Pushpak
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5395--5400
abstract:
In this paper, we show that the combination of Phrase Pair Injection and Corpus Filtering boosts the performance of Neural Machine Translation (NMT) systems. We extract parallel phrases and sentences from the pseudo-parallel corpus and augment it with the parallel corpus to train the NMT models. With the proposed approach, we observe an improvement in the Machine Translation (MT) system for 3 low-resource language pairs, Hindi-Marathi, English-Marathi, and English-Pashto, and 6 translation directions by up to 2.7 BLEU points, on the FLORES test data. These BLEU score improvements are over the models trained using the whole pseudo-parallel corpus augmented with the parallel corpus.
doi: 10.18653/v1/2022.emnlp-main.361
__index_level_0__: 27,491 (all other fields in this row are null)
entry_type: inproceedings
citation_key: wang-etal-2022-anchor
title: An Anchor-based Relative Position Embedding Method for Cross-Modal Tasks
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.362/
author: Wang, Ya and Sun, Xingwu and Fengzong, Lian and Kang, ZhanHui and Xu, Chengzhong Xu
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5401--5413
abstract:
Position Embedding (PE) is essential for transformer to capture the sequence ordering of input tokens. Despite its general effectiveness verified in Natural Language Processing (NLP) and Computer Vision (CV), its application in cross-modal tasks remains unexplored and suffers from two challenges: 1) the input text tokens and image patches are not aligned, 2) the encoding space of each modality is different, making it unavailable for feature comparison. In this paper, we propose a unified position embedding method for these problems, called AnChor-basEd Relative Position Embedding (ACE-RPE), in which we first introduce an anchor locating mechanism to bridge the semantic gap and locate anchors from different modalities. Then we conduct the distance calculation of each text token and image patch by computing their shortest paths from the located anchors. Last, we embed the anchor-based distance to guide the computation of cross-attention. In this way, it calculates cross-modal relative position embedding for cross-modal transformer. Benefiting from ACE-RPE, our method obtains new SOTA results on a wide range of benchmarks, such as Image-Text Retrieval on MS-COCO and Flickr30K, Visual Entailment on SNLI-VE, Visual Reasoning on NLVR2 and Weakly-supervised Visual Grounding on RefCOCO+.
doi: 10.18653/v1/2022.emnlp-main.362
__index_level_0__: 27,492 (all other fields in this row are null)
entry_type: inproceedings
citation_key: lu-zhang-2022-norm
title: Norm-based Noisy Corpora Filtering and Refurbishing in Neural Machine Translation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.363/
author: Lu, Yu and Zhang, Jiajun
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5414--5425
abstract:
Recent advances in neural machine translation depend on massive parallel corpora, which are collected from any open source without much guarantee of quality. It stresses the need for noisy corpora filtering, but existing methods are insufficient to solve this issue. They spend much time ensembling multiple scorers trained on clean bitexts, unavailable for low-resource languages in practice. In this paper, we propose a norm-based noisy corpora filtering and refurbishing method with no external data and costly scorers. The noisy and clean samples are separated based on how much information from the source and target sides the model requires to fit the given translation. For the unparallel sentence, the target-side history translation is much more important than the source context, contrary to the parallel ones. The amount of these two information flows can be measured by norms of source-/target-side context vectors. Moreover, we propose to reuse the discovered noisy data by generating pseudo labels via online knowledge distillation. Extensive experiments show that our proposed filtering method performs comparably with state-of-the-art noisy corpora filtering techniques but is more efficient and easier to operate. Noisy sample refurbishing further enhances the performance by making the most of the given data.
doi: 10.18653/v1/2022.emnlp-main.363
__index_level_0__: 27,493 (all other fields in this row are null)
entry_type: inproceedings
citation_key: ju-etal-2022-telemelody
title: {T}ele{M}elody: Lyric-to-Melody Generation with a Template-Based Two-Stage Method
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.364/
author: Ju, Zeqian and Lu, Peiling and Tan, Xu and Wang, Rui and Zhang, Chen and Wu, Songruoyao and Zhang, Kejun and Li, Xiang-Yang and Qin, Tao and Liu, Tie-Yan
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5426--5437
abstract:
Lyric-to-melody generation is an important task in automatic songwriting. Previous lyric-to-melody generation systems usually adopt end-to-end models that directly generate melodies from lyrics, which suffer from several issues: 1) lack of paired lyric-melody training data; 2) lack of control on generated melodies. In this paper, we develop TeleMelody, a two-stage lyric-to-melody generation system with music template (e.g., tonality, chord progression, rhythm pattern, and cadence) to bridge the gap between lyrics and melodies (i.e., the system consists of a lyric-to-template module and a template-to-melody module). TeleMelody has two advantages. First, it is data efficient. The template-to-melody module is trained in a self-supervised way (i.e., the source template is extracted from the target melody) that does not need any lyric-melody paired data. The lyric-to-template module is made up of some rules and a lyric-to-rhythm model, which is trained with paired lyric-rhythm data that is easier to obtain than paired lyric-melody data. Second, it is controllable. The design of the template ensures that the generated melodies can be controlled by adjusting the musical elements in the template. Both subjective and objective experimental evaluations demonstrate that TeleMelody generates melodies with higher quality, better controllability, and less requirement on paired lyric-melody data than previous generation systems.
doi: 10.18653/v1/2022.emnlp-main.364
__index_level_0__: 27,494 (all other fields in this row are null)
entry_type: inproceedings
citation_key: lin-etal-2022-seen
title: {SEEN}: Structured Event Enhancement Network for Explainable Need Detection of Information Recall Assistance
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.365/
author: Lin, You-En and Yen, An-Zi and Huang, Hen-Hsen and Chen, Hsin-Hsi
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5438--5451
abstract:
When recalling life experiences, people often forget or confuse life events, which necessitates information recall services. Previous work on information recall focuses on providing such assistance reactively, i.e., by retrieving the life event of a given query. Proactively detecting the need for information recall services is rarely discussed. In this paper, we use a human-annotated life experience retelling dataset to detect the right time to trigger the information recall service. We propose a pilot model{---}structured event enhancement network (SEEN) that detects life event inconsistency, additional information in life events, and forgotten events. A fusing mechanism is also proposed to incorporate event graphs of stories and enhance the textual representations. To explain the need detection results, SEEN simultaneously provides support evidence by selecting the related nodes from the event graph. Experimental results show that SEEN achieves promising performance in detecting information needs. In addition, the extracted evidence can be served as complementary information to remind users what events they may want to recall.
doi: 10.18653/v1/2022.emnlp-main.365
__index_level_0__: 27,495 (all other fields in this row are null)
entry_type: inproceedings
citation_key: cho-etal-2022-rethinking
title: Rethinking Style Transformer with Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.366/
author: Cho, Hojun and Kim, Dohee and Ryu, Seungwoo and Park, ChaeHun and Noh, Hyungjong and Hwang, Jeong-in and Choi, Minseok and Choi, Edward and Choo, Jaegul
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5452--5467
abstract:
Style control, content preservation, and fluency determine the quality of text style transfer models. To train on a nonparallel corpus, several existing approaches aim to deceive the style discriminator with an adversarial loss. However, adversarial training significantly degrades fluency compared to the other two metrics. In this work, we explain this phenomenon using energy-based interpretation, and leverage a pretrained language model to improve fluency. Specifically, we propose a novel approach which applies the pretrained language model to the text style transfer framework by restructuring the discriminator and the model itself, allowing the generator and the discriminator to also take advantage of the power of the pretrained model. We evaluated our model on three public benchmarks GYAFC, Amazon, and Yelp and achieved state-of-the-art performance on the overall metrics.
doi: 10.18653/v1/2022.emnlp-main.366
__index_level_0__: 27,496 (all other fields in this row are null)
entry_type: inproceedings
citation_key: jiang-etal-2022-towards
title: Towards Robust k-Nearest-Neighbor Machine Translation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.367/
author: Jiang, Hui and Lu, Ziyao and Meng, Fandong and Zhou, Chulun and Zhou, Jie and Huang, Degen and Su, Jinsong
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5468--5477
abstract:
k-Nearest-Neighbor Machine Translation (kNN-MT) has become an important research direction in NMT in recent years. Its main idea is to retrieve useful key-value pairs from an additional datastore to modify translations without updating the NMT model. However, the underlying retrieved noisy pairs will dramatically deteriorate the model performance. In this paper, we conduct a preliminary study and find that this problem results from not fully exploiting the prediction of the NMT model. To alleviate the impact of noise, we propose a confidence-enhanced kNN-MT model with robust training. Concretely, we introduce the NMT confidence to refine the modeling of two important components of kNN-MT: kNN distribution and the interpolation weight. Meanwhile, we inject two types of perturbations into the retrieved pairs for robust training. Experimental results on four benchmark datasets demonstrate that our model not only achieves significant improvements over current kNN-MT models, but also exhibits better robustness. Our code is available at https://github.com/DeepLearnXMU/Robust-knn-mt.
doi: 10.18653/v1/2022.emnlp-main.367
__index_level_0__: 27,497 (all other fields in this row are null)
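As a rough illustration of the interpolation that kNN-MT performs, and that the abstract above refines with NMT confidence, the following sketch blends the two distributions. The confidence-based gating shown here is a simplified assumption for illustration, not the authors' actual formulation.

import torch

def knn_mt_interpolation(p_nmt: torch.Tensor,
                         p_knn: torch.Tensor,
                         nmt_confidence: torch.Tensor,
                         base_lambda: float = 0.5) -> torch.Tensor:
    """Blend NMT and kNN distributions over the target vocabulary.

    p_nmt, p_knn: (batch, vocab) probability distributions.
    nmt_confidence: (batch,) max NMT probability, used here as a crude
    confidence signal; the paper learns this weighting instead.
    """
    # When the NMT model is confident, trust the retrieved neighbors less.
    lam = base_lambda * (1.0 - nmt_confidence).unsqueeze(-1)   # (batch, 1)
    return lam * p_knn + (1.0 - lam) * p_nmt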
entry_type: inproceedings
citation_key: yu-etal-2022-tiny
title: Tiny-{N}ews{R}ec: Effective and Efficient {PLM}-based News Recommendation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.368/
author: Yu, Yang and Wu, Fangzhao and Wu, Chuhan and Yi, Jingwei and Liu, Qi
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5478--5489
abstract:
News recommendation is a widely adopted technique to provide personalized news feeds for the user. Recently, pre-trained language models (PLMs) have demonstrated the great capability of natural language understanding and benefited news recommendation via improving news modeling. However, most existing works simply finetune the PLM with the news recommendation task, which may suffer from the known domain shift problem between the pre-training corpus and downstream news texts. Moreover, PLMs usually contain a large volume of parameters and have high computational overhead, which imposes a great burden on low-latency online services. In this paper, we propose Tiny-NewsRec, which can improve both the effectiveness and the efficiency of PLM-based news recommendation. We first design a self-supervised domain-specific post-training method to better adapt the general PLM to the news domain with a contrastive matching task between news titles and news bodies. We further propose a two-stage knowledge distillation method to improve the efficiency of the large PLM-based news recommendation model while maintaining its performance. Multiple teacher models originated from different time steps of our post-training procedure are used to transfer comprehensive knowledge to the student model in both its post-training stage and finetuning stage. Extensive experiments on two real-world datasets validate the effectiveness and efficiency of our method.
doi: 10.18653/v1/2022.emnlp-main.368
__index_level_0__: 27,498 (all other fields in this row are null)
entry_type: inproceedings
citation_key: choi-etal-2022-tabs
title: {TABS}: Efficient Textual Adversarial Attack for Pre-trained {NL} Code Model Using Semantic Beam Search
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.369/
author: Choi, YunSeok and Kim, Hyojun and Lee, Jee-Hyong
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5490--5498
abstract:
As pre-trained models have shown successful performance in program language processing as well as natural language processing, adversarial attacks on these models also attract attention. However, previous works on black-box adversarial attacks generated adversarial examples in a very inefficient way with simple greedy search. They also failed to find out better adversarial examples because it was hard to reduce the search space without performance loss. In this paper, we propose TABS, an efficient beam search black-box adversarial attack method. We adopt beam search to find out better adversarial examples, and contextual semantic filtering to effectively reduce the search space. Contextual semantic filtering reduces the number of candidate adversarial words considering the surrounding context and the semantic similarity. Our proposed method shows good performance in terms of attack success rate, the number of queries, and semantic similarity in attacking models for two tasks: NL code search classification and retrieval tasks.
doi: 10.18653/v1/2022.emnlp-main.369
__index_level_0__: 27,499 (all other fields in this row are null)
entry_type: inproceedings
citation_key: liu-etal-2022-investigating
title: Investigating the Robustness of Natural Language Generation from Logical Forms via Counterfactual Samples
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.370/
author: Liu, Chengyuan and Gan, Leilei and Kuang, Kun and Wu, Fei
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5499--5512
abstract:
The aim of Logic2Text is to generate controllable and faithful texts conditioned on tables and logical forms, which not only requires a deep understanding of the tables and logical forms, but also warrants symbolic reasoning over the tables according to the logical forms. State-of-the-art methods based on pre-trained models have achieved remarkable performance on the standard test dataset. However, we question whether these methods really learn how to perform logical reasoning, rather than just relying on the spurious correlations between the headers of the tables and operators of the logical form. To verify this hypothesis, we manually construct a set of counterfactual samples, which modify the original logical forms to generate counterfactual logical forms with rare co-occurred headers and operators and corresponding counterfactual references. SOTA methods give much worse results on these counterfactual samples compared with the results on the original test dataset, which verifies our hypothesis. To deal with this problem, we firstly analyze this bias from a causal perspective, based on which we propose two approaches to reduce the model's reliance on the shortcut. The first one incorporates the hierarchical structure of the logical forms into the model. The second one exploits automatically generated counterfactual data for training. Automatic and manual experimental results on the original test dataset and counterfactual dataset show that our method is effective to alleviate the spurious correlation. Our work points out the weakness of current methods and takes a further step toward developing Logic2Text models with real logical reasoning ability.
doi: 10.18653/v1/2022.emnlp-main.370
__index_level_0__: 27,500 (all other fields in this row are null)
entry_type: inproceedings
citation_key: wang-etal-2022-helping
title: Helping the Weak Makes You Strong: Simple Multi-Task Learning Improves Non-Autoregressive Translators
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.371/
author: Wang, Xinyou and Zheng, Zaixiang and Huang, Shujian
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5513--5519
abstract:
Recently, non-autoregressive (NAR) neural machine translation models have received increasing attention due to their efficient parallel decoding. However, the probabilistic framework of NAR models necessitates conditional independence assumption on target sequences, falling short of characterizing human language data. This drawback results in less informative learning signals for NAR models under conventional MLE training, thereby yielding unsatisfactory accuracy compared to their autoregressive (AR) counterparts. In this paper, we propose a simple and model-agnostic multi-task learning framework to provide more informative learning signals. During training stage, we introduce a set of sufficiently weak AR decoders that solely rely on the information provided by NAR decoder to make prediction, forcing the NAR decoder to become stronger or else it will be unable to support its weak AR partners. Experiments on WMT and IWSLT datasets show that our approach can consistently improve accuracy of multiple NAR baselines without adding any additional decoding overhead.
doi: 10.18653/v1/2022.emnlp-main.371
__index_level_0__: 27,501 (all other fields in this row are null)
entry_type: inproceedings
citation_key: shi-etal-2022-race
title: {RACE}: Retrieval-augmented Commit Message Generation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.372/
author: Shi, Ensheng and Wang, Yanlin and Tao, Wei and Du, Lun and Zhang, Hongyu and Han, Shi and Zhang, Dongmei and Sun, Hongbin
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5520--5530
abstract:
Commit messages are important for software development and maintenance. Many neural network-based approaches have been proposed and shown promising results on automatic commit message generation. However, the generated commit messages could be repetitive or redundant. In this paper, we propose RACE, a new retrieval-augmented neural commit message generation method, which treats the retrieved similar commit as an exemplar and leverages it to generate an accurate commit message. As the retrieved commit message may not always accurately describe the content/intent of the current code diff, we also propose an exemplar guider, which learns the semantic similarity between the retrieved and current code diff and then guides the generation of commit message based on the similarity. We conduct extensive experiments on a large public dataset with five programming languages. Experimental results show that RACE can outperform all baselines. Furthermore, RACE can boost the performance of existing Seq2Seq models in commit message generation.
doi: 10.18653/v1/2022.emnlp-main.372
__index_level_0__: 27,502 (all other fields in this row are null)
entry_type: inproceedings
citation_key: liu-etal-2022-plog
title: {PLOG}: Table-to-Logic Pretraining for Logical Table-to-Text Generation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.373/
author: Liu, Ao and Dong, Haoyu and Okazaki, Naoaki and Han, Shi and Zhang, Dongmei
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5531--5546
abstract:
Logical table-to-text generation is a task that involves generating logically faithful sentences from tables, which requires models to derive logical-level facts from table records via logical inference. It raises a new challenge on the logical-level content planning of table-to-text models. However, directly learning the logical inference knowledge from table-text pairs is very difficult for neural models because of the ambiguity of natural language and the scarcity of parallel data. Hence even large-scale pre-trained language models present low logical fidelity on logical table-to-text. In this work, we propose a Pretrained Logical Form Generator (PLOG) framework to improve generation fidelity. Specifically, PLOG is first pretrained on a table-to-logical-form generation (table-to-logic) task, then finetuned on downstream table-to-text tasks. The logical forms are formally defined with unambiguous semantics. Hence we can collect a large amount of accurate logical forms from tables without human annotation. In addition, PLOG can learn logical inference from table-logic pairs much more reliably than from table-text pairs. To evaluate our model, we further collect a controlled logical table-to-text dataset CONTLOG based on an existing dataset. On two benchmarks, LOGICNLG and CONTLOG, PLOG outperforms strong baselines by a large margin on the logical fidelity, demonstrating the effectiveness of table-to-logic pretraining.
doi: 10.18653/v1/2022.emnlp-main.373
__index_level_0__: 27,503 (all other fields in this row are null)
entry_type: inproceedings
citation_key: yu-etal-2022-ghan
title: {GHAN}: Graph-Based Hierarchical Aggregation Network for Text-Video Retrieval
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.374/
author: Yu, Yahan and Hu, Bojie and Li, Yu
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5547--5557
abstract:
Text-video retrieval focuses on two aspects: cross-modality interaction and video-language encoding. Currently, the mainstream approach is to train a joint embedding space for multimodal interactions. However, there are structural and semantic differences between text and video, making this approach challenging for fine-grained understanding. In order to solve this, we propose an end-to-end graph-based hierarchical aggregation network for text-video retrieval according to the hierarchy possessed by text and video. We design a token-level weighted network to refine intra-modality representations and construct a graph-based message passing attention network for global-local alignment across modality. We conduct experiments on the public datasets MSR-VTT-9K, MSR-VTT-7K and MSVD, and achieve Recall@1 of 73.0{\%}, 65.6{\%}, and 64.0{\%} , which is 25.7{\%}, 16.5{\%}, and 14.2{\%} better than the current state-of-the-art model.
doi: 10.18653/v1/2022.emnlp-main.374
__index_level_0__: 27,504 (all other fields in this row are null)
entry_type: inproceedings
citation_key: chen-etal-2022-murag
title: {M}u{RAG}: Multimodal Retrieval-Augmented Generator for Open Question Answering over Images and Text
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.375/
author: Chen, Wenhu and Hu, Hexiang and Chen, Xi and Verga, Pat and Cohen, William
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5558--5570
abstract:
While language Models store a massive amount of world knowledge implicitly in their parameters, even very large models often fail to encode information about rare entities and events, while incurring huge computational costs. Recently, retrieval-augmented models, such as REALM, RAG, and RETRO, have incorporated world knowledge into language generation by leveraging an external non-parametric index and have demonstrated impressive performance with constrained model sizes. However, these methods are restricted to retrieving only textual knowledge, neglecting the ubiquitous amount of knowledge in other modalities like images {--} much of which contains information not covered by any text. To address this limitation, we propose the first Multimodal Retrieval-Augmented Transformer (MuRAG), which accesses an external non-parametric multimodal memory to augment language generation. MuRAG is pre-trained with a mixture of large-scale image-text and text-only corpora using a joint contrastive and generative loss. We perform experiments on two different datasets that require retrieving and reasoning over both images and text to answer a given query: WebQA, and MultimodalQA. Our results show that MuRAG achieves state-of-the-art accuracy, outperforming existing models by 10-20{\%} absolute on both datasets and under both distractor and full-wiki settings.
doi: 10.18653/v1/2022.emnlp-main.375
__index_level_0__: 27,505 (all other fields in this row are null)
entry_type: inproceedings
citation_key: sun-etal-2022-phee
title: {PHEE}: A Dataset for Pharmacovigilance Event Extraction from Text
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.376/
author: Sun, Zhaoyue and Li, Jiazheng and Pergola, Gabriele and Wallace, Byron and John, Bino and Greene, Nigel and Kim, Joseph and He, Yulan
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5571--5587
abstract:
The primary goal of drug safety researchers and regulators is to promptly identify adverse drug reactions. Doing so may in turn prevent or reduce the harm to patients and ultimately improve public health. Evaluating and monitoring drug safety (i.e., pharmacovigilance) involves analyzing an ever growing collection of spontaneous reports from health professionals, physicians, and pharmacists, and information voluntarily submitted by patients. In this scenario, facilitating analysis of such reports via automation has the potential to rapidly identify safety signals. Unfortunately, public resources for developing natural language models for this task are scant. We present PHEE, a novel dataset for pharmacovigilance comprising over 5000 annotated events from medical case reports and biomedical literature, making it the largest such public dataset to date. We describe the hierarchical event schema designed to provide coarse and fine-grained information about patients' demographics, treatments and (side) effects. Along with the discussion of the dataset, we present a thorough experimental evaluation of current state-of-the-art approaches for biomedical event extraction, point out their limitations, and highlight open challenges to foster future research in this area.
doi: 10.18653/v1/2022.emnlp-main.376
__index_level_0__: 27,506 (all other fields in this row are null)
entry_type: inproceedings
citation_key: cao-zhang-2022-otseq2set
title: {OTS}eq2{S}et: An Optimal Transport Enhanced Sequence-to-Set Model for Extreme Multi-label Text Classification
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.377/
author: Cao, Jie and Zhang, Yin
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5588--5597
abstract:
Extreme multi-label text classification (XMTC) is the task of finding the most relevant subset labels from an extremely large-scale label collection. Recently, some deep learning models have achieved state-of-the-art results in XMTC tasks. These models commonly predict scores for all labels by a fully connected layer as the last layer of the model. However, such models can't predict a relatively complete and variable-length label subset for each document, because they select positive labels relevant to the document by a fixed threshold or take top k labels in descending order of scores. A less popular type of deep learning models called sequence-to-sequence (Seq2Seq) focus on predicting variable-length positive labels in sequence style. However, the labels in XMTC tasks are essentially an unordered set rather than an ordered sequence; the default order of labels restrains Seq2Seq models in training. To address this limitation in Seq2Seq, we propose an autoregressive sequence-to-set model for XMTC tasks named OTSeq2Set. Our model generates predictions in student-forcing scheme and is trained by a loss function based on bipartite matching which enables permutation-invariance. Meanwhile, we use the optimal transport distance as a measurement to force the model to focus on the closest labels in semantic label space. Experiments show that OTSeq2Set outperforms other competitive baselines on 4 benchmark datasets. Especially, on the Wikipedia dataset with 31k labels, it outperforms the state-of-the-art Seq2Seq method by 16.34{\%} in micro-F1 score. The code is available at https://github.com/caojie54/OTSeq2Set.
doi: 10.18653/v1/2022.emnlp-main.377
__index_level_0__: 27,507 (all other fields in this row are null)
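The bipartite-matching idea mentioned in the abstract above (a loss that is invariant to the order of predicted labels) can be illustrated with a generic sketch using scipy's Hungarian solver. This is a generic set-matching loss under assumed shapes, not the paper's exact objective.

import numpy as np
from scipy.optimize import linear_sum_assignment

def set_matching_loss(pred_label_probs, gold_labels):
    """Permutation-invariant loss between predicted label slots and gold labels.

    pred_label_probs: (num_slots, num_labels) predicted probabilities per output slot.
    gold_labels: list of gold label ids (order does not matter).
    """
    eps = 1e-9
    # Cost of assigning gold label j to prediction slot i: negative log-probability.
    cost = -np.log(pred_label_probs[:, gold_labels] + eps)   # (num_slots, num_gold)
    rows, cols = linear_sum_assignment(cost)                 # optimal bipartite matching
    return float(cost[rows, cols].mean())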
entry_type: inproceedings
citation_key: han-etal-2022-simqa
title: {S}im{QA}: Detecting Simultaneous {MT} Errors through Word-by-Word Question Answering
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.378/
author: Han, HyoJung and Carpuat, Marine and Boyd-Graber, Jordan
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5598--5616
abstract:
Detractors of neural machine translation admit that while its translations are fluent, it sometimes gets key facts wrong. This is particularly important in simultaneous interpretation where translations have to be provided as fast as possible: before a sentence is complete. Yet, evaluations of simultaneous machine translation (SimulMT) fail to capture if systems correctly translate the most salient elements of a question: people, places, and dates. To address this problem, we introduce a downstream word-by-word question answering evaluation task (SimQA): given a source language question, translate the question word by word into the target language, and answer as soon as possible. SimQA jointly measures whether the SimulMT models translate the question quickly and accurately, and can reveal shortcomings in existing neural systems{---}hallucinating or omitting facts.
doi: 10.18653/v1/2022.emnlp-main.378
__index_level_0__: 27,508 (all other fields in this row are null)
entry_type: inproceedings
citation_key: xie-etal-2022-discovering
title: Discovering Low-rank Subspaces for Language-agnostic Multilingual Representations
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.379/
author: Xie, Zhihui and Zhao, Handong and Yu, Tong and Li, Shuai
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5617--5633
abstract:
Large pretrained multilingual language models (ML-LMs) have shown remarkable capabilities of zero-shot cross-lingual transfer, without direct cross-lingual supervision. While these results are promising, follow-up works found that, within the multilingual embedding spaces, there exists strong language identity information which hinders the expression of linguistic factors shared across languages. For semantic tasks like cross-lingual sentence retrieval, it is desired to remove such language identity signals to fully leverage semantic information. In this work, we provide a novel view of projecting away language-specific factors from a multilingual embedding space. Specifically, we discover that there exists a low-rank subspace that primarily encodes information irrelevant to semantics (e.g., syntactic information). To identify this subspace, we present a simple but effective unsupervised method based on singular value decomposition with multiple monolingual corpora as input. Once the subspace is found, we can directly project the original embeddings into the null space to boost language agnosticism without finetuning. We systematically evaluate our method on various tasks including the challenging language-agnostic QA retrieval task. Empirical results show that applying our method consistently leads to improvements over commonly used ML-LMs.
doi: 10.18653/v1/2022.emnlp-main.379
__index_level_0__: 27,509 (all other fields in this row are null)
entry_type: inproceedings
citation_key: brad-etal-2022-rethinking
title: Rethinking the Authorship Verification Experimental Setups
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.380/
author: Brad, Florin and Manolache, Andrei and Burceanu, Elena and Barbalau, Antonio and Ionescu, Radu Tudor and Popescu, Marius
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5634--5643
abstract:
One of the main drivers of the recent advances in authorship verification is the PAN large-scale authorship dataset. Despite generating significant progress in the field, inconsistent performance differences between the closed and open test sets have been reported. To this end, we improve the experimental setup by proposing five new public splits over the PAN dataset, specifically designed to isolate and identify biases related to the text topic and to the author's writing style. We evaluate several BERT-like baselines on these splits, showing that such models are competitive with authorship verification state-of-the-art methods. Furthermore, using explainable AI, we find that these baselines are biased towards named entities. We show that models trained without the named entities obtain better results and generalize better when tested on DarkReddit, our new dataset for authorship verification.
doi: 10.18653/v1/2022.emnlp-main.380
__index_level_0__: 27,510 (all other fields in this row are null)
entry_type: inproceedings
citation_key: xu-li-2022-borrowing
title: Borrowing Human Senses: Comment-Aware Self-Training for Social Media Multimodal Classification
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.381/
author: Xu, Chunpu and Li, Jing
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5644--5656
abstract:
Social media is daily creating massive multimedia content with paired image and text, presenting the pressing need to automate the vision and language understanding for various multimodal classification tasks. Compared to the commonly researched visual-lingual data, social media posts tend to exhibit more implicit image-text relations. To better glue the cross-modal semantics therein, we capture hinting features from user comments, which are retrieved via jointly leveraging visual and lingual similarity. Afterwards, the classification tasks are explored via self-training in a teacher-student framework, motivated by the usually limited labeled data scales in existing benchmarks. Substantial experiments are conducted on four multimodal social media benchmarks for image-text relation classification, sarcasm detection, sentiment classification, and hate speech detection. The results show that our method further advances the performance of previous state-of-the-art models, which do not employ comment modeling or self-training.
doi: 10.18653/v1/2022.emnlp-main.381
__index_level_0__: 27,511 (all other fields in this row are null)
entry_type: inproceedings
citation_key: zhong-etal-2022-training
title: Training Language Models with Memory Augmentation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.382/
author: Zhong, Zexuan and Lei, Tao and Chen, Danqi
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5657--5673
abstract:
Recent work has improved language models (LMs) remarkably by equipping them with a non-parametric memory component. However, most existing approaches only introduce memories at testing time or represent them using a separately trained encoder, resulting in suboptimal training of the language model. In this work, we present TRIME, a novel yet simple training approach designed for training LMs with memory augmentation. Our approach uses a training objective that directly takes in-batch examples as accessible memory. We also present new methods for memory construction and data batching, which are used for adapting to different sets of memories{---}local, long-term, and external memory{---}at testing time. We evaluate TRIME on multiple language modeling and machine translation benchmarks and show that it is able to achieve significant improvements across all the settings. Concretely, TRIME reduces the perplexity from 18.70 to 15.37 on WIKITEXT-103, by effectively leveraging a large memory set from the training corpus. Compared to standard LM training, TRIME adds negligible computational overhead and is compatible with different neural architectures, making it a versatile solution for training memory-augmented LMs.
doi: 10.18653/v1/2022.emnlp-main.382
__index_level_0__: 27,512 (all other fields in this row are null)
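The idea of treating in-batch examples as accessible memory during training, as described in the abstract above, can be sketched as a loss whose normalization runs over both the vocabulary and in-batch contexts. The exact TRIME objective differs; this is only an illustrative approximation under assumed tensor shapes.

import torch

def in_batch_memory_lm_loss(hidden, targets, vocab_emb):
    """LM loss whose softmax also ranges over in-batch "memories" (other positions
    in the same batch), in the spirit of memory-augmented training.

    hidden:    (N, d) context representations for N positions in the batch.
    targets:   (N,)   gold next-token ids (long tensor).
    vocab_emb: (V, d) output token embeddings.
    """
    vocab_logits = hidden @ vocab_emb.T                       # (N, V)
    mem_logits = hidden @ hidden.T                            # (N, N) similarity to in-batch contexts
    mem_logits.fill_diagonal_(float("-inf"))                  # a position cannot use itself as memory
    # Positive mass: the gold token's embedding plus every memory whose target matches it.
    pos_mask = targets.unsqueeze(0) == targets.unsqueeze(1)   # (N, N)
    pos_mem = mem_logits.masked_fill(~pos_mask, float("-inf"))
    pos = torch.cat([vocab_logits.gather(1, targets[:, None]), pos_mem], dim=1)
    denom = torch.cat([vocab_logits, mem_logits], dim=1)
    return (torch.logsumexp(denom, dim=1) - torch.logsumexp(pos, dim=1)).mean()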
entry_type: inproceedings
citation_key: rottger-etal-2022-data
title: Data-Efficient Strategies for Expanding Hate Speech Detection into Under-Resourced Languages
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.383/
author: R{\"o}ttger, Paul and Nozza, Debora and Bianchi, Federico and Hovy, Dirk
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5674--5691
abstract:
Hate speech is a global phenomenon, but most hate speech datasets so far focus on English-language content. This hinders the development of more effective hate speech detection models in hundreds of languages spoken by billions across the world. More data is needed, but annotating hateful content is expensive, time-consuming and potentially harmful to annotators. To mitigate these issues, we explore data-efficient strategies for expanding hate speech detection into under-resourced languages. In a series of experiments with mono- and multilingual models across five non-English languages, we find that 1) a small amount of target-language fine-tuning data is needed to achieve strong performance, 2) the benefits of using more such data decrease exponentially, and 3) initial fine-tuning on readily-available English data can partially substitute target-language data and improve model generalisability. Based on these findings, we formulate actionable recommendations for hate speech detection in low-resource language settings.
doi: 10.18653/v1/2022.emnlp-main.383
__index_level_0__: 27,513 (all other fields in this row are null)
entry_type: inproceedings
citation_key: liu-etal-2022-dimension
title: Dimension Reduction for Efficient Dense Retrieval via Conditional Autoencoder
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.384/
author: Liu, Zhenghao and Zhang, Han and Xiong, Chenyan and Liu, Zhiyuan and Gu, Yu and Li, Xiaohua
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5692--5698
abstract:
Dense retrievers encode queries and documents and map them in an embedding space using pre-trained language models. These embeddings need to be high-dimensional to fit training signals and guarantee the retrieval effectiveness of dense retrievers. However, these high-dimensional embeddings lead to larger index storage and higher retrieval latency. To reduce the embedding dimensions of dense retrieval, this paper proposes a Conditional Autoencoder (ConAE) to compress the high-dimensional embeddings to maintain the same embedding distribution and better recover the ranking features. Our experiments show that ConAE is effective in compressing embeddings by achieving comparable ranking performance with its teacher model and making the retrieval system more efficient. Our further analyses show that ConAE can alleviate the redundancy of the embeddings of dense retrieval with only one linear layer. All codes of this work are available at https://github.com/NEUIR/ConAE.
doi: 10.18653/v1/2022.emnlp-main.384
__index_level_0__: 27,514 (all other fields in this row are null)
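A single linear compression layer of the kind the abstract above describes can be sketched as follows. The distillation loss shown (KL divergence between teacher and compressed query-passage score distributions) is an assumption about the general recipe, not the paper's exact losses; dimensions are placeholders.

import torch
import torch.nn.functional as F

class LinearCompressor(torch.nn.Module):
    """Compress high-dimensional retrieval embeddings with one linear layer."""
    def __init__(self, dim_in: int = 768, dim_out: int = 128):
        super().__init__()
        self.proj = torch.nn.Linear(dim_in, dim_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

def ranking_distill_loss(q, docs, compressor):
    """KL between teacher scores (full embeddings) and student scores (compressed)."""
    teacher = F.softmax(q @ docs.T, dim=-1)                              # (B, N)
    student = F.log_softmax(compressor(q) @ compressor(docs).T, dim=-1)  # (B, N)
    return F.kl_div(student, teacher, reduction="batchmean")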
entry_type: inproceedings
citation_key: slobodkin-etal-2022-controlled
title: Controlled Text Reduction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.385/
author: Slobodkin, Aviv and Roit, Paul and Hirsch, Eran and Ernst, Ori and Dagan, Ido
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5699--5715
abstract:
Producing a reduced version of a source text, as in generic or focused summarization, inherently involves two distinct subtasks: deciding on targeted content and generating a coherent text conveying it. While some popular approaches address summarization as a single end-to-end task, prominent works support decomposed modeling for individual subtasks. Further, semi-automated text reduction is also very appealing, where users may identify targeted content while models would generate a corresponding coherent summary. In this paper, we focus on the second subtask, of generating coherent text given pre-selected content. Concretely, we formalize \textit{Controlled Text Reduction} as a standalone task, whose input is a source text with marked spans of targeted content ({\textquotedblleft}highlighting{\textquotedblright}). A model then needs to generate a coherent text that includes all and only the target information. We advocate the potential of such models, both for modular fully-automatic summarization, as well as for semi-automated human-in-the-loop use cases. Facilitating proper research, we crowdsource high-quality dev and test datasets for the task. Further, we automatically generate a larger {\textquotedblleft}silver{\textquotedblright} training dataset from available summarization benchmarks, leveraging a pretrained summary-source alignment model. Finally, employing these datasets, we present a supervised baseline model, showing promising results and insightful analyses.
doi: 10.18653/v1/2022.emnlp-main.385
__index_level_0__: 27,515 (all other fields in this row are null)
entry_type: inproceedings
citation_key: guo-etal-2022-questioning
title: Questioning the Validity of Summarization Datasets and Improving Their Factual Consistency
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.386/
author: Guo, Yanzhu and Clavel, Chlo{\'e} and Kamal Eddine, Moussa and Vazirgiannis, Michalis
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5716--5727
abstract:
The topic of summarization evaluation has recently attracted a surge of attention due to the rapid development of abstractive summarization systems. However, the formulation of the task is rather ambiguous, neither the linguistic nor the natural language processing communities have succeeded in giving a mutually agreed-upon definition. Due to this lack of well-defined formulation, a large number of popular abstractive summarization datasets are constructed in a manner that neither guarantees validity nor meets one of the most essential criteria of summarization: factual consistency. In this paper, we address this issue by combining state-of-the-art factual consistency models to identify the problematic instances present in popular summarization datasets. We release SummFC, a filtered summarization dataset with improved factual consistency, and demonstrate that models trained on this dataset achieve improved performance in nearly all quality aspects. We argue that our dataset should become a valid benchmark for developing and evaluating summarization systems.
doi: 10.18653/v1/2022.emnlp-main.386
__index_level_0__: 27,516 (all other fields in this row are null)
entry_type: inproceedings
citation_key: peyrard-etal-2022-invariant
title: Invariant Language Modeling
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.387/
author: Peyrard, Maxime and Ghotra, Sarvjeet and Josifoski, Martin and Agarwal, Vidhan and Patra, Barun and Carignan, Dean and Kiciman, Emre and Tiwary, Saurabh and West, Robert
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 5728--5743
abstract:
Modern pretrained language models are critical components of NLP pipelines. Yet, they suffer from spurious correlations, poor out-of-domain generalization, and biases. Inspired by recent progress in causal machine learning, in particular the invariant risk minimization (IRM) paradigm, we propose invariant language modeling, a framework for learning invariant representations that generalize better across multiple environments. In particular, we adapt a game-theoretic implementation of IRM (IRM-games) to language models, where the invariance emerges from a specific training schedule in which all the environments compete to optimize their own environment-specific loss by updating subsets of the model in a round-robin fashion. We focused on controlled experiments to precisely demonstrate the ability of our method to (i) remove structured noise, (ii) ignore specific spurious correlations without affecting global performance, and (iii) achieve better out-of-domain generalization. These benefits come with a negligible computational overhead compared to standard training, do not require changing the local loss, and can be applied to any language model. We believe this framework is promising to help mitigate spurious correlations and biases in language models.
null
null
10.18653/v1/2022.emnlp-main.387
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,517
inproceedings
wang-etal-2022-adamix
{A}da{M}ix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.388/
Wang, Yaqing and Agarwal, Sahaj and Mukherjee, Subhabrata and Liu, Xiaodong and Gao, Jing and Awadallah, Ahmed Hassan and Gao, Jianfeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5744--5760
Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters, and storing a large copy of the PLM weights for every task resulting in increased cost for storing, sharing and serving the models. To address this, parameter-efficient fine-tuning (PEFT) techniques were introduced where small trainable components are injected in the PLM and updated during fine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of adaptation modules {--} given the underlying PEFT method of choice {--} introduced in each Transformer layer while keeping most of the PLM weights frozen. For instance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture of low rank decomposition matrices like LoRA to improve downstream task performance over the corresponding PEFT methods for fully supervised and few-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the same computational cost and the number of tunable parameters as the underlying PEFT method. By only tuning 0.1-0.2{\%} of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks.
null
null
10.18653/v1/2022.emnlp-main.388
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,518
inproceedings
wolhandler-etal-2022-multi
How {\textquotedblleft}Multi{\textquotedblright} is Multi-Document Summarization?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.389/
Wolhandler, Ruben and Cattan, Arie and Ernst, Ori and Dagan, Ido
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5761--5769
The task of multi-document summarization (MDS) aims at models that, given multiple documents as input, are able to generate a summary that combines disperse information, originally spread \textit{across} these documents. Accordingly, it is expected that both reference summaries in MDS datasets, as well as system summaries, would indeed be based on such dispersed information. In this paper, we argue for quantifying and assessing this expectation. To that end, we propose an automated measure for evaluating the degree to which a summary is {\textquotedblleft}disperse{\textquotedblright}, in the sense of the number of source documents needed to cover its content. We apply our measure to empirically analyze several popular MDS datasets, with respect to their reference summaries, as well as the output of state-of-the-art systems. Our results show that certain MDS datasets barely require combining information from multiple documents, where a single document often covers the full summary content. Overall, we advocate using our metric for assessing and improving the degree to which summarization datasets require combining multi-document information, and similarly how summarization models actually meet this challenge.
null
null
10.18653/v1/2022.emnlp-main.389
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,519
inproceedings
frisoni-etal-2022-bioreader
{B}io{R}eader: a Retrieval-Enhanced Text-to-Text Transformer for Biomedical Literature
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.390/
Frisoni, Giacomo and Mizutani, Miki and Moro, Gianluca and Valgimigli, Lorenzo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5770--5793
The latest batch of research has equipped language models with the ability to attend over relevant and factual information from non-parametric external sources, drawing a complementary path to architectural scaling. Besides mastering language, exploiting and contextualizing the latent world knowledge is crucial in complex domains like biomedicine. However, most works in the field rely on general-purpose models supported by databases like Wikipedia and Books. We introduce BioReader, the first retrieval-enhanced text-to-text model for biomedical natural language processing. Our domain-specific T5-based solution augments the input prompt by fetching and assembling relevant scientific literature chunks from a neural database with {\ensuremath{\approx}}60 million tokens centered on PubMed. We fine-tune and evaluate BioReader on a broad array of downstream tasks, significantly outperforming several state-of-the-art methods despite using up to 3x fewer parameters. In tandem with extensive ablation studies, we show that domain knowledge can be easily altered or supplemented to make the model generate correct predictions bypassing the retraining step and thus addressing the literature overload issue.
null
null
10.18653/v1/2022.emnlp-main.390
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,520
inproceedings
duquenne-etal-2022-modules
{T}-Modules: Translation Modules for Zero-Shot Cross-Modal Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.391/
Duquenne, Paul-Ambroise and Gong, Hongyu and Sagot, Beno{\^i}t and Schwenk, Holger
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5794--5806
We present a new approach to perform zero-shot cross-modal transfer between speech and text for translation tasks. Multilingual speech and text are encoded in a joint fixed-size representation space. Then, we compare different approaches to decode these multimodal and multilingual fixed-size representations, enabling zero-shot translation between languages and modalities. All our models are trained without the need of cross-modal labeled translation data. Despite a fixed-size representation, we achieve very competitive results on several text and speech translation tasks. In particular, we significantly improve the state-of-the-art for zero-shot speech translation on Must-C. Incorporating a speech decoder in our framework, we introduce the first results for zero-shot direct speech-to-speech and text-to-speech translation.
null
null
10.18653/v1/2022.emnlp-main.391
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,521
inproceedings
mishra-etal-2022-lila
{LILA}: A Unified Benchmark for Mathematical Reasoning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.392/
Mishra, Swaroop and Finlayson, Matthew and Lu, Pan and Tang, Leonard and Welleck, Sean and Baral, Chitta and Rajpurohit, Tanmay and Tafjord, Oyvind and Sabharwal, Ashish and Clark, Peter and Kalyan, Ashwin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5807--5832
Mathematical reasoning skills are essential for general-purpose intelligent systems to perform tasks from grocery shopping to climate modeling. Towards evaluating and improving AI systems in this domain, we propose LILA, a unified mathematical reasoning benchmark consisting of 23 diverse tasks along four dimensions: (i) mathematical abilities, e.g., arithmetic, calculus; (ii) language format, e.g., question-answering, fill-in-the-blanks; (iii) language diversity, e.g., no language, simple language; (iv) external knowledge, e.g., commonsense, physics. We construct our benchmark by extending 20 datasets by collecting task instructions and solutions in the form of Python programs, thereby obtaining explainable solutions in addition to the correct answer. We additionally introduce two evaluation datasets to measure out-of-distribution performance and robustness to language perturbation. Finally, we introduce BHASKARA, a general-purpose mathematical reasoning model trained on LILA. Importantly, we find that multi-tasking leads to significant improvements (average relative improvement of 21.83{\%} F1 score vs. single-task models), while the best performing model only obtains 60.40{\%}, indicating the room for improvement in general mathematical reasoning and understanding.
null
null
10.18653/v1/2022.emnlp-main.392
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,522
inproceedings
hossain-blanco-2022-leveraging
Leveraging Affirmative Interpretations from Negation Improves Natural Language Understanding
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.393/
Hossain, Md Mosharaf and Blanco, Eduardo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5833--5847
Negation poses a challenge in many natural language understanding tasks. Inspired by the fact that understanding a negated statement often requires humans to infer affirmative interpretations, in this paper we show that doing so benefits models for three natural language understanding tasks. We present an automated procedure to collect pairs of sentences with negation and their affirmative interpretations, resulting in over 150,000 pairs. Experimental results show that leveraging these pairs helps (a) T5 generate affirmative interpretations from negations in a previous benchmark, and (b) a RoBERTa-based classifier solve the task of natural language inference. We also leverage our pairs to build a plug-and-play neural generator that given a negated statement generates an affirmative interpretation. Then, we incorporate the pretrained generator into a RoBERTa-based classifier for sentiment analysis and show that doing so improves the results. Crucially, our proposal does not require any manual effort.
null
null
10.18653/v1/2022.emnlp-main.393
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,523
inproceedings
nie-etal-2022-graphq
{G}raph{Q} {IR}: Unifying the Semantic Parsing of Graph Query Languages with One Intermediate Representation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.394/
Nie, Lunyiu and Cao, Shulin and Shi, Jiaxin and Sun, Jiuding and Tian, Qi and Hou, Lei and Li, Juanzi and Zhai, Jidong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5848--5865
Subject to the huge semantic gap between natural and formal languages, neural semantic parsing is typically bottlenecked by its complexity of dealing with both input semantics and output syntax. Recent works have proposed several forms of supplementary supervision but none is generalized across multiple formal languages. This paper proposes a unified intermediate representation for graph query languages, named GraphQ IR. It has a natural-language-like expression that bridges the semantic gap and formally defined syntax that maintains the graph structure. Therefore, a neural semantic parser can more precisely convert user queries into GraphQ IR, which can be later losslessly compiled into various downstream graph query languages. Extensive experiments on several benchmarks including KQA Pro, Overnight, GrailQA, and MetaQA-Cypher under the standard i.i.d., out-of-distribution, and low-resource settings validate GraphQ IR's superiority over the previous state-of-the-arts with a maximum 11{\%} accuracy improvement.
null
null
10.18653/v1/2022.emnlp-main.394
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,524
inproceedings
sadeq-etal-2022-informask
{I}nfor{M}ask: Unsupervised Informative Masking for Language Model Pretraining
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.395/
Sadeq, Nafis and Xu, Canwen and McAuley, Julian
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5866--5878
Masked language modeling is widely used for pretraining large language models for natural language understanding (NLU). However, random masking is suboptimal, allocating an equal masking rate for all tokens. In this paper, we propose InforMask, a new unsupervised masking strategy for training masked language models. InforMask exploits Pointwise Mutual Information (PMI) to select the most informative tokens to mask. We further propose two optimizations for InforMask to improve its efficiency. With a one-off preprocessing step, InforMask outperforms random masking and previously proposed masking strategies on the factual recall benchmark LAMA and the question answering benchmarks SQuAD v1 and v2.
null
null
10.18653/v1/2022.emnlp-main.395
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,525
inproceedings
he-etal-2022-ctrlsum
{CTRL}sum: Towards Generic Controllable Text Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.396/
He, Junxian and Kryscinski, Wojciech and McCann, Bryan and Rajani, Nazneen and Xiong, Caiming
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5879--5915
Current summarization systems yield generic summaries that are disconnected from users' preferences and expectations. To address this limitation, we present CTRLsum, a generic framework to control generated summaries through a set of keywords. During training keywords are extracted automatically without requiring additional human annotations. At test time CTRLsum features a control function to map control signal to keywords; through engineering the control function, the same trained model is able to be applied to control summaries on various dimensions, while neither affecting the model training process nor the pretrained models. We additionally explore the combination of keywords and text prompts for more control tasks. Experiments demonstrate the effectiveness of CTRLsum on three domains of summarization datasets and five control tasks: (1) entity-centric and (2) length-controllable summarization, (3) contribution summarization on scientific papers, (4) invention purpose summarization on patent filings, and (5) question-guided summarization on news articles. Moreover, when used in a standard, unconstrained summarization setting, CTRLsum is comparable or better than strong pretrained systems.
null
null
10.18653/v1/2022.emnlp-main.396
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,526
inproceedings
glockner-etal-2022-missing
Missing Counter-Evidence Renders {NLP} Fact-Checking Unrealistic for Misinformation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.397/
Glockner, Max and Hou, Yufang and Gurevych, Iryna
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5916--5936
Misinformation emerges in times of uncertainty when credible information is limited. This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available. Despite increasing interest in automatic fact-checking, it is still unclear if automated approaches can realistically refute harmful real-world misinformation. Here, we contrast and compare NLP fact-checking with how professional fact-checkers combat misinformation in the absence of counter-evidence. In our analysis, we show that, by design, existing NLP task definitions for fact-checking cannot refute misinformation as professional fact-checkers do for the majority of claims. We then define two requirements that the evidence in datasets must fulfill for realistic fact-checking: It must be (1) sufficient to refute the claim and (2) not leaked from existing fact-checking articles. We survey existing fact-checking datasets and find that all of them fail to satisfy both criteria. Finally, we perform experiments to demonstrate that models trained on a large-scale fact-checking dataset rely on leaked evidence, which makes them unsuitable in real-world scenarios. Taken together, we show that current NLP fact-checking cannot realistically combat real-world misinformation because it depends on unrealistic assumptions about counter-evidence in the data.
null
null
10.18653/v1/2022.emnlp-main.397
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,527
inproceedings
lovelace-rose-2022-framework
A Framework for Adapting Pre-Trained Language Models to Knowledge Graph Completion
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.398/
Lovelace, Justin and Ros{\'e}, Carolyn
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5937--5955
Recent work has demonstrated that entity representations can be extracted from pre-trained language models to develop knowledge graph completion models that are more robust to the naturally occurring sparsity found in knowledge graphs. In this work, we conduct a comprehensive exploration of how to best extract and incorporate those embeddings into knowledge graph completion models. We explore the suitability of the extracted embeddings for direct use in entity ranking and introduce both unsupervised and supervised processing methods that can lead to improved downstream performance. We then introduce supervised embedding extraction methods that can extract more informative representations. We then synthesize our findings and develop a knowledge graph completion model that significantly outperforms recent neural models.
null
null
10.18653/v1/2022.emnlp-main.398
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,528
inproceedings
van-der-poel-etal-2022-mutual
Mutual Information Alleviates Hallucinations in Abstractive Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.399/
van der Poel, Liam and Cotterell, Ryan and Meister, Clara
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5956--5965
Despite significant progress in the quality of language generated from abstractive summarization models, these models still exhibit the tendency to hallucinate, i.e., output content not supported by the source document. A number of works have tried to fix{---}or at least uncover the source of{---}the problem with limited success. In this paper, we identify a simple criterion under which models are significantly more likely to assign more probability to hallucinated content during generation: high model uncertainty. This finding offers a potential explanation for hallucinations: models default to favoring text with high marginal probability, i.e., high-frequency occurrences in the training set, when uncertain about a continuation. It also motivates possible routes for real-time intervention during decoding to prevent such hallucinations. We propose a decoding strategy that switches to optimizing for pointwise mutual information of the source and target token{---}rather than purely the probability of the target token{---}when the model exhibits uncertainty. Experiments on the dataset show that our method decreases the probability of hallucinated tokens while maintaining the Rouge and BERT-S scores of top-performing decoding strategies.
null
null
10.18653/v1/2022.emnlp-main.399
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,529
inproceedings
feng-etal-2022-toward
Toward the Limitation of Code-Switching in Cross-Lingual Transfer
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.400/
Feng, Yukun and Li, Feng and Koehn, Philipp
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5966--5971
Multilingual pretrained models have shown strong cross-lingual transfer ability. Some works used code-switching sentences, which consist of tokens from multiple languages, to enhance the cross-lingual representation further, and have shown success in many zero-shot cross-lingual tasks. However, code-switched tokens are likely to cause grammatical incoherence in newly substituted sentences, and negatively affect the performance on token-sensitive tasks, such as Part-of-Speech (POS) tagging and Named-Entity-Recognition (NER). This paper mitigates the limitation of the code-switching method by not only making the token replacement but also considering the similarity between the context and the switched tokens so that the newly substituted sentences are grammatically consistent during both training and inference. We conduct experiments on cross-lingual POS and NER over 30+ languages, and demonstrate the effectiveness of our method by outperforming mBERT by 0.95 and the original code-switching method by 1.67 in F1 score.
null
null
10.18653/v1/2022.emnlp-main.400
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,530
inproceedings
mtumbuka-lukasiewicz-2022-syntactically
Syntactically Rich Discriminative Training: An Effective Method for Open Information Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.401/
Mtumbuka, Frank and Lukasiewicz, Thomas
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5972--5987
Open information extraction (OIE) is the task of extracting facts {\textquotedblleft}(Subject, Relation, Object){\textquotedblright} from natural language text. We propose several new methods for training neural OIE models in this paper. First, we propose a novel method for computing syntactically rich text embeddings using the structure of dependency trees. Second, we propose a new discriminative training approach to OIE in which tokens in the generated fact are classified as {\textquotedblleft}real{\textquotedblright} or {\textquotedblleft}fake{\textquotedblright}, i.e., those tokens that are in both the generated and gold tuples, and those that are only in the generated tuple but not in the gold tuple. We also address the issue of repetitive tokens in generated facts and improve the models' ability to generate implicit facts. Our approach reduces repetitive tokens by a factor of 23{\%}. Finally, we present paraphrased versions of the CaRB, OIE2016, and LSOIE datasets, and show that the models' performance substantially improves when trained on augmented datasets. Our best model beats the SOTA of IMoJIE on the recent CaRB dataset, with an improvement of 39.63{\%} in F1 score.
null
null
10.18653/v1/2022.emnlp-main.401
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,531
inproceedings
hu-etal-2022-transformer
Transformer-based Entity Typing in Knowledge Graphs
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.402/
Hu, Zhiwei and Gutierrez-Basulto, Victor and Xiang, Zhiliang and Li, Ru and Pan, Jeff
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
5988--6001
We investigate the knowledge graph entity typing task which aims at inferring plausible entity types. In this paper, we propose a novel Transformer-based Entity Typing (TET) approach, effectively encoding the content of neighbours of an entity by means of a transformer mechanism. More precisely, TET is composed of three different mechanisms: a local transformer allowing to infer missing entity types by independently encoding the information provided by each of its neighbours; a global transformer aggregating the information of all neighbours of an entity into a single long sequence to reason about more complex entity types; and a context transformer integrating neighbours content in a differentiated way through information exchange between neighbour pairs, while preserving the graph structure. Furthermore, TET uses information about class membership of types to semantically strengthen the representation of an entity. Experiments on two real-world datasets demonstrate the superior performance of TET compared to the state-of-the-art.
null
null
10.18653/v1/2022.emnlp-main.402
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,532
inproceedings
gangi-reddy-etal-2022-newsclaims
{N}ews{C}laims: A New Benchmark for Claim Detection from News with Attribute Knowledge
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.403/
Gangi Reddy, Revanth and Chinthakindi, Sai Chetan and Wang, Zhenhailong and Fung, Yi and Conger, Kathryn and ELsayed, Ahmed and Palmer, Martha and Nakov, Preslav and Hovy, Eduard and Small, Kevin and Ji, Heng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6002--6018
Claim detection and verification are crucial for news understanding and have emerged as promising technologies for mitigating misinformation and disinformation in the news. However, most existing work has focused on claim sentence analysis while overlooking additional crucial attributes (e.g., the claimer and the main object associated with the claim).In this work, we present NewsClaims, a new benchmark for attribute-aware claim detection in the news domain. We extend the claim detection problem to include extraction of additional attributes related to each claim and release 889 claims annotated over 143 news articles. NewsClaims aims to benchmark claim detection systems in emerging scenarios, comprising unseen topics with little or no training data. To this end, we see that zero-shot and prompt-based baselines show promising performance on this benchmark, while still considerably behind human performance.
null
null
10.18653/v1/2022.emnlp-main.403
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,533
inproceedings
marchisio-etal-2022-isovec
{I}so{V}ec: Controlling the Relative Isomorphism of Word Embedding Spaces
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.404/
Marchisio, Kelly and Verma, Neha and Duh, Kevin and Koehn, Philipp
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6019--6033
The ability to extract high-quality translation dictionaries from monolingual word embedding spaces depends critically on the geometric similarity of the spaces{---}their degree of {\textquotedblleft}isomorphism.{\textquotedblright} We address the root-cause of faulty cross-lingual mapping: that word embedding training resulted in the underlying spaces being non-isomorphic. We incorporate global measures of isomorphism directly into the skipgram loss function, successfully increasing the relative isomorphism of trained word embedding spaces and improving their ability to be mapped to a shared cross-lingual space. The result is improved bilingual lexicon induction in general data conditions, under domain mismatch, and with training algorithm dissimilarities. We release IsoVec at https://github.com/kellymarchisio/isovec.
null
null
10.18653/v1/2022.emnlp-main.404
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,534
inproceedings
ravfogel-etal-2022-adversarial
Adversarial Concept Erasure in Kernel Space
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.405/
Ravfogel, Shauli and Vargas, Francisco and Goldberg, Yoav and Cotterell, Ryan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6034--6055
The representation space of neural models for textual data emerges in an unsupervised manner during training. Understanding how human-interpretable concepts, such as gender, are encoded in these representations would improve the ability of users to control the content of these representations and analyze the working of the models that rely on them. One prominent approach to the control problem is the identification and removal of linear concept subspaces {--} subspaces in the representation space that correspond to a given concept. While those are tractable and interpretable, neural networks do not necessarily represent concepts in linear subspaces. We propose a kernelization of the recently-proposed linear concept-removal objective, and show that it is effective in guarding against the ability of certain nonlinear adversaries to recover the concept. Interestingly, our findings suggest that the division between linear and nonlinear models is overly simplistic: when considering the concept of binary gender and its neutralization, we do not find a single kernel space that exclusively contains all the concept-related information. It is therefore challenging to protect against all nonlinear adversaries at once.
null
null
10.18653/v1/2022.emnlp-main.405
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,535
inproceedings
ethayarajh-jurafsky-2022-authenticity
The Authenticity Gap in Human Evaluation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.406/
Ethayarajh, Kawin and Jurafsky, Dan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6056--6070
Human ratings are the gold standard in NLG evaluation. The standard protocol is to collect ratings of generated text, average across annotators, and rank NLG systems by their average scores. However, little consideration has been given as to whether this approach faithfully captures human preferences. Analyzing this standard protocol through the lens of utility theory in economics, we identify the implicit assumptions it makes about annotators. These assumptions are often violated in practice, in which case annotator ratings cease to reflect their preferences. The most egregious violations come from using Likert scales, which provably reverse the direction of the true preference in certain cases. We suggest improvements to the standard protocol to make it more theoretically sound, but even in its improved form, it cannot be used to evaluate open-ended tasks like story generation. For the latter, we propose a new human evaluation protocol called system-level probabilistic assessment (SPA). When human evaluation of stories is done with SPA, we can recover the ordering of GPT-3 models by size, with statistically significant results. However, when human evaluation is done with the standard protocol, less than half of the expected preferences can be recovered (e.g., there is no significant difference between curie and davinci, despite using a highly powered test).
null
null
10.18653/v1/2022.emnlp-main.406
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,536
inproceedings
yamshchikov-etal-2022-bert
{BERT} in Plutarch's Shadows
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.407/
Yamshchikov, Ivan P. and Tikhonov, Alexey and Pantis, Yorgos and Schubert, Charlotte and Jost, J{\"u}rgen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6071--6080
The extensive surviving corpus of the ancient scholar Plutarch of Chaeronea (ca. 45-120 CE) also contains several texts which, according to current scholarly opinion, did not originate with him and are therefore attributed to an anonymous author Pseudo-Plutarch. These include, in particular, the work Placita Philosophorum (Quotations and Opinions of the Ancient Philosophers), which is extremely important for the history of ancient philosophy. Little is known about the identity of that anonymous author and its relation to other authors from the same period. This paper presents a BERT language model for Ancient Greek. The model discovers previously unknown statistical properties relevant to these literary, philosophical, and historical problems and can shed new light on this authorship question. In particular, the Placita Philosophorum, together with one of the other Pseudo-Plutarch texts, shows similarities with the texts written by authors from an Alexandrian context (2nd/3rd century CE).
null
null
10.18653/v1/2022.emnlp-main.407
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,537
inproceedings
liu-etal-2022-leveraging-locality
Leveraging Locality in Abstractive Text Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.408/
Liu, Yixin and Ni, Ansong and Nan, Linyong and Deb, Budhaditya and Zhu, Chenguang and Awadallah, Ahmed Hassan and Radev, Dragomir
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6081--6093
Neural attention models have achieved significant improvements on many natural language processing tasks. However, the quadratic memory complexity of the self-attention module with respect to the input length hinders their applications in long text summarization. Instead of designing more efficient attention modules, we approach this problem by investigating if models with a restricted context can have competitive performance compared with the memory-efficient attention models that maintain a global context by treating the input as a single sequence. Our model is applied to individual pages, which contain parts of inputs grouped by the principle of locality, during both the encoding and decoding stages. We empirically investigated three kinds of locality in text summarization at different levels of granularity, ranging from sentences to documents. Our experimental results show that our model has a better performance compared with strong baseline models with efficient attention modules, and our analysis provides further insights into our locality-aware modeling strategy.
null
null
10.18653/v1/2022.emnlp-main.408
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,538
inproceedings
wang-etal-2022-salience
Salience Allocation as Guidance for Abstractive Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.409/
Wang, Fei and Song, Kaiqiang and Zhang, Hongming and Jin, Lifeng and Cho, Sangwoo and Yao, Wenlin and Wang, Xiaoyang and Chen, Muhao and Yu, Dong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6094--6106
Abstractive summarization models typically learn to capture the salient information from scratch implicitly. Recent literature adds extractive summaries as guidance for abstractive summarization models to provide hints of salient content and achieves better performance. However, extractive summaries as guidance could be overly strict, leading to information loss or noisy signals. Furthermore, it cannot easily adapt to documents with various abstractiveness. As the number and allocation of salience content pieces vary, it is hard to find a fixed threshold deciding which content should be included in the guidance. In this paper, we propose a novel summarization approach with a flexible and reliable salience guidance, namely SEASON (SaliencE Allocation as Guidance for Abstractive SummarizatiON). SEASON utilizes the allocation of salience expectation to guide abstractive summarization and adapts well to articles in different abstractiveness. Automatic and human evaluations on two benchmark datasets show that the proposed method is effective and reliable. Empirical results on more than one million news articles demonstrate a natural fifteen-fifty salience split for news article sentences, providing a useful insight for composing news articles.
null
null
10.18653/v1/2022.emnlp-main.409
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,539
inproceedings
scialom-etal-2022-fine
Fine-tuned Language Models are Continual Learners
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.410/
Scialom, Thomas and Chakrabarty, Tuhin and Muresan, Smaranda
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6107--6122
Recent work on large language models relies on the intuition that most natural language processing tasks can be described via natural language instructions and that models trained on these instructions show strong zero-shot performance on several standard datasets. However, these models, even though impressive, still perform poorly on a wide range of tasks outside of their respective training and evaluation sets. To address this limitation, we argue that a model should be able to keep extending its knowledge and abilities, without forgetting previous skills. In spite of the limited success of Continual Learning, we show that \textit{Fine-tuned Language Models can be continual learners}. We empirically investigate the reason for this success and conclude that Continual Learning emerges from self-supervision pre-training. Our resulting model Continual-T0 (CT0) is able to learn 8 new diverse language generation tasks, while still maintaining good performance on previous tasks, spanning a total of 70 datasets. Finally, we show that CT0 is able to combine instructions in ways it was never trained for, demonstrating some level of instruction compositionality.
null
null
10.18653/v1/2022.emnlp-main.410
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,540
inproceedings
aly-vlachos-2022-natural
Natural Logic-guided Autoregressive Multi-hop Document Retrieval for Fact Verification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.411/
Aly, Rami and Vlachos, Andreas
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6123--6135
A key component of fact verification is the evidence retrieval, often from multiple documents. Recent approaches use dense representations and condition the retrieval of each document on the previously retrieved ones. The latter step is performed over all the documents in the collection, requiring storing their dense representations in an index, thus incurring a high memory footprint. An alternative paradigm is retrieve-and-rerank, where documents are retrieved using methods such as BM25, their sentences are reranked, and further documents are retrieved conditioned on these sentences, reducing the memory requirements. However, such approaches can be brittle as they rely on heuristics and assume hyperlinks between documents. We propose a novel retrieve-and-rerank method for multi-hop retrieval, that consists of a retriever that jointly scores documents in the knowledge source and sentences from previously retrieved documents using an autoregressive formulation and is guided by a proof system based on natural logic that dynamically terminates the retrieval process if the evidence is deemed sufficient. This method exceeds or is on par with the current state-of-the-art on FEVER, HoVer and FEVEROUS-S, while using 5 to 10 times less memory than competing systems. Evaluation on an adversarial dataset indicates improved stability of our approach compared to commonly deployed threshold-based methods. Finally, the proof system helps humans predict model decisions correctly more often than using the evidence alone.
null
null
10.18653/v1/2022.emnlp-main.411
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,541
inproceedings
kamila-etal-2022-ax
{AX}-{MABSA}: A Framework for Extremely Weakly Supervised Multi-label Aspect Based Sentiment Analysis
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.412/
Kamila, Sabyasachi and Magdy, Walid and Dutta, Sourav and Wang, MingXue
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6136--6147
Aspect Based Sentiment Analysis is a dominant research area with potential applications in social media analytics, business, finance, and health. Prior works in this area are primarily based on supervised methods, with a few techniques using weak supervision limited to predicting a single aspect category per review sentence. In this paper, we present an extremely weakly supervised multi-label Aspect Category Sentiment Analysis framework which does not use any labelled data. We only rely on a single word per class as an initial indicative information. We further propose an automatic word selection technique to choose these seed categories and sentiment words. We explore unsupervised language model post-training to improve the overall performance, and propose a multi-label generator model to generate multiple aspect category-sentiment pairs per review sentence. Experiments conducted on four benchmark datasets showcase our method to outperform other weakly supervised baselines by a significant margin.
null
null
10.18653/v1/2022.emnlp-main.412
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,542
inproceedings
mirzaee-kordjamshidi-2022-transfer
Transfer Learning with Synthetic Corpora for Spatial Role Labeling and Reasoning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.413/
Mirzaee, Roshanak and Kordjamshidi, Parisa
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6148--6165
Recent research shows synthetic data as a source of supervision helps pretrained language models (PLM) transfer learning to new target tasks/domains. However, this idea is less explored for spatial language. We provide two new data resources on multiple spatial language processing tasks. The first dataset is synthesized for transfer learning on spatial question answering (SQA) and spatial role labeling (SpRL). Compared to previous SQA datasets, we include a larger variety of spatial relation types and spatial expressions. Our data generation process is easily extendable with new spatial expression lexicons. The second one is a real-world SQA dataset with human-generated questions built on an existing corpus with SPRL annotations. This dataset can be used to evaluate spatial language processing models in realistic situations. We show pretraining with automatically generated data significantly improves the SOTA results on several SQA and SPRL benchmarks, particularly when the training data in the target domain is small.
null
null
10.18653/v1/2022.emnlp-main.413
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,543
inproceedings
zhang-etal-2022-survey
A Survey of Active Learning for Natural Language Processing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.414/
Zhang, Zhisong and Strubell, Emma and Hovy, Eduard
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6166--6190
In this work, we provide a literature review of active learning (AL) for its applications in natural language processing (NLP). In addition to a fine-grained categorization of query strategies, we also investigate several other important aspects of applying AL to NLP problems. These include AL for structured prediction tasks, annotation cost, model learning (especially with deep neural models), and starting and stopping AL. Finally, we conclude with a discussion of related topics and future directions.
null
null
10.18653/v1/2022.emnlp-main.414
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,544
inproceedings
delucia-etal-2022-bernice
Bernice: A Multilingual Pre-trained Encoder for {T}witter
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.415/
DeLucia, Alexandra and Wu, Shijie and Mueller, Aaron and Aguirre, Carlos and Resnik, Philip and Dredze, Mark
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6191--6205
The language of Twitter differs significantly from that of other domains commonly included in large language model training. While tweets are typically multilingual and contain informal language, including emoji and hashtags, most pre-trained language models for Twitter are either monolingual, adapted from other domains rather than trained exclusively on Twitter, or are trained on a limited amount of in-domain Twitter data. We introduce Bernice, the first multilingual RoBERTa language model trained from scratch on 2.5 billion tweets with a custom tweet-focused tokenizer. We evaluate on a variety of monolingual and multilingual Twitter benchmarks, finding that our model consistently exceeds or matches the performance of a variety of models adapted to social media data as well as strong multilingual baselines, despite being trained on less data overall. We posit that it is more efficient compute- and data-wise to train completely on in-domain data with a specialized domain-specific tokenizer.
null
null
10.18653/v1/2022.emnlp-main.415
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,545
inproceedings
arase-etal-2022-cefr
{CEFR}-Based Sentence Difficulty Annotation and Assessment
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.416/
Arase, Yuki and Uchida, Satoru and Kajiwara, Tomoyuki
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6206--6219
Controllable text simplification is a crucial assistive technique for language learning and teaching. One of the primary factors hindering its advancement is the lack of a corpus annotated with sentence difficulty levels based on language ability descriptions. To address this problem, we created the CEFR-based Sentence Profile (CEFR-SP) corpus, containing 17k English sentences annotated with the levels based on the Common European Framework of Reference for Languages assigned by English-education professionals. In addition, we propose a sentence-level assessment model to handle unbalanced level distribution because the most basic and highly proficient sentences are naturally scarce. In the experiments in this study, our method achieved a macro-F1 score of 84.5{\%} in the level assessment, thus outperforming strong baselines employed in readability assessment.
null
null
10.18653/v1/2022.emnlp-main.416
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,546
inproceedings
kim-etal-2022-simple
Simple Questions Generate Named Entity Recognition Datasets
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.417/
Kim, Hyunjae and Yoo, Jaehyo and Yoon, Seunghyun and Lee, Jinhyuk and Kang, Jaewoo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6220--6236
Recent named entity recognition (NER) models often rely on human-annotated datasets requiring the vast engagement of professional knowledge on the target domain and entities. This work introduces an ask-to-generate approach, which automatically generates NER datasets by asking simple natural language questions to an open-domain question answering system (e.g., {\textquotedblleft}Which disease?{\textquotedblright}). Despite using fewer training resources, our models solely trained on the generated datasets largely outperform strong low-resource models by 19.5 F1 score across six popular NER benchmarks. Our models also show competitive performance with rich-resource models that additionally leverage in-domain dictionaries provided by domain experts. In few-shot NER, we outperform the previous best model by 5.2 F1 score on three benchmarks and achieve new state-of-the-art performance.
null
null
10.18653/v1/2022.emnlp-main.417
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,547
inproceedings
jang-etal-2022-temporalwiki
{T}emporal{W}iki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.418/
Jang, Joel and Ye, Seonghyeon and Lee, Changho and Yang, Sohee and Shin, Joongbo and Han, Janghoon and Kim, Gyeonghun and Seo, Minjoon
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6237--6250
Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information which was absent or different during training, a phenomenon called temporal misalignment. This is an especially challenging problem because the research community still lacks a coherent dataset for assessing the adaptability of LMs to frequently-updated knowledge corpus such as Wikipedia. To this end, we introduce TemporalWiki, a lifelong benchmark for ever-evolving LMs that utilizes the difference between consecutive snapshots of English Wikipedia and English Wikidata for training and evaluation, respectively. The benchmark hence allows researchers to periodically track an LM's ability to retain previous knowledge and acquire updated/new knowledge at each point in time. We also find that training an LM on the diff data through continual learning methods achieves similar or better perplexity than on the entire snapshot in our benchmark with 12 times less computational cost, which verifies that factual knowledge in LMs can be safely updated with minimal training data via continual learning.
null
null
10.18653/v1/2022.emnlp-main.418
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,548
inproceedings
dai-etal-2022-bi
Bi-Directional Iterative Prompt-Tuning for Event Argument Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.419/
Dai, Lu and Wang, Bang and Xiang, Wei and Mo, Yijun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6251--6263
Recently, prompt-tuning has attracted growing interest in event argument extraction (EAE). However, the existing prompt-tuning methods have not achieved satisfactory performance due to the lack of consideration of entity information. In this paper, we propose a bi-directional iterative prompt-tuning method for EAE, where the EAE task is treated as a cloze-style task to take full advantage of entity information and pre-trained language models (PLMs). Furthermore, our method explores event argument interactions by introducing the argument roles of contextual entities into prompt construction. Since template and verbalizer are two crucial components in a cloze-style prompt, we propose to utilize the role label semantic knowledge to construct a semantic verbalizer and design three kinds of templates for the EAE task. Experiments on the ACE 2005 English dataset with standard and low-resource settings show that the proposed method significantly outperforms the peer state-of-the-art methods.
null
null
10.18653/v1/2022.emnlp-main.419
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,549
inproceedings
wang-etal-2022-learning-robust
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.420/
Wang, Peiyi and Song, Yifan and Liu, Tianyu and Lin, Binghuai and Cao, Yunbo and Li, Sujian and Sui, Zhifang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6264--6278
Continual relation extraction (CRE) aims to continually learn new relations from a class-incremental data stream. CRE models usually suffer from the catastrophic forgetting problem, i.e., the performance of old relations seriously degrades when the model learns new relations. Most previous work attributes catastrophic forgetting to the corruption of the learned representations as new relations come, with an implicit assumption that the CRE models have adequately learned the old relations. In this paper, through empirical studies we argue that this assumption may not hold, and an important reason for catastrophic forgetting is that the learned representations do not have good robustness against the appearance of analogous relations in the subsequent learning process. To address this issue, we encourage the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism (ACA), which is easy to implement and model-agnostic. Experimental results show that ACA can consistently improve the performance of state-of-the-art CRE models on two popular benchmarks.
null
null
10.18653/v1/2022.emnlp-main.420
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,550
inproceedings
chen-etal-2022-convfinqa
{C}onv{F}in{QA}: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.421/
Chen, Zhiyu and Li, Shiyang and Smiley, Charese and Ma, Zhiqiang and Shah, Sameena and Wang, William Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6279--6292
With the recent advances in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching. The community is experiencing a shift of the challenge from how to model language to how to imitate complex human-like reasoning abilities. In this work, we investigate the application domain of finance, which involves real-world, complex numerical reasoning. We propose a new large-scale dataset, ConvFinQA, aiming to study the chain of numerical reasoning in conversational question answering. Our dataset poses a great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations. We conduct comprehensive experiments and analyses with both the neural symbolic methods and the prompting-based methods, to provide insights into the reasoning mechanisms of these two divisions. We believe our new dataset should serve as a valuable resource to push forward the exploration of real-world, complex reasoning tasks as the next research focus. Our dataset and code are publicly available at https://github.com/czyssrs/ConvFinQA.
null
null
10.18653/v1/2022.emnlp-main.421
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,551
inproceedings
zhou-etal-2022-span
A Span-based Multimodal Variational Autoencoder for Semi-supervised Multimodal Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.422/
Zhou, Baohang and Zhang, Ying and Song, Kehui and Guo, Wenya and Zhao, Guoqing and Wang, Hongbin and Yuan, Xiaojie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6293--6302
Multimodal named entity recognition (MNER) on social media is a challenging task which aims to extract named entities in free text and incorporate images to classify them into user-defined types. However, the annotation of named entities on social media demands a large amount of human effort. The existing semi-supervised named entity recognition methods focus on the text modality and are utilized to reduce labeling costs in traditional NER. However, these methods are not efficient for semi-supervised MNER, because the MNER task is defined to combine the text information with the image information and needs to consider the mismatch between the posted text and image. To fuse the text and image features for MNER effectively under the semi-supervised setting, we propose a novel span-based multimodal variational autoencoder (SMVAE) model for semi-supervised MNER. The proposed method exploits modal-specific VAEs to model text and image latent features, and utilizes product-of-experts to acquire multimodal features. In our approach, the implicit relations between labels and multimodal features are modeled by the multimodal VAE. Thus, the useful information of unlabeled data can be exploited in our method under the semi-supervised setting. Experimental results on two benchmark datasets demonstrate that our approach not only outperforms baselines under the supervised setting, but also improves MNER performance with less labeled data than existing semi-supervised methods.
null
null
10.18653/v1/2022.emnlp-main.422
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,552
inproceedings
lin-cheng-2022-r
{R}-{T}ea{F}or: Regularized Teacher-Forcing for Abstractive Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.423/
Lin, Guan-Yu and Cheng, Pu-Jen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6303--6311
Teacher-forcing is widely used in training sequence generation models to improve sampling efficiency and to stabilize training. However, teacher-forcing is vulnerable to the exposure bias problem. Previous works have attempted to address exposure bias by modifying the training data to simulate model-generated results. Nevertheless, they do not consider the pairwise relationship between the original training data and the modified ones, which provides more information during training. Hence, we propose Regularized Teacher-Forcing (R-TeaFor) to utilize this relationship for better regularization. Empirically, our experiments show that R-TeaFor outperforms previous summarization state-of-the-art models, and the results can be generalized to different pre-trained models.
null
null
10.18653/v1/2022.emnlp-main.423
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,553
inproceedings
lyu-etal-2022-modeling
Modeling Consistency Preference via Lexical Chains for Document-level Neural Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.424/
Lyu, Xinglin and Li, Junhui and Tao, Shimin and Yang, Hao and Qin, Ying and Zhang, Min
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6312--6326
In this paper we aim to relieve the issue of lexical translation inconsistency for document-level neural machine translation (NMT) by modeling consistency preference for lexical chains, which consist of repeated words in a source-side document and provide a representation of the lexical consistency structure of the document. Specifically, we first propose lexical-consistency attention to capture consistency context among words in the same lexical chains. Then for each lexical chain we define and learn a consistency-tailored latent variable, which will guide the translation of corresponding sentences to enhance lexical translation consistency. Experimental results on Chinese{\textrightarrow}English and French{\textrightarrow}English document-level translation tasks show that our approach not only significantly improves translation performance in BLEU, but also substantially alleviates the problem of lexical translation inconsistency.
null
null
10.18653/v1/2022.emnlp-main.424
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,554
inproceedings
shi-etal-2022-just
Just Fine-tune Twice: Selective Differential Privacy for Large Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.425/
Shi, Weiyan and Shea, Ryan and Chen, Si and Zhang, Chiyuan and Jia, Ruoxi and Yu, Zhou
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6327--6340
Protecting large language models from privacy leakage is becoming increasingly crucial with their wide adoption in real-world products. Yet applying *differential privacy* (DP), a canonical notion with provable privacy guarantees for machine learning models, to those models remains challenging due to the trade-off between model utility and privacy loss. Utilizing the fact that sensitive information in language data tends to be sparse, Shi et al. (2021) formalized a DP notion extension called *Selective Differential Privacy* (SDP) to protect only the sensitive tokens defined by a policy function. However, their algorithm only works for RNN-based models. In this paper, we develop a novel framework, *Just Fine-tune Twice* (JFT), that achieves SDP for state-of-the-art large transformer-based models. Our method is easy to implement: it first fine-tunes the model with *redacted* in-domain data, and then fine-tunes it again with the *original* in-domain data using a private training mechanism. Furthermore, we study the scenario of imperfect implementation of policy functions that misses sensitive tokens and develop systematic methods to handle it. Experiments show that our method achieves strong utility compared to previous baselines. We also analyze the SDP privacy guarantee empirically with the canary insertion attack.
null
null
10.18653/v1/2022.emnlp-main.425
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,555
inproceedings
fonseca-etal-2022-factorizing
Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.426/
Fonseca, Marcio and Ziser, Yftah and Cohen, Shay B.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6341--6364
We argue that disentangling content selection from the budget used to cover salient content improves the performance and applicability of abstractive summarizers. Our method, FactorSum, does this disentanglement by factorizing summarization into two steps through an energy function: (1) generation of abstractive summary views covering salient information in subsets of the input document (document views); (2) combination of these views into a final summary, following a budget and content guidance. This guidance may come from different sources, including from an advisor model such as BART or BigBird, or in oracle mode {--} from the reference. This factorization achieves significantly higher ROUGE scores on multiple benchmarks for long document summarization, namely PubMed, arXiv, and GovReport. Most notably, our model is effective for domain adaptation. When trained only on PubMed samples, it achieves a 46.29 ROUGE-1 score on arXiv, outperforming PEGASUS trained in domain by a large margin. Our experimental results indicate that the performance gains are due to more flexible budget adaptation and processing of shorter contexts provided by partial document views.
null
null
10.18653/v1/2022.emnlp-main.426
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,556
inproceedings
shi-etal-2022-open
Open-Domain Sign Language Translation Learned from Online Video
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.427/
Shi, Bowen and Brentari, Diane and Shakhnarovich, Gregory and Livescu, Karen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6365--6379
Existing work on sign language translation {--} that is, translation from sign language videos into sentences in a written language {--} has focused mainly on (1) data collected in a controlled environment or (2) data in a specific domain, which limits the applicability to real-world settings. In this paper, we introduce OpenASL, a large-scale American Sign Language (ASL) - English dataset collected from online video sites (e.g., YouTube). OpenASL contains 288 hours of ASL videos in multiple domains from over 200 signers and is the largest publicly available ASL translation dataset to date. To tackle the challenges of sign language translation in realistic settings and without glosses, we propose a set of techniques including sign search as a pretext task for pre-training and fusion of mouthing and handshape features. The proposed techniques produce consistent and large improvements in translation quality, over baseline models based on prior work.
null
null
10.18653/v1/2022.emnlp-main.427
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,557
inproceedings
su-etal-2022-improving
Improving Temporal Generalization of Pre-trained Language Models with Lexical Semantic Change
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.428/
Su, Zhaochen and Tang, Zecheng and Guan, Xinyan and Wu, Lijun and Zhang, Min and Li, Juntao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6380--6393
Recent research has revealed that neural language models at scale suffer from poor temporal generalization capability, i.e., a language model pre-trained on static data from past years performs worse over time on emerging data. Existing methods mainly perform continual training to mitigate such a misalignment. While effective to some extent, the misalignment remains far from being addressed on both language modeling and downstream tasks. In this paper, we empirically observe that temporal generalization is closely affiliated with lexical semantic change, which is one of the essential phenomena of natural languages. Based on this observation, we propose a simple yet effective lexical-level masking strategy to post-train a converged language model. Experiments on two pre-trained language models, two different classification tasks, and four benchmark datasets demonstrate the effectiveness of our proposed method over existing temporal adaptation methods, i.e., continual training with new data. Our code is available at https://github.com/zhaochen0110/LMLM.
null
null
10.18653/v1/2022.emnlp-main.428
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,558
inproceedings
feng-etal-2022-uln
{ULN}: Towards Underspecified Vision-and-Language Navigation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.429/
Feng, Weixi and Fu, Tsu-Jui and Lu, Yujie and Wang, William Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6394--6412
Vision-and-Language Navigation (VLN) is a task to guide an embodied agent moving to a target position using language instructions. Despite the significant performance improvement, the wide use of fine-grained instructions fails to characterize more practical linguistic variations in reality. To fill in this gap, we introduce a new setting, namely Underspecified vision-and-Language Navigation (ULN), and associated evaluation datasets. ULN evaluates agents using multi-level underspecified instructions instead of purely fine-grained or coarse-grained, which is a more realistic and general setting. As a primary step toward ULN, we propose a VLN framework that consists of a classification module, a navigation agent, and an Exploitation-to-Exploration (E2E) module. Specifically, we propose to learn Granularity Specific Sub-networks (GSS) for the agent to ground multi-level instructions with minimal additional parameters. Then, our E2E module estimates grounding uncertainty and conducts multi-step lookahead exploration to improve the success rate further. Experimental results show that existing VLN models are still brittle to multi-level language underspecification. Our framework is more robust and outperforms the baselines on ULN by {\textasciitilde}10{\%} relative success rate across all levels.
null
null
10.18653/v1/2022.emnlp-main.429
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,559
inproceedings
zhang-etal-2022-federated
Federated Model Decomposition with Private Vocabulary for Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.430/
Zhang, Zhuo and Hu, Xiangjing and Qu, Lizhen and Wang, Qifan and Xu, Zenglin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6413--6425
With the necessity of privacy protection, it becomes increasingly vital to train deep neural models in a federated learning manner for natural language processing (NLP) tasks. However, recent studies show eavesdroppers (i.e., dishonest servers) can still reconstruct the private input in federated learning (FL). Such a data reconstruction attack relies on the mappings between vocabulary and associated word embedding in NLP tasks, which are unfortunately less studied in current FL methods. In this paper, we propose a federated model decomposition method that protects the privacy of vocabularies, abbreviated as FEDEVOCAB. In FEDEVOCAB, each participant keeps the local embedding layer in the local device and detaches the local embedding parameters from federated aggregation. However, it is challenging to train an accurate NLP model when the private mappings are unknown and vary across participants in a cross-device FL setting. To address this problem, we further propose an adaptive updating technique to improve the performance of local models. Experimental results show that FEDEVOCAB maintains competitive performance and provides better privacy-preserving capacity compared to status quo methods.
null
null
10.18653/v1/2022.emnlp-main.430
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,560
inproceedings
xiong-etal-2022-reco
{R}e{C}o: Reliable Causal Chain Reasoning via Structural Causal Recurrent Neural Networks
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.431/
Xiong, Kai and Ding, Xiao and Li, Zhongyang and Du, Li and Liu, Ting and Qin, Bing and Zheng, Yi and Huai, Baoxing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6426--6438
Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs. However, CCR suffers from two main transitive problems: threshold effect and scene drift. In other words, the causal pairs to be spliced may have a conflicting threshold boundary or scenario. To address these issues, we propose a novel Reliable Causal chain reasoning framework (ReCo), which introduces exogenous variables to represent the threshold and scene factors of each causal pair within the causal chain, and estimates the threshold and scene contradictions across exogenous variables via structural causal recurrent neural networks (SRNN). Experiments show that ReCo outperforms a series of strong baselines on both Chinese and English CCR datasets. Moreover, by injecting reliable causal chain knowledge distilled by ReCo, BERT can achieve better performances on four downstream causal-related tasks than BERT models enhanced by other kinds of knowledge.
null
null
10.18653/v1/2022.emnlp-main.431
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,561
inproceedings
zhong-etal-2022-video
Video Question Answering: Datasets, Algorithms and Challenges
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.432/
Zhong, Yaoyao and Ji, Wei and Xiao, Junbin and Li, Yicong and Deng, Weihong and Chua, Tat-Seng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6439--6455
This survey aims to sort out the recent advances in video question answering (VideoQA) and point towards future directions. We firstly categorize the datasets into 1) normal VideoQA, multi-modal VideoQA and knowledge-based VideoQA, according to the modalities invoked in the question-answer pairs, or 2) factoid VideoQA and inference VideoQA, according to the technical challenges in comprehending the questions and deriving the correct answers. We then summarize the VideoQA techniques, including those mainly designed for Factoid QA (e.g., the early spatio-temporal attention-based methods and the recent Transformer-based ones) and those targeted at explicit relation and logic inference (e.g., neural modular networks, neural symbolic methods, and graph-structured methods). Aside from the backbone techniques, we delve into the specific models and find out some common and useful insights either for video modeling, question answering, or for cross-modal correspondence learning. Finally, we point out the research trend of studying beyond factoid VideoQA to inference VideoQA, as well as towards robustness and interpretability. Additionally, we maintain a repository, https://github.com/VRU-NExT/VideoQA, to keep track of the latest VideoQA papers, datasets, and their open-source implementations if available. With these efforts, we strongly hope this survey could shed light on follow-up VideoQA research.
null
null
10.18653/v1/2022.emnlp-main.432
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,562
inproceedings
cai-etal-2022-retrofitting
Retrofitting Multilingual Sentence Embeddings with {A}bstract {M}eaning {R}epresentation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.433/
Cai, Deng and Li, Xin and Ho, Jackie Chun-Sing and Bing, Lidong and Lam, Wai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6456--6472
We introduce a new method to improve existing multilingual sentence embeddings with Abstract Meaning Representation (AMR). Compared with the original textual input, AMR is a structured semantic representation that presents the core concepts and relations in a sentence explicitly and unambiguously. It also helps reduce the surface variations across different expressions and languages. Unlike most prior work that only evaluates the ability to measure semantic similarity, we present a thorough evaluation of existing multilingual sentence embeddings and our improved versions, which include a collection of five transfer tasks in different downstream applications. Experiment results show that retrofitting multilingual sentence embeddings with AMR leads to better state-of-the-art performance on both semantic textual similarity and transfer tasks.
null
null
10.18653/v1/2022.emnlp-main.433
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,563
inproceedings
wang-etal-2022-breaking
Breaking the Representation Bottleneck of {C}hinese Characters: Neural Machine Translation with Stroke Sequence Modeling
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.434/
Wang, Zhijun and Liu, Xuebo and Zhang, Min
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6473--6484
Existing research generally treats Chinese character as a minimum unit for representation. However, such Chinese character representation will suffer from two bottlenecks: 1) Learning bottleneck, the learning cannot benefit from its rich internal features (e.g., radicals and strokes); and 2) Parameter bottleneck, each individual character has to be represented by a unique vector. In this paper, we introduce a novel representation method for Chinese characters to break the bottlenecks, namely StrokeNet, which represents a Chinese character by a Latinized stroke sequence (e.g., {\textquotedblleft}凹 (concave){\textquotedblright} to {\textquotedblleft}ajaie{\textquotedblright} and {\textquotedblleft}凸 (convex){\textquotedblright} to {\textquotedblleft}aeaqe{\textquotedblright}). Specifically, StrokeNet maps each stroke to a specific Latin character, thus allowing similar Chinese characters to have similar Latin representations. With the introduction of StrokeNet to neural machine translation (NMT), many powerful techniques that were previously inapplicable to non-Latin languages (e.g., shared subword vocabulary learning and ciphertext-based data augmentation) can now be perfectly implemented. Experiments on the widely-used NIST Chinese-English, WMT17 Chinese-English and IWSLT17 Japanese-English NMT tasks show that StrokeNet can provide a significant performance boost over the strong baselines with fewer model parameters, achieving 26.5 BLEU on the WMT17 Chinese-English task which is better than any previously reported results without using monolingual data. Code and scripts are freely available at https://github.com/zjwang21/StrokeNet.
null
null
10.18653/v1/2022.emnlp-main.434
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,564
inproceedings
zhang-etal-2022-boundary
Boundary-Driven Table-Filling for Aspect Sentiment Triplet Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.435/
Zhang, Yice and Yang, Yifan and Li, Yihui and Liang, Bin and Chen, Shiwei and Dang, Yixue and Yang, Min and Xu, Ruifeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6485--6498
Aspect Sentiment Triplet Extraction (ASTE) aims to extract the aspect terms along with the corresponding opinion terms and the expressed sentiments in the review, which is an important task in sentiment analysis. Previous research efforts generally address the ASTE task in an end-to-end fashion through the table-filling formalization, in which the triplets are represented by a two-dimensional (2D) table of word-pair relations. Under this formalization, a term-level relation is decomposed into multiple independent word-level relations, which leads to relation inconsistency and boundary insensitivity in the face of multi-word aspect terms and opinion terms. To overcome these issues, we propose Boundary-Driven Table-Filling (BDTF), which represents each triplet as a relation region in the 2D table and transforms the ASTE task into detection and classification of relation regions. We also notice that the quality of the table representation greatly affects the performance of BDTF. Therefore, we develop an effective relation representation learning approach to learn the table representation, which can fully exploit both word-to-word interactions and relation-to-relation interactions. Experiments on several public benchmarks show that the proposed approach achieves state-of-the-art performances.
null
null
10.18653/v1/2022.emnlp-main.435
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,565
inproceedings
zhou-etal-2022-attention
Attention and Edge-Label Guided Graph Convolutional Networks for Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.436/
Zhou, Renjie and Xie, Zhongyi and Wan, Jian and Zhang, Jilin and Liao, Yong and Liu, Qiang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6499--6510
It has been shown that named entity recognition (NER) could benefit from incorporating the long-distance structured information captured by dependency trees. However, dependency trees built by tools usually have a certain percentage of errors. Under such circumstances, how to better use relevant structured information while ignoring irrelevant or wrong structured information from the dependency trees to improve NER performance is still a challenging research problem. In this paper, we propose the Attention and Edge-Label guided Graph Convolution Network (AELGCN) model. Then, we integrate it into BiLSTM-CRF to form the BiLSTM-AELGCN-CRF model. We design an edge-aware node joint update module and introduce a node-aware edge update module to fully exploit the hidden structured information and mitigate the effect of wrong dependency label information to some extent. After the two modules, we apply attention-guided GCN, which automatically learns how to attend to the relevant structured information selectively. We conduct extensive experiments on several standard datasets across four languages and achieve better results than previous approaches. Through experimental analysis, it is found that our proposed model can better exploit the structured information on the dependency tree to improve the recognition of long entities.
null
null
10.18653/v1/2022.emnlp-main.436
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,566
inproceedings
deng-etal-2022-title2event
{T}itle2{E}vent: Benchmarking Open Event Extraction with a Large-scale {C}hinese Title Dataset
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.437/
Deng, Haolin and Zhang, Yanan and Zhang, Yangfan and Ying, Wangyang and Yu, Changlong and Gao, Jun and Wang, Wei and Bai, Xiaoling and Yang, Nan and Ma, Jin and Chen, Xiang and Zhou, Tianhua
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6511--6524
Event extraction (EE) is crucial to downstream tasks such as news aggregation and event knowledge graph construction. Most existing EE datasets manually define fixed event types and design a specific schema for each of them, failing to cover diverse events emerging from the online text. Moreover, news titles, an important source of event mentions, have not gained enough attention in current EE research. In this paper, we present Title2Event, a large-scale sentence-level dataset benchmarking Open Event Extraction without restricting event types. Title2Event contains more than 42,000 news titles in 34 topics collected from Chinese web pages. To the best of our knowledge, it is currently the largest manually annotated Chinese dataset for open event extraction. We further conduct experiments on Title2Event with different models and show that the characteristics of titles make it challenging for event extraction, underscoring the significance of further study on this problem. The dataset and baseline codes are available at https://open-event-hub.github.io/title2event.
null
null
10.18653/v1/2022.emnlp-main.437
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,567
inproceedings
malaviya-etal-2022-cascading
Cascading Biases: Investigating the Effect of Heuristic Annotation Strategies on Data and Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.438/
Malaviya, Chaitanya and Bhatia, Sudeep and Yatskar, Mark
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6525--6540
Cognitive psychologists have documented that humans use cognitive heuristics, or mental shortcuts, to make quick decisions while expending less effort. While performing annotation work on crowdsourcing platforms, we hypothesize that such heuristic use among annotators cascades on to data quality and model robustness. In this work, we study cognitive heuristic use in the context of annotating multiple-choice reading comprehension datasets. We propose tracking annotator heuristic traces, where we tangibly measure low-effort annotation strategies that could indicate usage of various cognitive heuristics. We find evidence that annotators might be using multiple such heuristics, based on correlations with a battery of psychological tests. Importantly, heuristic use among annotators determines data quality along several dimensions: (1) known biased models, such as partial input models, more easily solve examples authored by annotators that rate highly on heuristic use, (2) models trained on annotators scoring highly on heuristic use don't generalize as well, and (3) heuristic-seeking annotators tend to create qualitatively less challenging examples. Our findings suggest that tracking heuristic usage among annotators can potentially help with collecting challenging datasets and diagnosing model biases.
null
null
10.18653/v1/2022.emnlp-main.438
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,568
inproceedings
trivedi-etal-2022-teaching
Teaching Broad Reasoning Skills for Multi-Step {QA} by Generating Hard Contexts
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.439/
Trivedi, Harsh and Balasubramanian, Niranjan and Khot, Tushar and Sabharwal, Ashish
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6541--6566
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to programmatically create hard-to-cheat synthetic contexts for real questions in six multi-step reasoning datasets. These contexts are carefully designed to avoid common reasoning shortcuts prevalent in real contexts that prevent models from learning the right skills. This results in a pretraining dataset, named TeaBReaC, containing 525K multi-step questions (with associated formal programs) covering about 900 reasoning patterns. We show that pretraining standard language models (LMs) on TeaBReaC before fine-tuning them on target datasets improves their performance by up to 13 F1 points across 4 multi-step QA datasets, with up to 21 point gain on more complex questions. The resulting models also demonstrate higher robustness, with a 5-8 F1 point improvement on two contrast sets. Furthermore, TeaBReaC pretraining substantially improves model performance and robustness even when starting with numerate LMs pretrained using recent methods (e.g., PReasM, POET). Our work thus shows how to effectively use decomposition-guided contexts to robustly teach multi-step reasoning.
null
null
10.18653/v1/2022.emnlp-main.439
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,569
inproceedings
yin-etal-2022-addmu
{ADDMU}: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.440/
Yin, Fan and Li, Yao and Hsieh, Cho-Jui and Chang, Kai-Wei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6567--6584
Adversarial Examples Detection (AED) is a crucial defense technique against adversarial attacks and has drawn increasing attention from the Natural Language Processing (NLP) community. Despite the surge of new AED methods, our studies show that existing methods heavily rely on a shortcut to achieve good performance. In other words, current search-based adversarial attacks in NLP stop once model predictions change, and thus most adversarial examples generated by those attacks are located near model decision boundaries. To surpass this shortcut and fairly evaluate AED methods, we propose to test AED methods with \textbf{F}ar \textbf{B}oundary (\textbf{FB}) adversarial examples. Existing methods show worse than random guess performance under this scenario. To overcome this limitation, we propose a new technique, \textbf{ADDMU}, \textbf{a}dversary \textbf{d}etection with \textbf{d}ata and \textbf{m}odel \textbf{u}ncertainty, which combines two types of uncertainty estimation for both regular and FB adversarial example detection. Our new method outperforms previous methods by 3.6 and 6.0 \textit{AUC} points under each scenario. Finally, our analysis shows that the two types of uncertainty provided by \textbf{ADDMU} can be leveraged to characterize adversarial examples and identify the ones that contribute most to model's robustness in adversarial training.
null
null
10.18653/v1/2022.emnlp-main.440
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,570
inproceedings
wan-etal-2022-g
{G}-{MAP}: General Memory-Augmented Pre-trained Language Model for Domain Tasks
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.441/
Wan, Zhongwei and Yin, Yichun and Zhang, Wei and Shi, Jiaxin and Shang, Lifeng and Chen, Guangyong and Jiang, Xin and Liu, Qun
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6585--6597
General pre-trained language models (PLMs), such as BERT, have achieved remarkable performance on various NLP tasks. Recently, domain-specific PLMs have been proposed to boost the task performance of specific domains (e.g., biomedical and computer science) by continuing to pre-train general PLMs with domain-specific corpora. However, this domain-adaptive pre-training (DAPT (CITATION)) tends to forget the previous general knowledge acquired by general PLMs, which leads to a \textit{catastrophic forgetting} phenomenon and sub-optimal performance. To alleviate this problem, we propose a new framework of \textbf{M}emory-\textbf{A}ugmented \textbf{P}re-trained Language Model (\textbf{MAP}), which augments the domain-specific PLM by a memory built from the frozen general PLM without losing the general knowledge. Specifically, we propose a new memory-augmented layer, and based on it, different augmentation strategies are explored to build the memory and fuse it into the domain-specific PLM. We demonstrate the effectiveness of MAP on different domains (biomedical and computer science publications, news, and reviews) and different kinds (text classification, QA, NER) of tasks, and the extensive results show that the proposed MAP can achieve SOTA results on these tasks.
null
null
10.18653/v1/2022.emnlp-main.441
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,571
inproceedings
zheng-etal-2022-towards-unifying
Towards Unifying Reference Expression Generation and Comprehension
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.442/
Zheng, Duo and Kong, Tao and Jing, Ya and Wang, Jiaan and Wang, Xiaojie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6598--6611
Reference Expression Generation (REG) and Comprehension (REC) are two highly correlated tasks. Modeling REG and REC simultaneously for utilizing the relation between them is a promising way to improve both. However, the problem of distinct inputs, as well as building connections between them in a single model, brings challenges to the design and training of the joint model. To address the problems, we propose a unified model for REG and REC, named UniRef. It unifies these two tasks with the carefully-designed Image-Region-Text Fusion layer (IRTF), which fuses the image, region and text via the image cross-attention and region cross-attention. Additionally, IRTF could generate pseudo input regions for the REC task to enable a uniform way for sharing the identical representation space across the REC and REG. We further propose Vision-conditioned Masked Language Modeling (VMLM) and Text-Conditioned Region Prediction (TRP) to pre-train UniRef model on multi-granular corpora. The VMLM and TRP are directly related to REG and REC, respectively, but could help each other. We conduct extensive experiments on three benchmark datasets, RefCOCO, RefCOCO+ and RefCOCOg. Experimental results show that our model outperforms previous state-of-the-art methods on both REG and REC.
null
null
10.18653/v1/2022.emnlp-main.442
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,572
inproceedings
nguyen-minh-luu-2022-textual
Textual Manifold-based Defense Against Natural Language Adversarial Examples
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.443/
Nguyen Minh, Dang and Luu, Anh Tuan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6612--6625
Despite the recent success of large pretrained language models in NLP, they are susceptible to adversarial examples. Concurrently, several studies on adversarial images have observed an intriguing property: the adversarial images tend to leave the low-dimensional natural data manifold. In this study, we find that a similar phenomenon occurs in the contextualized embedding space of natural sentences induced by pretrained language models, in which textual adversarial examples tend to have their embeddings diverge off the manifold of natural sentence embeddings. Based on this finding, we propose Textual Manifold-based Defense (TMD), a defense mechanism that learns the embedding space manifold of the underlying language model and projects novel inputs back to the approximated structure before classification. Through extensive experiments, we find that our method consistently and significantly outperforms previous defenses under various attack settings while leaving the clean accuracy unaffected. To the best of our knowledge, this is the first manifold-based defense adapted to the NLP domain.
null
null
10.18653/v1/2022.emnlp-main.443
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,573
inproceedings
zhao-etal-2022-tiny
Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.444/
Zhao, Hongyu and Tan, Hao and Mei, Hongyuan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6626--6638
Adapter-tuning is a paradigm that transfers a pretrained language model to downstream tasks by adding and tuning a small number of new parameters. Previously proposed adapter architectures are all feed-forward neural networks. In this paper, we investigate the effectiveness of using tiny-attention{---}i.e., attention with extremely small per-head dimensionality{---}as adapters. Our tiny-attention adapter learns to modify the hidden states at each position directly conditioned on the hidden states at all the other positions, which is missed by the previously proposed adapters. Moreover, we view its multiple attention heads as a mixture of experts and propose to average their weights during deployment, which further reduces its inference computation cost. On the GLUE benchmark, our tiny-attention adapter outperforms the other parameter-efficient transfer learning methods as well as full fine-tuning while only updating 0.05{\%} of the parameters. On the FewGLUE benchmark, its performance is comparable to that of GPT-3 and PET.
null
null
10.18653/v1/2022.emnlp-main.444
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,574
inproceedings
sun-etal-2022-reduce
Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.445/
Sun, Si and Xiong, Chenyan and Yu, Yue and Overwijk, Arnold and Liu, Zhiyuan and Bao, Jie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6639--6654
In this paper, we investigate the instability in the standard dense retrieval training, which iterates between model training and hard negative selection using the being-trained model. We show the catastrophic forgetting phenomena behind the training instability, where models learn and forget different negative groups during training iterations. We then propose ANCE-Tele, which accumulates momentum negatives from past iterations and approximates future iterations using lookahead negatives, as {\textquotedblleft}teleportations{\textquotedblright} along the time axis to smooth the learning process. On web search and OpenQA, ANCE-Tele outperforms previous state-of-the-art systems of similar size, eliminates the dependency on sparse retrieval negatives, and is competitive among systems using significantly more (50x) parameters. Our analysis demonstrates that teleportation negatives reduce catastrophic forgetting and improve convergence speed for dense retrieval training. The source code of this paper is available at https://github.com/OpenMatch/ANCE-Tele.
null
null
10.18653/v1/2022.emnlp-main.445
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,575
inproceedings
asai-etal-2022-attempt
{ATTEMPT}: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.446/
Asai, Akari and Salehi, Mohammadreza and Peters, Matthew and Hajishirzi, Hannaneh
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6655--6672
This work introduces a new multi-task, parameter-efficient language model (LM) tuning method that learns to transfer knowledge across different tasks via a mixture of soft prompts{---}small prefix embedding vectors pre-trained for different tasks. Our method, called ATTEMPT (ATTEntional Mixtures of Prompt Tuning), obtains source prompts as encodings of large-scale source tasks into a small number of parameters and trains an attention module to interpolate the source prompts and a newly initialized target prompt for every instance in the target task. During training, only the target task prompt and the attention weights, which are shared between tasks in multi-task training, are updated, while the original LM and source prompts are intact. ATTEMPT is highly parameter-efficient (e.g., updates 2,300 times fewer parameters than full fine-tuning), while it overcomes instability of prompt tuning and achieves high task performance using learned knowledge from high-resource tasks. Moreover, it is modular using pre-trained soft prompts, and can flexibly add or remove source prompts for effective knowledge transfer. Our experimental results across 21 diverse NLP datasets show that ATTEMPT significantly outperforms prompt tuning and outperforms or matches fully fine-tuned or other parameter-efficient tuning approaches that use 10 times more parameters. Finally, ATTEMPT outperforms previous work in few-shot learning settings.
null
null
10.18653/v1/2022.emnlp-main.446
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,576
inproceedings
rabinovich-carmeli-2022-exploration
Exploration of the Usage of Color Terms by Color-blind Participants in Online Discussion Platforms
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.447/
Rabinovich, Ella and Carmeli, Boaz
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6673--6685
Prominent questions about the role of sensory vs. linguistic input in the way we acquire and use language have been extensively studied in the psycholinguistic literature. However, the relative effect of various factors in a person's overall experience on their linguistic system remains unclear. We study this question by taking a step forward towards a better understanding of the conceptual perception of colors by color-blind individuals, as reflected in their spontaneous linguistic productions. Using a novel and carefully curated dataset, we show that red-green color-blind speakers use the {\textquotedblleft}red{\textquotedblright} and {\textquotedblleft}green{\textquotedblright} color terms in less predictable contexts, and in linguistic environments evoking mental imagery to a lesser extent, when compared to their normal-sighted counterparts. These findings shed some new and interesting light on the role of sensory experience in our linguistic system.
null
null
10.18653/v1/2022.emnlp-main.447
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,577
inproceedings
huang-etal-2022-deer
{DEER}: Descriptive Knowledge Graph for Explaining Entity Relationships
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.448/
Huang, Jie and Zhu, Kerui and Chang, Kevin Chen-Chuan and Xiong, Jinjun and Hwu, Wen-mei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6686--6698
We propose DEER (Descriptive Knowledge Graph for Explaining Entity Relationships) - an open and informative form of modeling entity relationships. In DEER, relationships between entities are represented by free-text relation descriptions. For instance, the relationship between entities of machine learning and algorithm can be represented as {\textquotedblleft}Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.{\textquotedblright} To construct DEER, we propose a self-supervised learning method to extract relation descriptions with the analysis of dependency patterns and generate relation descriptions with a transformer-based relation description synthesizing model, where no human labeling is required. Experiments demonstrate that our system can extract and generate high-quality relation descriptions for explaining entity relationships. The results suggest that we can build an open and informative knowledge graph without human annotation.
null
null
10.18653/v1/2022.emnlp-main.448
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,578
inproceedings
sun-etal-2022-meta
{META}-{GUI}: Towards Multi-modal Conversational Agents on Mobile {GUI}
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.449/
Sun, Liangtai and Chen, Xingyu and Chen, Lu and Dai, Tianle and Zhu, Zichen and Yu, Kai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6699--6712
Task-oriented dialogue (TOD) systems have been widely used by mobile phone intelligent assistants to accomplish tasks such as calendar scheduling or hotel reservation. Current TOD systems usually focus on multi-turn text/speech interaction, and then call back-end APIs designed for TODs to perform the task. However, this API-based architecture greatly limits the information-searching capability of intelligent assistants and may even lead to task failure if TOD-specific APIs are not available or the task is too complicated to be executed by the provided APIs. In this paper, we propose a new TOD architecture: GUI-based task-oriented dialogue system (GUI-TOD). A GUI-TOD system can directly perform GUI operations on real APPs and execute tasks without invoking TOD-specific backend APIs. Furthermore, we release META-GUI, a dataset for training a Multi-modal convErsaTional Agent on mobile GUI. We also propose a multi-model action prediction and response model, which shows promising results on META-GUI. The dataset, codes and leaderboard are publicly available.
null
null
10.18653/v1/2022.emnlp-main.449
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,579
inproceedings
kim-etal-2022-understanding
Understanding and Improving Knowledge Distillation for Quantization Aware Training of Large Transformer Encoders
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.450/
Kim, Minsoo and Lee, Sihwa and Hong, Suk-Jin and Chang, Du-Seong and Choi, Jungwook
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6713--6725
Knowledge distillation (KD) has been a ubiquitous method for model compression to strengthen the capability of a lightweight model with the transferred knowledge from the teacher. In particular, KD has been employed in quantization-aware training (QAT) of Transformer encoders like BERT to improve the accuracy of the student model with the reduced-precision weight parameters. However, little is understood about which of the various KD approaches best fits the QAT of Transformers. In this work, we provide an in-depth analysis of the mechanism of KD on attention recovery of quantized large Transformers. In particular, we reveal that the previously adopted MSE loss on the attention score is insufficient for recovering the self-attention information. Therefore, we propose two KD methods; attention-map and attention-output losses. Furthermore, we explore the unification of both losses to address task-dependent preference between attention-map and output losses. The experimental results on various Transformer encoder models demonstrate that the proposed KD methods achieve state-of-the-art accuracy for QAT with sub-2-bit weight quantization.
null
null
10.18653/v1/2022.emnlp-main.450
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,580
inproceedings
qin-etal-2022-exploring
Exploring Mode Connectivity for Pre-trained Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.451/
Qin, Yujia and Qian, Cheng and Yi, Jing and Chen, Weize and Lin, Yankai and Han, Xu and Liu, Zhiyuan and Sun, Maosong and Zhou, Jie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6726--6746
Recent years have witnessed the prevalent application of pre-trained language models (PLMs) in NLP. From the perspective of parameter space, PLMs provide generic initialization, starting from which high-performance minima could be found. Although plenty of works have studied how to effectively and efficiently adapt PLMs to high-performance minima, little is known about the connection of various minima reached under different adaptation configurations. In this paper, we investigate the geometric connections of different minima through the lens of mode connectivity, which measures whether two minima can be connected with a low-loss path. We conduct empirical analyses to investigate three questions: (1) how could hyperparameters, specific tuning methods, and training data affect PLM's mode connectivity? (2) How does mode connectivity change during pre-training? (3) How does the PLM's task knowledge change along the path connecting two minima? In general, exploring the mode connectivity of PLMs conduces to understanding the geometric connection of different minima, which may help us fathom the inner workings of PLM downstream adaptation. The codes are publicly available at https://github.com/thunlp/Mode-Connectivity-PLM.
null
null
10.18653/v1/2022.emnlp-main.451
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,581
inproceedings
oh-etal-2022-synergy
Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.452/
Oh, Jaehoon and Ko, Jongwoo and Yun, Se-Young
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6747--6754
Translation has played a crucial role in improving the performance on multilingual tasks: (1) to generate the target language data from the source language data for training and (2) to generate the source language data from the target language data for inference. However, prior works have not considered the use of both translations simultaneously. This paper shows that combining them can synergize the results on various multilingual sentence classification tasks. We empirically find that translation artifacts stylized by translators are the main factor of the performance gain. Based on this analysis, we adopt two training methods, SupCon and MixUp, considering translation artifacts. Furthermore, we propose a cross-lingual fine-tuning algorithm called MUSC, which uses SupCon and MixUp jointly and improves the performance. Our code is available at https://github.com/jongwooko/MUSC.
null
null
10.18653/v1/2022.emnlp-main.452
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,582
inproceedings
ji-etal-2022-increasing
Increasing Visual Awareness in Multimodal Neural Machine Translation from an Information Theoretic Perspective
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.453/
Ji, Baijun and Zhang, Tong and Zou, Yicheng and Hu, Bojie and Shen, Si
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6755--6764
Multimodal machine translation (MMT) aims to improve translation quality by equipping the source sentence with its corresponding image. Despite the promising performance, MMT models still suffer from the problem of input degradation: models focus more on textual information while visual information is generally overlooked. In this paper, we endeavor to improve MMT performance by increasing visual awareness from an information-theoretic perspective. Specifically, we decompose the informative visual signals into two parts: source-specific information and target-specific information. We use mutual information to quantify them and propose two methods for objective optimization to better leverage the visual signals. Experiments on two datasets demonstrate that our approach can effectively enhance the visual awareness of the MMT model and achieve superior results over strong baselines.
null
null
10.18653/v1/2022.emnlp-main.453
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,583
inproceedings
xu-etal-2022-improving
Improving Event Coreference Resolution Using Document-level and Topic-level Information
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.454/
Xu, Sheng and Li, Peifeng and Zhu, Qiaoming
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6765--6775
Event coreference resolution (ECR) aims to cluster event mentions that refer to the same real-world events. Deep learning methods have achieved SOTA results on the ECR task. However, due to the encoding length limitation, previous methods either adopt classical pairwise models based on sentence-level context or split each document into multiple chunks and encode them separately. They fail to capture the interactions and contextual cues among long-distance event mentions. Moreover, high-level information, such as event topics, is rarely considered to enhance representation learning for ECR. To address these two issues, we first apply a Longformer-based encoder to obtain document-level embeddings and an encoder with a trigger-mask mechanism to learn sentence-level embeddings based on local context. In addition, we propose an event topic generator to infer the latent topic-level representations. Finally, using the above event embeddings, we employ a multiple tensor matching method to capture their interactions at the document, sentence, and topic levels. Experimental results on the KBP 2017 dataset show that our model outperforms the SOTA baselines.
null
null
10.18653/v1/2022.emnlp-main.454
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,584
inproceedings
bhardwaj-etal-2022-vector
Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.455/
Bhardwaj, Rishabh and Saha, Amrita and Hoi, Steven C.H. and Poria, Soujanya
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6776--6791
Prompt Tuning has been largely successful as a parameter-efficient method of conditioning large-scale pre-trained language models to perform downstream tasks. Thus far, soft prompt tuning learns a fixed set of task-specific continuous vectors, i.e., soft tokens that remain static across the task samples. A fixed prompt, however, may not generalize well to the diverse kinds of inputs the task comprises. To address this, we propose Vector-quantized Input-contextualized Prompts (VIP) as an extension to the soft prompt tuning framework. VIP focuses on two aspects{---}contextual prompts, which learn input-specific contextualization of the soft prompt tokens through a small-scale sentence encoder, and quantized prompts, which map the contextualized prompts to a set of learnable codebook vectors through a vector quantization network. On various language understanding tasks such as SuperGLUE, QA, relation classification, NER, and NLI, VIP outperforms the soft prompt tuning (PT) baseline by an average margin of 1.19{\%}. Furthermore, our generalization studies show that VIP learns more robust prompt representations, surpassing PT by margins of 0.6{\%}{--}5.3{\%} on out-of-domain QA and NLI tasks, respectively, and by 0.75{\%} on a multi-task setup over 4 tasks spanning 12 domains.
null
null
10.18653/v1/2022.emnlp-main.455
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,585
inproceedings
deb-etal-2022-boosting
Boosting Natural Language Generation from Instructions with Meta-Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.456/
Deb, Budhaditya and Awadallah, Ahmed Hassan and Zheng, Guoqing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
6792--6808
Recent work has shown that language models (LMs) trained with multi-task \textit{instructional learning} (MTIL) can solve diverse NLP tasks in zero- and few-shot settings with improved performance compared to prompt tuning. MTIL illustrates that LMs can extract and use information about the task from instructions beyond the surface patterns of the inputs and outputs. This suggests that meta-learning may further enhance the utilization of instructions for effective task transfer. In this paper, we investigate whether meta-learning applied to MTIL can further improve generalization to unseen tasks in a zero-shot setting. Specifically, we propose to adapt meta-learning to MTIL in three directions: 1) Model-Agnostic Meta-Learning (MAML), 2) Hyper-Network (HNet)-based adaptation to generate task-specific parameters conditioned on instructions, and 3) an approach combining HNet and MAML. Through extensive experiments on the large-scale Natural Instructions V2 dataset, we show that our proposed approaches significantly improve over strong baselines in zero-shot settings. In particular, meta-learning improves the effectiveness of instructions and is most impactful when the test tasks are strictly zero-shot (i.e., no similar tasks in the training set) and are {\textquotedblleft}hard{\textquotedblright} for LMs, illustrating the potential of meta-learning for MTIL on out-of-distribution tasks.
null
null
10.18653/v1/2022.emnlp-main.456
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,586