Dataset schema (one line per field, with the viewer's type and value summary):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 00:00:00 to 2022-01-01 00:00:00)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
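Each record below is one row of this table, flattened field by field; since the rows are BibTeX-derived ACL Anthology entries, the following sketch shows how a row can be rendered back into a BibTeX entry. It is a minimal illustration under stated assumptions: rows are taken to be plain Python dicts keyed by the schema fields above, and `row_to_bibtex` and `example_row` are hypothetical names used here for illustration, not part of the dataset or any library.

```python
# Minimal sketch (illustrative, not part of the dataset): rebuild a BibTeX
# entry from one row, assuming the row is a plain dict keyed by the schema
# fields above. Fields whose value is None (null) are skipped, as are
# bookkeeping columns such as __index_level_0__.
BIBTEX_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume", "number",
    "pages", "month", "year", "address", "publisher", "url", "doi",
    "isbn", "language", "note", "abstract",
]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIBTEX_FIELDS:
        value = row.get(field)
        if value is None:
            continue
        lines.append(f"    {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

# Hypothetical usage with a row shaped like the first record below.
example_row = {
    "entry_type": "inproceedings",
    "citation_key": "zhang-etal-2022-rochbert",
    "title": "{R}o{C}h{B}ert: Towards Robust {BERT} Fine-tuning for {C}hinese",
    "year": "2022",
    "journal": None,  # null in the dataset, so it is omitted from the output
}
print(row_to_bibtex(example_row))
```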
entry_type: inproceedings
citation_key: zhang-etal-2022-rochbert
title: {R}o{C}h{B}ert: Towards Robust {BERT} Fine-tuning for {C}hinese
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.256/
author: Zhang, Zihan and Li, Jinfeng and Shi, Ning and Yuan, Bo and Liu, Xiangyu and Zhang, Rong and Xue, Hui and Sun, Donghong and Zhang, Chao
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3502--3516
abstract:
Despite of the superb performance on a wide range of tasks, pre-trained language models (e.g., BERT) have been proved vulnerable to adversarial texts. In this paper, we present RoChBERT, a framework to build more Robust BERT-based models by utilizing a more comprehensive adversarial graph to fuse Chinese phonetic and glyph features into pre-trained representations during fine-tuning. Inspired by curriculum learning, we further propose to augment the training dataset with adversarial texts in combination with intermediate samples. Extensive experiments demonstrate that RoChBERT outperforms previous methods in significant ways: (i) robust {--} RoChBERT greatly improves the model robustness without sacrificing accuracy on benign texts. Specifically, the defense lowers the success rates of unlimited and limited attacks by 59.43{\%} and 39.33{\%} respectively, while remaining accuracy of 93.30{\%}; (ii) flexible {--} RoChBERT can easily extend to various language models to solve different downstream tasks with excellent performance; and (iii) efficient {--} RoChBERT can be directly applied to the fine-tuning stage without pre-training language model from scratch, and the proposed data augmentation method is also low-cost.
doi: 10.18653/v1/2022.findings-emnlp.256
__index_level_0__: 26,768
(all other fields: null)

entry_type: inproceedings
citation_key: sato-etal-2022-lexical
title: Lexical Entailment with Hierarchy Representations by Deep Metric Learning
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.257/
author: Sato, Naomi and Isonuma, Masaru and Asatani, Kimitaka and Ishizuka, Shoya and Shimizu, Aori and Sakata, Ichiro
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3517--3522
abstract:
In this paper, we introduce a novel method for lexical entailment tasks, which detects a hyponym-hypernym relation among words. Existing lexical entailment studies are lacking in generalization performance, as they cannot be applied to words that are not included in the training dataset. Moreover, existing work evaluates the performance by using the dataset that contains words used for training. This study proposes a method that learns a mapping from word embeddings to the hierarchical embeddings in order to predict the hypernymy relations of any input words. To validate the generalization performance, we conduct experiments using a train dataset that does not overlap with the evaluation dataset. As a result, our method achieved state-of-the-art performance and showed robustness for unknown words.
doi: 10.18653/v1/2022.findings-emnlp.257
__index_level_0__: 26,769
(all other fields: null)

entry_type: inproceedings
citation_key: guo-etal-2022-improving
title: Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.258/
author: Guo, Xu and Li, Boyang and Yu, Han
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3523--3537
abstract:
Prompt tuning, or the conditioning of a frozen pretrained language model (PLM) with soft prompts learned from data, has demonstrated impressive performance on a wide range of NLP tasks. However, prompt tuning requires a large training dataset to be effective and is outperformed by finetuning the entire PLM in data-scarce regimes. Previous work (Gu et al., 2022, Vu et al., 2022) proposed to transfer soft prompts pretrained on the source domain to the target domain. In this paper, we explore domain adaptation for prompt tuning, a problem setting where unlabeled data from the target domain are available during pretraining. We propose bOosting Prompt TunIng with doMain Adaptation (OPTIMA), which regularizes the decision boundary to be smooth around regions where source and target data distributions are similar. Extensive experiments demonstrate that OPTIMA significantly enhances the transferability and sample-efficiency of prompt tuning compared to strong baselines. Moreover, in few-shot settings, OPTIMA exceeds full-model tuning by a large margin.
doi: 10.18653/v1/2022.findings-emnlp.258
__index_level_0__: 26,770
(all other fields: null)

entry_type: inproceedings
citation_key: cohen-etal-2022-mcphrasy
title: {M}c{P}hra{S}y: Multi-Context Phrase Similarity and Clustering
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.259/
author: Cohen, Amir and Gonen, Hila and Shapira, Ori and Levy, Ran and Goldberg, Yoav
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3538--3550
abstract:
Phrase similarity is a key component of many NLP applications. Current phrase similarity methods focus on embedding the phrase itself and use the phrase context only during training of the pretrained model. To better leverage the information in the context, we propose McPhraSy (Multi-context Phrase Similarity), a novel algorithm for estimating the similarity of phrases based on multiple contexts. At inference time, McPhraSy represents each phrase by considering multiple contexts in which it appears and computes the similarity of two phrases by aggregating the pairwise similarities between the contexts of the phrases. Incorporating context during inference enables McPhraSy to outperform current state-of-the-art models on two phrase similarity datasets by up to 13.3{\%}. Finally, we also present a new downstream task that relies on phrase similarity {--} keyphrase clustering {--} and create a new benchmark for it in the product reviews domain. We show that McPhraSy surpasses all other baselines for this task.
doi: 10.18653/v1/2022.findings-emnlp.259
__index_level_0__: 26,771
(all other fields: null)

entry_type: inproceedings
citation_key: anantharama-etal-2022-canarex
title: {CAN}ar{E}x: Contextually Aware Narrative Extraction for Semantically Rich Text-as-data Applications
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.260/
author: Anantharama, Nandini and Angus, Simon and O{'}Neill, Lachlan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3551--3564
abstract:
Narrative modelling is an area of active research, motivated by the acknowledgement of narratives as drivers of societal decision making. These research efforts conceptualize narratives as connected entity chains, and modeling typically focuses on the identification of entities and their connections within a text. An emerging approach to narrative modelling is the use of semantic role labeling (SRL) to extract Entity-Verb-Entity (E-V-Es) tuples from a text, followed by dimensionality reduction to reduce the space of entities and connections separately. This process penalises the semantic richness of narratives and discards much contextual information along the way. Here, we propose an alternate narrative extraction approach - CANarEx, incorporating a pipeline of common contextual constructs through co-reference resolution, micro-narrative generation and clustering of these narratives through sentence embeddings. We evaluate our approach through testing the recovery of {\textquotedblleft}narrative time-series clusters{\textquotedblright}, mimicking a desirable text-as-data task. The evaluation framework leverages synthetic data generated using a GPT-3 model. The GPT-3 model is trained to generate similar sentences using a large dataset of news articles. The synthetic data maps to three topics in the news dataset. We then generate narrative time-series document cluster representations by mapping the synthetic data to three distinct signals synthetically injected into the testing corpus. Evaluation results demonstrate the superior ability of CANarEx to recover narrative time-series through reduced MSE and improved precision/recall relative to existing methods. The validity is further reinforced through ablation studies and qualitative analysis.
doi: 10.18653/v1/2022.findings-emnlp.260
__index_level_0__: 26,772
(all other fields: null)

entry_type: inproceedings
citation_key: xu-etal-2022-narrate
title: Narrate Dialogues for Better Summarization
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.261/
author: Xu, Ruochen and Zhu, Chenguang and Zeng, Michael
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3565--3575
abstract:
Dialogue summarization models aim to generate a concise and accurate summary for multi-party dialogue. The complexity of dialogue, including coreference, dialogue acts, and inter-speaker interactions bring unique challenges to dialogue summarization. Most recent neural models achieve state-of-art performance following the pretrain-then-finetune recipe, where the large-scale language model (LLM) is pretrained on large-scale single-speaker written text, but later finetuned on multi-speaker dialogue text. To mitigate the gap between pretraining and finetuning, we propose several approaches to convert the dialogue into a third-person narrative style and show that the narration serves as a valuable annotation for LLMs. Empirical results on three benchmark datasets show our simple approach achieves higher scores on the ROUGE and a factual correctness metric.
doi: 10.18653/v1/2022.findings-emnlp.261
__index_level_0__: 26,773
(all other fields: null)

entry_type: inproceedings
citation_key: zhou-etal-2022-towards-identifying
title: Towards Identifying Social Bias in Dialog Systems: Framework, Dataset, and Benchmark
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.262/
author: Zhou, Jingyan and Deng, Jiawen and Mi, Fei and Li, Yitong and Wang, Yasheng and Huang, Minlie and Jiang, Xin and Liu, Qun and Meng, Helen
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3576--3591
abstract:
Among all the safety concerns that hinder the deployment of open-domain dialog systems (e.g., offensive languages, biases, and toxic behaviors), social bias presents an insidious challenge. Addressing this challenge requires rigorous analyses and normative reasoning. In this paper, we focus our investigation on social bias measurement to facilitate the development of unbiased dialog systems. We first propose a novel Dial-Bias Framework for analyzing the social bias in conversations using a holistic method beyond bias lexicons or dichotomous annotations. Leveraging the proposed framework, we further introduce the CDial-Bias Dataset which is, to the best of our knowledge, the first annotated Chinese social bias dialog dataset. We also establish a fine-grained dialog bias measurement benchmark and conduct in-depth ablation studies to shed light on the utility of the detailed annotations in the proposed dataset. Finally, we evaluate representative Chinese generative models with our classifiers to unveil the presence of social bias in these systems.
doi: 10.18653/v1/2022.findings-emnlp.262
__index_level_0__: 26,774
(all other fields: null)

entry_type: inproceedings
citation_key: bassignana-plank-2022-crossre
title: {C}ross{RE}: A Cross-Domain Dataset for Relation Extraction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.263/
author: Bassignana, Elisa and Plank, Barbara
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3592--3604
abstract:
Relation Extraction (RE) has attracted increasing attention, but current RE evaluation is limited to in-domain evaluation setups. Little is known on how well a RE system fares in challenging, but realistic out-of-distribution evaluation setups. To address this gap, we propose CrossRE, a new, freely-available cross-domain benchmark for RE, which comprises six distinct text domains and includes multi-label annotations. An additional innovation is that we release meta-data collected during annotation, to include explanations and flags of difficult instances. We provide an empirical evaluation with a state-of-the-art model for relation classification. As the meta-data enables us to shed new light on the state-of-the-art model, we provide a comprehensive analysis on the impact of difficult cases and find correlations between model and human annotations. Overall, our empirical investigation highlights the difficulty of cross-domain RE. We release our dataset, to spur more research in this direction.
doi: 10.18653/v1/2022.findings-emnlp.263
__index_level_0__: 26,775
(all other fields: null)

entry_type: inproceedings
citation_key: sun-etal-2022-probing
title: Probing Structural Knowledge from Pre-trained Language Model for Argumentation Relation Classification
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.264/
author: Sun, Yang and Liang, Bin and Bao, Jianzhu and Yang, Min and Xu, Ruifeng
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3605--3615
abstract:
Extracting fine-grained structural information between argumentation component (AC) pairs is essential for argumentation relation classification (ARC). However, most previous studies attempt to model the relationship between AC pairs using AC level similarity or semantically relevant features. They ignore the complex interaction between AC pairs and cannot effectively reason the argumentation relation deeply.Therefore, in this paper, we propose a novel dual prior graph neural network (DPGNN) to jointly explore the probing knowledge derived from pre-trained language models (PLMs) and the syntactical information for comprehensively modeling the relationship between AC pairs. Specifically, we construct a probing graph by using probing knowledge derived from PLMs to recognize and align the relational information within and across the argumentation components. In addition, we propose a mutual dependency graph for the AC pair to reason the fine-grained syntactic structural information, in which the syntactical correlation between words is set by the dependency information within AC and mutual attention mechanism across ACs. The knowledge learned from the probing graph and the dependency graph are combined to comprehensively capture the aligned relationships of AC pairs for improving the results of ARC. Experimental results on three public datasets show that DPGNN outperforms the state-of-the-art baselines by a noticeable margin.
doi: 10.18653/v1/2022.findings-emnlp.264
__index_level_0__: 26,776
(all other fields: null)

entry_type: inproceedings
citation_key: xiu-etal-2022-logicnmr
title: {L}ogic{NMR}: Probing the Non-monotonic Reasoning Ability of Pre-trained Language Models
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.265/
author: Xiu, Yeliang and Xiao, Zhanhao and Liu, Yongmei
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3616--3626
abstract:
The logical reasoning capabilities of pre-trained language models have recently received much attention. As one of the vital reasoning paradigms, non-monotonic reasoning refers to the fact that conclusions may be invalidated with new information. Existing work has constructed a non-monotonic inference dataset $\delta$-NLI and explored the performance of language models on it. However, the $\delta$-NLI dataset is entangled with commonsense reasoning. In this paper, we explore the pure non-monotonic reasoning ability of pre-trained language models. We build a non-monotonic reasoning benchmark, named LogicNMR, with explicit default rules and iterative updates. In the experimental part, the performance of popular language models on LogicNMR is explored from the perspectives of accuracy, generalization, proof-based traceability and robustness. The experimental results show that even though the fine-tuned language models achieve an accuracy of more than 94.4{\%} on LogicNMR, they perform unsatisfactorily, with a significant drop, in generalization and proof-based traceability.
doi: 10.18653/v1/2022.findings-emnlp.265
__index_level_0__: 26,777
(all other fields: null)

entry_type: inproceedings
citation_key: he-etal-2022-cheaters
title: Cheater`s Bowl: Human vs. Computer Search Strategies for Open-Domain {QA}
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.266/
author: He, Wanrong and Mao, Andrew and Boyd-Graber, Jordan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3627--3639
abstract:
For humans and computers, the first step in answering an open-domain question is retrieving a set of relevant documents from a large corpus. However, the strategies that computers use fundamentally differ from those of humans. To better understand these differences, we design a gamified interface for data collection{---}Cheater`s Bowl{---}where a human answers complex questions with access to both traditional and modern search tools. We collect a dataset of human search sessions, analyze human search strategies, and compare them to state-of-the-art multi-hop QA models. Humans query logically, apply dynamic search chains, and use world knowledge to boost searching. We demonstrate how human queries can improve the accuracy of existing systems and propose improving the future design of QA models.
doi: 10.18653/v1/2022.findings-emnlp.266
__index_level_0__: 26,778
(all other fields: null)

entry_type: inproceedings
citation_key: wu-etal-2022-frsum
title: {FRSUM}: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.267/
author: Wu, Wenhao and Li, Wei and Liu, Jiachen and Xiao, Xinyan and Cao, Ziqiang and Li, Sujian and Wu, Hua
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3640--3654
abstract:
Despite being able to generate fluent and grammatical text, current Seq2Seq summarization models still suffering from the unfaithful generation problem.In this paper, we study the faithfulness of existing systems from a new perspective of factual robustness which is the ability to correctly generate factual information over adversarial unfaithful information.We first measure a model`sfactual robustness by its success rate to defend against adversarial attacks when generating factual information.The factual robustness analysis on a wide range of current systems shows its good consistency with human judgments on faithfulness.Inspired by these findings, we propose to improve the faithfulness of a model by enhancing its factual robustness.Specifically, we propose a novel training strategy, namely FRSUM, which teaches the model to defend against both explicit adversarial samples and implicit factual adversarial perturbations.Extensive automatic and human evaluation results show that FRSUM consistently improves the faithfulness of various Seq2Seq models, such as T5, BART.
doi: 10.18653/v1/2022.findings-emnlp.267
__index_level_0__: 26,779
(all other fields: null)

entry_type: inproceedings
citation_key: ormazabal-etal-2022-poelm
title: {P}oe{LM}: A Meter- and Rhyme-Controllable Language Model for Unsupervised Poetry Generation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.268/
author: Ormazabal, Aitor and Artetxe, Mikel and Agirrezabal, Manex and Soroa, Aitor and Agirre, Eneko
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3655--3670
abstract:
Formal verse poetry imposes strict constraints on the meter and rhyme scheme of poems. Most prior work on generating this type of poetry uses existing poems for supervision, which are difficult to obtain for most languages and poetic forms. In this work, we propose an unsupervised approach to generate poems that follow any given meter and rhyme scheme, without requiring any poetic text for training. Our method works by splitting a regular, non-poetic corpus into phrases, prepending control codes that describe the length and end rhyme of each phrase, and training a transformer language model in the augmented corpus. The transformer learns to link the structure descriptor with the control codes to the number of lines, their length and their end rhyme. During inference, we build control codes for the desired meter and rhyme scheme, and condition our language model on them to generate formal verse poetry. Experiments in Spanish and Basque show that our approach is able to generate valid poems, which are often comparable in quality to those written by humans.
doi: 10.18653/v1/2022.findings-emnlp.268
__index_level_0__: 26,780
(all other fields: null)

entry_type: inproceedings
citation_key: ye-etal-2022-progen
title: {P}ro{G}en: Progressive Zero-shot Dataset Generation via In-context Feedback
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.269/
author: Ye, Jiacheng and Gao, Jiahui and Wu, Zhiyong and Feng, Jiangtao and Yu, Tao and Kong, Lingpeng
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3671--3683
abstract:
Recently, dataset-generation-based zero-shot learning has shown promising results by training a task-specific model with a dataset synthesized from large pre-trained language models (PLMs). The final task-specific model often achieves compatible or even better performance than PLMs under the zero-shot setting, with orders of magnitude fewer parameters.However, synthetic datasets have their drawbacks. They have long being suffering from the low-quality issue (e.g., low informativeness, redundancy). This explains why the massive synthetic data does not lead to better performance {--} a scenario we would expect in the human-labeled data. To improve the quality in dataset synthesis, we propose a progressive zero-shot dataset generation framework, ProGen, which leverages the feedback from the task-specific model to guide the generation of new training data via in-context examples.Extensive experiments on five text classification datasets demonstrate the effectiveness of the proposed approach. We also show ProGen achieves on-par or superior performance with only 1{\%} synthetic dataset size, when comparing to baseline methods without in-context feedback.
doi: 10.18653/v1/2022.findings-emnlp.269
__index_level_0__: 26,781
(all other fields: null)

entry_type: inproceedings
citation_key: zhang-etal-2022-constructing
title: Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.270/
author: Zhang, Zhexin and Cheng, Jiale and Sun, Hao and Deng, Jiawen and Mi, Fei and Wang, Yasheng and Shang, Lifeng and Huang, Minlie
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3684--3697
abstract:
Large pretrained language models can easily produce toxic or biased content, which is prohibitive for practical use. In order to detect such toxic generations, existing methods rely on templates, real-world data extraction, crowdsourcing workers or automatic generation to construct adversarial contexts that are likely to induce toxic generations. However, what type of context is more likely to induce unsafe responses is still under-explored. In this paper, we identify that context toxicity and context category (e.g., profanity, insult, drugs, etc.) are two important factors to cause safety issues in response generation. Hence, we propose a method called reverse generation to construct adversarial contexts conditioned on a given response, with the flexibility to control category, toxicity level and inductivity of the generated contexts. Via reverse generation, we augment the existing BAD dataset and construct a new dataset BAD+ which contains more than 120K diverse and highly inductive contexts in 12 categories. We test three popular pretrained dialogue models (Blender, DialoGPT and Plato2) and find that BAD+ can largely expose their safety problems. Furthermore, we show that BAD+ can greatly enhance the safety of generation, and we reveal the key factors of safety improvement. Our code and dataset is available at \url{https://github.com/thu-coai/Reverse_Generation}.
doi: 10.18653/v1/2022.findings-emnlp.270
__index_level_0__: 26,782
(all other fields: null)

entry_type: inproceedings
citation_key: si-etal-2022-language
title: Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in {VQA}
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.271/
author: Si, Qingyi and Meng, Fandong and Zheng, Mingyu and Lin, Zheng and Liu, Yuanxin and Fu, Peng and Cao, Yanan and Wang, Weiping and Zhou, Jie
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3698--3712
abstract:
Visual Question Answering (VQA) models are prone to learn the shortcut solution formed by dataset biases rather than the intended solution. To evaluate the VQA models' reasoning ability beyond shortcut learning, the VQA-CP v2 dataset introduces a distribution shift between the training and test set given a question type. In this way, the model cannot use the training set shortcut (from question type to answer) to perform well on the test set. However, VQA-CP v2 only considers one type of shortcut and thus still cannot guarantee that the model relies on the intended solution rather than a solution specific to this shortcut. To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets. In addition, we overcome the three troubling practices in the use of VQA-CP v2, e.g., selecting models using OOD test sets, and further standardize OOD evaluation procedure. Our benchmark provides a more rigorous and comprehensive testbed for shortcut learning in VQA. We benchmark recent methods and find that methods specifically designed for particular shortcuts fail to simultaneously generalize to our varying OOD test sets. We also systematically study the varying shortcuts and provide several valuable findings, which may promote the exploration of shortcut learning in VQA.
doi: 10.18653/v1/2022.findings-emnlp.271
__index_level_0__: 26,783
(all other fields: null)

entry_type: inproceedings
citation_key: kim-etal-2022-bridging
title: Bridging the Training-Inference Gap for Dense Phrase Retrieval
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.272/
author: Kim, Gyuwan and Lee, Jinhyuk and Oguz, Barlas and Xiong, Wenhan and Zhang, Yizhe and Mehdad, Yashar and Wang, William Yang
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3713--3724
abstract:
Building dense retrievers requires a series of standard procedures, including training and validating neural models and creating indexes for efficient search. However, these procedures are often misaligned in that training objectives do not exactly reflect the retrieval scenario at inference time. In this paper, we explore how the gap between training and inference in dense retrieval can be reduced, focusing on dense phrase retrieval (Lee et al., 2021) where billions of representations are indexed at inference. Since validating every dense retriever with a large-scale index is practically infeasible, we propose an efficient way of validating dense retrievers using a small subset of the entire corpus. This allows us to validate various training strategies including unifying contrastive loss terms and using hard negatives for phrase retrieval, which largely reduces the training-inference discrepancy. As a result, we improve top-1 phrase retrieval accuracy by 2 3 points and top-20 passage retrieval accuracy by 2 4 points for open-domain question answering. Our work urges modeling dense retrievers with careful consideration of training and inference via efficient validation while advancing phrase retrieval as a general solution for dense retrieval.
doi: 10.18653/v1/2022.findings-emnlp.272
__index_level_0__: 26,784
(all other fields: null)

entry_type: inproceedings
citation_key: yu-etal-2022-beyond
title: Beyond Counting Datasets: A Survey of Multilingual Dataset Construction and Necessary Resources
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.273/
author: Yu, Xinyan and Chatterjee, Trina and Asai, Akari and Hu, Junjie and Choi, Eunsol
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3725--3743
abstract:
While the NLP community is generally aware of resource disparities among languages, we lack research that quantifies the extent and types of such disparity. Prior surveys estimating the availability of resources based on the number of datasets can be misleading as dataset quality varies: many datasets are automatically induced or translated from English data. To provide a more comprehensive picture of language resources, we examine the characteristics of 156 publicly available NLP datasets. We manually annotate how they are created, including input text and label sources and tools used to build them, and what they study, tasks they address and motivations for their creation. After quantifying the qualitative NLP resource gap across languages, we discuss how to improve data collection in low-resource languages. We survey language-proficient NLP researchers and crowd workers per language, finding that their estimated availability correlates with dataset availability. Through crowdsourcing experiments, we identify strategies for collecting high-quality multilingual data on the Mechanical Turk platform. We conclude by making macro and micro-level suggestions to the NLP community and individual researchers for future multilingual data development.
doi: 10.18653/v1/2022.findings-emnlp.273
__index_level_0__: 26,785
(all other fields: null)

entry_type: inproceedings
citation_key: peng-etal-2022-ernie
title: {ERNIE}-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.274/
author: Peng, Qiming and Pan, Yinxu and Wang, Wenjin and Luo, Bin and Zhang, Zhenyu and Huang, Zhengjie and Cao, Yuhui and Yin, Weichong and Chen, Yongfeng and Zhang, Yin and Feng, Shikun and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3744--3756
abstract:
Recent years have witnessed the rise and success of pre-training techniques in visually-rich document understanding. However, most existing methods lack the systematic mining and utilization of layout-centered knowledge, leading to sub-optimal performances. In this paper, we propose ERNIE-Layout, a novel document pre-training solution with layout knowledge enhancement in the whole workflow, to learn better representations that combine the features from text, layout, and image. Specifically, we first rearrange input sequences in the serialization stage, and then present a correlative pre-training task, reading order prediction, to learn the proper reading order of documents. To improve the layout awareness of the model, we integrate a spatial-aware disentangled attention into the multi-modal transformer and a replaced regions prediction task into the pre-training phase. Experimental results show that ERNIE-Layout achieves superior performance on various downstream tasks, setting new state-of-the-art on key information extraction, document image classification, and document question answering datasets. The code and models are publicly available at PaddleNLP.
doi: 10.18653/v1/2022.findings-emnlp.274
__index_level_0__: 26,786
(all other fields: null)

entry_type: inproceedings
citation_key: an-etal-2022-charge
title: Do Charge Prediction Models Learn Legal Theory?
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.275/
author: An, Zhenwei and Huang, Quzhe and Jiang, Cong and Feng, Yansong and Zhao, Dongyan
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3757--3768
abstract:
The charge prediction task aims to predict the charge for a case given its fact description. Recent models have already achieved impressive accuracy in this task, however, little is understood about the mechanisms they use to perform the judgment.For practical applications, a charge prediction model should conform to the certain legal theory in civil law countries, as under the framework of civil law, all cases are judged according to certain local legal theories. In China, for example, nearly all criminal judges make decisions based on the Four Elements Theory (FET).In this paper, we argue that trustworthy charge prediction models should take legal theories into consideration, and standing on prior studies in model interpretation, we propose three principles for trustworthy models should follow in this task, which are sensitive, selective, and presumption of innocence.We further design a new framework to evaluate whether existing charge prediction models learn legal theories. Our findings indicate that, while existing charge prediction models meet the selective principle on a benchmark dataset, most of them are still not sensitive enough and do not satisfy the presumption of innocence. Our code and dataset are released at \url{https://github.com/ZhenweiAn/EXP_LJP}.
doi: 10.18653/v1/2022.findings-emnlp.275
__index_level_0__: 26,787
(all other fields: null)

entry_type: inproceedings
citation_key: bae-etal-2022-keep
title: Keep Me Updated! Memory Management in Long-term Conversations
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.276/
author: Bae, Sanghwan and Kwak, Donghyun and Kang, Soyoung and Lee, Min Young and Kim, Sungdong and Jeong, Yuin and Kim, Hyeri and Lee, Sang-Woo and Park, Woomyoung and Sung, Nako
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3769--3787
abstract:
Remembering important information from the past and continuing to talk about it in the present are crucial in long-term conversations. However, previous literature does not deal with cases where the memorized information is outdated, which may cause confusion in later conversations. To address this issue, we present a novel task and a corresponding dataset of memory management in long-term conversations, in which bots keep track of and bring up the latest information about users while conversing through multiple sessions. In order to support more precise and interpretable memory, we represent memory as unstructured text descriptions of key information and propose a new mechanism of memory management that selectively eliminates invalidated or redundant information. Experimental results show that our approach outperforms the baselines that leave the stored memory unchanged in terms of engagingness and humanness, with larger performance gap especially in the later sessions.
doi: 10.18653/v1/2022.findings-emnlp.276
__index_level_0__: 26,788
(all other fields: null)

entry_type: inproceedings
citation_key: wan-etal-2022-unified
title: A Unified Dialogue User Simulator for Few-shot Data Augmentation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.277/
author: Wan, Dazhen and Zhang, Zheng and Zhu, Qi and Liao, Lizi and Huang, Minlie
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3788--3799
abstract:
Pre-trained language models have shown superior performance in task-oriented dialogues. However, existing datasets are on limited scales, which cannot support large-scale pre-training. Fortunately, various data augmentation methods have been developed to augment large-scale task-oriented dialogue corpora. However, they heavily rely on annotated data in the target domain, which require a tremendous amount of data collection and human labeling work. In this paper, we build a unified dialogue user simulation model by pre-training on several publicly available datasets. The model can then be tuned on a target domain with few-shot data. The experiments on a target dataset across multiple domains show that our proposed model brings remarkable performance increases through data augmentation.
doi: 10.18653/v1/2022.findings-emnlp.277
__index_level_0__: 26,789
(all other fields: null)

entry_type: inproceedings
citation_key: sun-etal-2022-error
title: An Error-Guided Correction Model for {C}hinese Spelling Error Correction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.278/
author: Sun, Rui and Wu, Xiuyu and Wu, Yunfang
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3800--3810
abstract:
Although existing neural network approaches have achieved great progress on Chinese spelling correction, there is still room to improve. The model is required to avoid over-correction and to distinguish a correct token from its phonological and visual similar ones. In this paper, we propose an error-guided correction model to address these issues. By borrowing the powerful ability of the pre-trained BERT model, we propose a novel zero-shot error detection method to do a preliminary detection, which guides our model to attend more on the probably wrong tokens in encoding and to avoid modifying the correct tokens in generating. Furthermore, we introduce a new loss function to integrate the error confusion set, which enables our model to distinguish similar tokens. Moreover, our model supports highly parallel decoding to meet real applications. Experiments are conducted on widely used benchmarks. Our model achieves superior performance against state-of-the-art approaches by a remarkable margin, on both the quality and computation speed.
doi: 10.18653/v1/2022.findings-emnlp.278
__index_level_0__: 26,790
(all other fields: null)

entry_type: inproceedings
citation_key: hupert-etal-2022-describing
title: Describing Sets of Images with Textual-{PCA}
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.279/
author: Hupert, Oded and Schwartz, Idan and Wolf, Lior
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3811--3821
abstract:
We seek to semantically describe a set of images, capturing both the attributes of single images and the variations within the set. Our procedure is analogous to Principle Component Analysis, in which the role of projection vectors is replaced with generated phrases. First, a centroid phrase that has the largest average semantic similarity to the images in the set is generated, where both the computation of the similarity and the generation are based on pretrained vision-language models. Then, the phrase that generates the highest variation among the similarity scores is generated, using the same models. The next phrase maximizes the variance subject to being orthogonal, in the latent space, to the highest-variance phrase, and the process continues. Our experiments show that our method is able to convincingly capture the essence of image sets and describe the individual elements in a semantically meaningful way within the context of the entire set. Our code is available at: \url{https://github.com/OdedH/textual-pca}.
doi: 10.18653/v1/2022.findings-emnlp.279
__index_level_0__: 26,791
(all other fields: null)

entry_type: inproceedings
citation_key: reid-neubig-2022-learning
title: Learning to Model Editing Processes
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.280/
author: Reid, Machel and Neubig, Graham
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3822--3832
abstract:
Most existing sequence generation models produce outputs in one pass, usually left-to-right. However, this is in contrast with a more natural approach that humans use in generating content; iterative refinement and editing. Recent work has introduced edit-based models for various tasks (such as neural machine translation and text style transfer), but these generally model a single edit step. In this work, we propose modeling editing processes, modeling the whole process of iteratively generating sequences. We form a conceptual framework to describe the likelihood of multi-step edits, and describe neural models that can learn a generative model of sequences based on these multistep edits. We introduce baseline results and metrics on this task, finding that modeling editing processes improves performance on a variety of axes on both our proposed task and related downstream tasks compared to previous single-step models of edits.
doi: 10.18653/v1/2022.findings-emnlp.280
__index_level_0__: 26,792
(all other fields: null)

entry_type: inproceedings
citation_key: shen-etal-2022-palt
title: {PALT}: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.281/
author: Shen, Jianhao and Wang, Chenguang and Yuan, Ye and Han, Jiawei and Ji, Heng and Sen, Koushik and Zhang, Ming and Song, Dawn
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3833--3847
abstract:
This paper presents a parameter-lite transfer learning approach of pretrained language models (LM) for knowledge graph (KG) completion. Instead of finetuning, which modifies all LM parameters, we only tune a few new parameters while keeping the original LM parameters fixed. We establish this via reformulating KG completion as a {\textquotedblleft}fill-in-the-blank{\textquotedblright} task, and introducing a parameter-lite encoder on top of the original LMs. We show that, by tuning far fewer parameters than finetuning, LMs transfer non-trivially to most tasks and reach competitiveness with prior state-of-the-art approaches. For instance, we outperform the fully finetuning approaches on a KG completion benchmark by tuning only 1{\%} of the parameters.
doi: 10.18653/v1/2022.findings-emnlp.281
__index_level_0__: 26,793
(all other fields: null)

entry_type: inproceedings
citation_key: zhou-etal-2022-prompt-based
title: Prompt-based Connective Prediction Method for Fine-grained Implicit Discourse Relation Recognition
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.282/
author: Zhou, Hao and Lan, Man and Wu, Yuanbin and Chen, Yuefeng and Ma, Meirong
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3848--3858
abstract:
Due to the absence of connectives, implicit discourse relation recognition (IDRR) is still a challenging and crucial task in discourse analysis. Most of the current work adopted multitask learning to aid IDRR through explicit discourse relation recognition (EDRR) or utilized dependencies between discourse relation labels to constrain model predictions. But these methods still performed poorly on fine-grained IDRR and even utterly misidentified on most of the few-shot discourse relation classes. To address these problems, we propose a novel Prompt-based Connective Prediction (PCP) method for IDRR. Our method instructs large-scale pre-trained models to use knowledge relevant to discourse relation and utilizes the strong correlation between connectives and discourse relation to help the model recognize implicit discourse relations. Experimental results show that our method surpasses the current state-of-the-art model and achieves significant improvements on those fine-grained few-shot discourse relation. Moreover, our approach is able to be transferred to EDRR and obtain acceptable results. Our code is released in https://github.com/zh-i9/PCP-for-IDRR.
doi: 10.18653/v1/2022.findings-emnlp.282
__index_level_0__: 26,794
(all other fields: null)

entry_type: inproceedings
citation_key: kumar-etal-2022-utilizing
title: On Utilizing Constituent Language Resources to Improve Downstream Tasks in {H}inglish
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.283/
author: Kumar, Vishwajeet and Murthy, Rudra and Dhamecha, Tejas
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3859--3865
abstract:
Performance of downstream NLP tasks on code-switched Hindi-English (aka ) continues to remain a significant challenge. Intuitively, Hindi and English corpora should aid improve task performance on Hinglish. We show that meta-learning framework can effectively utilize the the labelled resources of the downstream tasks in the constituent languages. The proposed approach improves the performance on downstream tasks on code-switched language. We experiment with code-switching benchmark GLUECoS and report significant improvements.
doi: 10.18653/v1/2022.findings-emnlp.283
__index_level_0__: 26,795
(all other fields: null)

entry_type: inproceedings
citation_key: neelam-etal-2022-sygma
title: {SYGMA}: A System for Generalizable and Modular Question Answering Over Knowledge Bases
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.284/
author: Neelam, Sumit and Sharma, Udit and Karanam, Hima and Ikbal, Shajith and Kapanipathi, Pavan and Abdelaziz, Ibrahim and Mihindukulasooriya, Nandana and Lee, Young-Suk and Srivastava, Santosh and Pendus, Cezar and Dana, Saswati and Garg, Dinesh and Fokoue, Achille and Bhargav, G P Shrivatsa and Khandelwal, Dinesh and Ravishankar, Srinivas and Gurajada, Sairam and Chang, Maria and Uceda-Sosa, Rosario and Roukos, Salim and Gray, Alexander and Lima, Guilherme and Riegel, Ryan and Luus, Francois and Subramaniam, L V
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3866--3879
abstract:
Knowledge Base Question Answering (KBQA) involving complex reasoning is emerging as an important research direction. However, most KBQA systems struggle with generalizability, particularly on two dimensions: (a) across multiple knowledge bases, where existing KBQA approaches are typically tuned to a single knowledge base, and (b) across multiple reasoning types, where majority of datasets and systems have primarily focused on multi-hop reasoning. In this paper, we present SYGMA, a modular KBQA approach developed with goal of generalization across multiple knowledge bases and multiple reasoning types. To facilitate this, SYGMA is designed as two high level modules: 1) KB-agnostic question understanding module that remain common across KBs, and generates logic representation of the question with high level reasoning constructs that are extensible, and 2) KB-specific question mapping and answering module to address the KB-specific aspects of the answer extraction. We evaluated SYGMA on multiple datasets belonging to distinct knowledge bases (DBpedia and Wikidata) and distinct reasoning types (multi-hop and temporal). State-of-the-art or competitive performances achieved on those datasets demonstrate its generalization capability.
doi: 10.18653/v1/2022.findings-emnlp.284
__index_level_0__: 26,796
(all other fields: null)

entry_type: inproceedings
citation_key: du-etal-2022-instance
title: Instance-Guided Prompt Learning for Few-Shot Text Matching
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.285/
author: Du, Jia and Zhang, Xuanyu and Wang, Siyi and Wang, Kai and Zhou, Yanquan and Li, Lei and Yang, Qing and Xu, Dongliang
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3880--3886
abstract:
Few-shot text matching is a more practical technique in natural language processing (NLP) to determine whether two texts are semantically identical. They primarily design patterns to reformulate text matching into a pre-trained task with uniform prompts across all instances. But they fail to take into account the connection between prompts and instances. This paper argues that dynamically strengthening the correlation between particular instances and the prompts is necessary because fixed prompts cannot adequately fit all diverse instances in inference. We suggest IGATE: Instance-Guided prompt leArning for few-shoT tExt matching, a novel pluggable prompt learning method. The gate mechanism used by IGATE, which is between the embedding and the PLM encoders, makes use of the semantics of instances to regulate the effects of the gate on the prompt tokens. The experimental findings show that IGATE achieves SOTA performance on MRPC and QQP, outperforming strong baselines. GitHub will host the release of codes.
doi: 10.18653/v1/2022.findings-emnlp.285
__index_level_0__: 26,797
(all other fields: null)

entry_type: inproceedings
citation_key: otmakhova-etal-2022-m3
title: {M}3: Multi-level dataset for Multi-document summarisation of Medical studies
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.286/
author: Otmakhova, Yulia and Verspoor, Karin and Baldwin, Timothy and Jimeno Yepes, Antonio and Lau, Jey Han
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3887--3901
abstract:
We present M3 (Multi-level dataset for Multi-document summarisation of Medical studies), a benchmark dataset for evaluating the quality of summarisation systems in the biomedical domain. The dataset contains sets of multiple input documents and target summaries of three levels of complexity: documents, sentences, and propositions. The dataset also includes several levels of annotation, including biomedical entities, direction, and strength of relations between them, and the discourse relationships between the input documents ({\textquotedblleft}contradiction{\textquotedblright} or {\textquotedblleft}agreement{\textquotedblright}). We showcase usage scenarios of the dataset by testing 10 generic and domain-specific summarisation models in a zero-shot setting, and introduce a probing task based on counterfactuals to test if models are aware of the direction and strength of the conclusions generated from input studies.
doi: 10.18653/v1/2022.findings-emnlp.286
__index_level_0__: 26,798
(all other fields: null)

entry_type: inproceedings
citation_key: hou-etal-2022-adapters
title: Adapters for Enhanced Modeling of Multilingual Knowledge and Text
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.287/
author: Hou, Yifan and Jiao, Wenxiang and Liu, Meizhen and Allen, Carl and Tu, Zhaopeng and Sachan, Mrinmaya
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3902--3917
abstract:
Large language models appear to learn facts from the large text corpora they are trained on. Such facts are encoded implicitly within their many parameters, making it difficult to verify or manipulate what knowledge has been learned. Language models have recently been extended to multilingual language models (MLLMs), enabling knowledge to be learned across hundreds of languages. Meanwhile, knowledge graphs contain facts in an explicit triple format, which require careful and costly curation and are only available in a few high-resource languages, restricting their research and application. To address these issues, we propose to enhance MLLMs with knowledge from multilingual knowledge graphs (MLKGs) so as to tackle language and knowledge graph tasks across many languages, including low-resource ones. Specifically, we introducea lightweight adapter set to enhance MLLMs with cross-lingual entity alignment and facts from MLKGs for many languages. Experiments on common benchmarks show that such enhancement benefits both MLLMs and MLKGs, achieving: (1) comparable or improved performance for knowledge graph completion and entity alignment relative to baselines, especially for low-resource languages (for which knowledge graphs are unavailable); and (2) improved MLLM performance on language understanding tasks that require multilingual factual knowledge; all while maintaining performance on other general language tasks.
doi: 10.18653/v1/2022.findings-emnlp.287
__index_level_0__: 26,799
(all other fields: null)

entry_type: inproceedings
citation_key: stephan-etal-2022-sepll
title: {S}ep{LL}: Separating Latent Class Labels from Weak Supervision Noise
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.findings-emnlp.288/
author: Stephan, Andreas and Kougia, Vasiliki and Roth, Benjamin
booktitle: Findings of the Association for Computational Linguistics: EMNLP 2022
pages: 3918--3929
abstract:
In the weakly supervised learning paradigm, labeling functions automatically assign heuristic, often noisy, labels to data samples. In this work, we provide a method for learning from weak labels by separating two types of complementary information associated with the labeling functions: information related to the target label and information specific to one labeling function only. Both types of information are reflected to different degrees by all labeled instances. In contrast to previous works that aimed at correcting or removing wrongly labeled instances, we learn a branched deep model that uses all data as-is, but splits the labeling function information in the latent space. Specifically, we propose the end-to-end model SepLL which extends a transformer classifier by introducing a latent space for labeling function specific and task-specific information. The learning signal is given only by the labeling function matches; no pre-processing or label model is required for our method. Notably, the task prediction is made from the latent layer without any direct task signal. Experiments on Wrench text classification tasks show that our model is competitive with the state-of-the-art, and yields a new best average performance.
null
null
10.18653/v1/2022.findings-emnlp.288
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,800
inproceedings
rezaee-camacho-collados-2022-probing
Probing Relational Knowledge in Language Models via Word Analogies
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.289/
Rezaee, Kiamehr and Camacho-Collados, Jose
Findings of the Association for Computational Linguistics: EMNLP 2022
3930--3936
Understanding relational knowledge plays an integral part in natural language comprehension. When it comes to pre-trained language models (PLMs), prior work has focused on probing relational knowledge by filling in the blanks in pre-defined prompts such as {\textquotedblleft}The capital of France is {---}{\textquotedblright}. However, these probes may be affected by the co-occurrence of target relation words and entities (e.g. {\textquotedblleft}capital{\textquotedblright}, {\textquotedblleft}France{\textquotedblright} and {\textquotedblleft}Paris{\textquotedblright}) in the pre-training corpus. In this work, we extend these probing methodologies, leveraging analogical proportions as a proxy to probe relational knowledge in transformer-based PLMs without directly presenting the desired relation. In particular, we analysed the ability of PLMs to understand (1) the directionality of a given relation (e.g. Paris-France is not the same as France-Paris); (2) the ability to distinguish types on a given relation (both France and Japan are countries); and (3) the relation itself (Paris is the capital of France, but not Rome). Our results show how PLMs are extremely accurate at (1) and (2), but have clear room for improvement for (3). To better understand the reasons behind this behaviour and the mistakes made by PLMs, we provide an extended quantitative analysis based on relevant factors such as frequency.
null
null
10.18653/v1/2022.findings-emnlp.289
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,801
inproceedings
zhao-etal-2022-semi
Semi-Supervised Lifelong Language Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.290/
Zhao, Yingxiu and Zheng, Yinhe and Yu, Bowen and Tian, Zhiliang and Lee, Dongkyu and Sun, Jian and Li, Yongbin and Zhang, Nevin L.
Findings of the Association for Computational Linguistics: EMNLP 2022
3937--3951
Lifelong learning aims to accumulate knowledge and alleviate catastrophic forgetting when learning tasks sequentially. However, existing lifelong language learning methods only focus on the supervised learning setting. Unlabeled data, which can be easily accessed in real-world scenarios, are underexplored. In this paper, we explore a novel setting, semi-supervised lifelong language learning (SSLL), where a model learns sequentially arriving language tasks with both labeled and unlabeled data. We propose an unlabeled data enhanced lifelong learner to explore SSLL. Specifically, we dedicate task-specific modules to alleviate catastrophic forgetting and design two modules to exploit unlabeled data: (1) a virtual supervision enhanced task solver is constructed on a teacher-student framework to mine the underlying knowledge from unlabeled data; and (2) a backward augmented learner is built to encourage knowledge transfer from newly arrived unlabeled data to previous tasks. Experimental results on various language tasks demonstrate our model's effectiveness and superiority over competitive baselines under the new SSLL setting.
null
null
10.18653/v1/2022.findings-emnlp.290
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,802
inproceedings
qi-etal-2022-parameter
Parameter-free Automatically Prompting: A Latent Pseudo Label Mapping Model for Prompt-based Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.291/
Qi, Jirui and Zhang, Richong and Chen, Junfan and Kim, Jaein and Mao, Yongyi
Findings of the Association for Computational Linguistics: EMNLP 2022
3952--3962
Prompt-based learning has achieved excellent performance in few-shot learning by mapping the outputs of the pre-trained language model to the labels with the help of a label mapping component. Existing manual label mapping (MLM) methods achieve good results but heavily rely on expensive human knowledge. Automatic label mapping (ALM) methods that learn the mapping functions with extra parameters have shown their potential. However, no effective ALM model comparable to MLM methods has been developed yet due to the limited data. In this paper, we propose a Latent Pseudo Label Mapping (LPLM) method that optimizes the label mapping without human knowledge and extra parameters. LPLM is built upon a probabilistic latent model and is iteratively self-improved with an EM-style algorithm. The empirical results demonstrate that our LPLM method is superior to the mainstream ALM methods and significantly outperforms the SOTA method in few-shot classification tasks. Moreover, LPLM also shows impressively better performance than the vanilla MLM method, which requires extra task-specific prior knowledge.
null
null
10.18653/v1/2022.findings-emnlp.291
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,803
inproceedings
zhou-etal-2022-exploring
Exploring Logographic Image for {C}hinese Aspect-based Sentiment Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.292/
Zhou, Xiabing and Feng, Renjie and Jiang, Xiaotong and Wang, Zhongqing
Findings of the Association for Computational Linguistics: EMNLP 2022
3963--3972
In logographic languages like Chinese, word meanings are constructed using specific character formations, which can help to disambiguate word senses and are beneficial for sentiment classification. However, such knowledge is rarely explored in previous sentiment analysis methods. In this paper, we focus on exploring the logographic information for aspect-based sentiment classification in Chinese text. Specifically, we employ a logographic image to capture an internal morphological structure from the character sequence. The logographic image is also used to learn the external relations among context and aspect words. Furthermore, we propose a multimodal language model to explicitly incorporate a logographic image with review text for aspect-based sentiment classification in Chinese. Experimental results show that our method brings substantial performance improvement over strong baselines. The results also indicate that the logographic image is very important for exploring the internal structure and external relations from the character sequence.
null
null
10.18653/v1/2022.findings-emnlp.292
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,804
inproceedings
artetxe-etal-2022-role
On the Role of Bidirectionality in Language Model Pre-Training
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.293/
Artetxe, Mikel and Du, Jingfei and Goyal, Naman and Zettlemoyer, Luke and Stoyanov, Veselin
Findings of the Association for Computational Linguistics: EMNLP 2022
3973--3985
Prior work on language model pre-training has explored different architectures and learning objectives, but differences in data, hyperparameters and evaluation make a principled comparison difficult. In this work, we focus on bidirectionality as a key factor that differentiates existing approaches, and present a comprehensive study of its role in next token prediction, text infilling, zero-shot priming and fine-tuning. We propose a new framework that generalizes prior approaches, including fully unidirectional models like GPT, fully bidirectional models like BERT, and hybrid models like CM3 and prefix LM. Our framework distinguishes between two notions of bidirectionality (bidirectional context and bidirectional attention) and allows us to control each of them separately. We find that the optimal configuration is largely application-dependent (e.g., bidirectional attention is beneficial for fine-tuning and infilling, but harmful for next token prediction and zero-shot priming). We train models with up to 6.7B parameters, and find differences to remain consistent at scale. While prior work on scaling has focused on left-to-right autoregressive models, our results suggest that this approach comes with some trade-offs, and it might be worthwhile to develop very large bidirectional models.
null
null
10.18653/v1/2022.findings-emnlp.293
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,805
inproceedings
jukic-etal-2022-talk
You Are What You Talk About: Inducing Evaluative Topics for Personality Analysis
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.294/
Juki{\'c}, Josip and Vukojevi{\'c}, Iva and Snajder, Jan
Findings of the Association for Computational Linguistics: EMNLP 2022
3986--3999
Expressing attitude or stance toward entities and concepts is an integral part of human behavior and personality. Recently, evaluative language data has become more accessible with social media's rapid growth, enabling large-scale opinion analysis. However, surprisingly little research examines the relationship between personality and evaluative language. To bridge this gap, we introduce the notion of evaluative topics, obtained by applying topic models to pre-filtered evaluative text from social media. We then link evaluative topics to individual text authors to build their evaluative profiles. We apply evaluative profiling to Reddit comments labeled with personality scores and conduct an exploratory study on the relationship between evaluative topics and Big Five personality facets, aiming for a more interpretable, facet-level analysis. Finally, we validate our approach by observing correlations consistent with prior research in personality psychology.
null
null
10.18653/v1/2022.findings-emnlp.294
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,806
inproceedings
chen-etal-2022-cat
{CAT}-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.295/
Chen, Nuo and Sun, Qiushi and Zhu, Renyu and Li, Xiang and Lu, Xuesong and Gao, Ming
Findings of the Association for Computational Linguistics: EMNLP 2022
4000--4008
Code pre-trained models (CodePTMs) have recently demonstrated significant success in code intelligence. To interpret these models, some probing methods have been applied. However, these methods fail to consider the inherent characteristics of codes. In this paper, to address the problem, we propose a novel probing method CAT-probing to quantitatively interpret how CodePTMs attend code structure. We first denoise the input code sequences based on the token types pre-defined by the compilers to filter those tokens whose attention scores are too small. After that, we define a new metric CAT-score to measure the commonality between the token-level attention scores generated in CodePTMs and the pair-wise distances between corresponding AST nodes. The higher the CAT-score, the stronger the ability of CodePTMs to capture code structure. We conduct extensive experiments to integrate CAT-probing with representative CodePTMs for different programming languages. Experimental results show the effectiveness of CAT-probing in CodePTM interpretation. Our codes and data are publicly available at https://github.com/nchen909/CodeAttention.
null
null
10.18653/v1/2022.findings-emnlp.295
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,807
inproceedings
adams-etal-2022-learning
Learning to Revise References for Faithful Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.296/
Adams, Griffin and Shing, Han-Chin and Sun, Qing and Winestock, Christopher and McKeown, Kathleen and Elhadad, No{\'e}mie
Findings of the Association for Computational Linguistics: EMNLP 2022
4009--4027
In real-world scenarios with naturally occurring datasets, reference summaries are noisy and may contain information that cannot be inferred from the source text. On large news corpora, removing low quality samples has been shown to reduce model hallucinations. Yet, for smaller, and/or noisier corpora, filtering is detrimental to performance. To improve reference quality while retaining all data, we propose a new approach: to selectively re-write unsupported reference sentences to better reflect source data. We automatically generate a synthetic dataset of positive and negative revisions by corrupting supported sentences and learn to revise reference sentences with contrastive learning. The intensity of revisions is treated as a controllable attribute so that, at inference, diverse candidates can be over-generated-then-rescored to balance faithfulness and abstraction. To test our methods, we extract noisy references from publicly available MIMIC-III discharge summaries for the task of hospital-course summarization, and vary the data on which models are trained. According to metrics and human evaluation, models trained on revised clinical references are much more faithful, informative, and fluent than models trained on original or filtered data.
null
null
10.18653/v1/2022.findings-emnlp.296
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,808
inproceedings
ji-2022-towards
Towards Intention Understanding in Suicidal Risk Assessment with Natural Language Processing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.297/
Ji, Shaoxiong
Findings of the Association for Computational Linguistics: EMNLP 2022
4028--4038
Recent applications of natural language processing techniques to suicidal ideation detection and risk assessment frame the detection or assessment task as a text classification problem. Recent advances have developed many models, especially deep learning models, to boost predictive performance. Though the performance (in terms of aggregated evaluation scores) is improving, this position paper urges that better intention understanding is required for reliable suicidal risk assessment with computational methods. This paper reflects the state of natural language processing applied to suicide-associated text classification tasks, differentiates suicidal risk assessment and intention understanding, and points out potential limitations of sentiment features and pretrained language models in suicidal intention understanding. Besides, it urges the necessity for sequential intention understanding and risk assessment, discusses some critical issues in evaluation such as uncertainty, and studies the lack of benchmarks.
null
null
10.18653/v1/2022.findings-emnlp.297
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,809
inproceedings
zhao-etal-2022-impact
On the Impact of Temporal Concept Drift on Model Explanations
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.298/
Zhao, Zhixue and Chrysostomou, George and Bontcheva, Kalina and Aletras, Nikolaos
Findings of the Association for Computational Linguistics: EMNLP 2022
4039--4054
Explanation faithfulness of model predictions in natural language processing is typically evaluated on held-out data from the same temporal distribution as the training data (i.e. synchronous settings). While model performance often deteriorates due to temporal variation (i.e. temporal concept drift), it is currently unknown how explanation faithfulness is impacted when the time span of the target data is different from the data used to train the model (i.e. asynchronous settings). For this purpose, we examine the impact of temporal variation on model explanations extracted by eight feature attribution methods and three select-then-predict models across six text classification tasks. Our experiments show that (i) faithfulness is not consistent under temporal variations across feature attribution methods (e.g. it decreases or increases depending on the method), with an attention-based method demonstrating the most robust faithfulness scores across datasets; and (ii) select-then-predict models are mostly robust in asynchronous settings with only small degradation in predictive performance. Finally, feature attribution methods show conflicting behavior when used in FRESH (i.e. a select-and-predict model) and for measuring sufficiency/comprehensiveness (i.e. as post-hoc methods), suggesting that we need more robust metrics to evaluate post-hoc explanation faithfulness. Code will be made publicly available.
null
null
10.18653/v1/2022.findings-emnlp.298
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,810
inproceedings
nukrai-etal-2022-text
Text-Only Training for Image Captioning using Noise-Injected {CLIP}
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.299/
Nukrai, David and Mokady, Ron and Globerson, Amir
Findings of the Association for Computational Linguistics: EMNLP 2022
4055--4063
We consider the task of image-captioning using only the CLIP model and additional text data at training time and no additional captioned images. Our approach relies on the fact that CLIP is trained to make visual and textual embeddings similar. Therefore, we only need to learn how to translate CLIP textual embeddings back into text, and we can learn how to do this by learning a decoder for the frozen CLIP text encoder using only text. We argue that this intuition is {\textquotedblleft}almost correct{\textquotedblright} because of a gap between the embedding spaces, and propose to rectify this via noise injection during training. We demonstrate the effectiveness of our approach by showing SOTA zero-shot image captioning across four benchmarks, including style transfer. Code, data, and models are available at https://github.com/DavidHuji/CapDec.
null
null
10.18653/v1/2022.findings-emnlp.299
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,811
inproceedings
zhong-etal-2022-improving
Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.300/
Zhong, Qihuang and Ding, Liang and Shen, Li and Mi, Peng and Liu, Juhua and Du, Bo and Tao, Dacheng
Findings of the Association for Computational Linguistics: EMNLP 2022
4064--4085
Fine-tuning large pretrained language models on a limited training corpus usually suffers from poor generalization. Prior works show that the recently-proposed sharpness-aware minimization (SAM) optimization method can improve the model generalization. However, SAM adds a perturbation to each model parameter equally (but not all parameters contribute equally to the optimization of training), which we argue is sub-optimal and will lead to excessive computation. In this paper, we propose a novel optimization procedure, namely FSAM, which introduces a Fisher mask to improve the efficiency and performance of SAM. In short, instead of adding perturbation to all parameters, FSAM uses the Fisher information to identify the important parameters and formulates a Fisher mask to obtain the sparse perturbation, i.e., making the optimizer focus on these important parameters. Experiments on various tasks in the GLUE and SuperGLUE benchmarks show that FSAM consistently outperforms the vanilla SAM by 0.67{--}1.98 average score among four different pretrained models. We also empirically show that FSAM works well in other complex scenarios, e.g., fine-tuning on generation tasks or limited training data. Encouragingly, when training data is limited, FSAM improves SAM by a large margin, i.e., up to 15.1.
null
null
10.18653/v1/2022.findings-emnlp.300
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,812
inproceedings
helwe-etal-2022-tina
{TINA}: Textual Inference with Negation Augmentation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.301/
Helwe, Chadi and Coumes, Simon and Clavel, Chlo{\'e} and Suchanek, Fabian
Findings of the Association for Computational Linguistics: EMNLP 2022
4086--4099
Transformer-based language models achieve state-of-the-art results on several natural language processing tasks. One of these is textual entailment, i.e., the task of determining whether a premise logically entails a hypothesis. However, the models perform poorly on this task when the examples contain negations. In this paper, we propose a new definition of textual entailment that also captures negation. This allows us to develop TINA (Textual Inference with Negation Augmentation), a principled technique for negated data augmentation that can be combined with the unlikelihood loss function. Our experiments with different transformer-based models show that our method can significantly improve the performance of the models on textual entailment datasets with negation {--} without sacrificing performance on datasets without negation.
null
null
10.18653/v1/2022.findings-emnlp.301
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,813
inproceedings
huang-etal-2022-mixed
Mixed-modality Representation Learning and Pre-training for Joint Table-and-Text Retrieval in {O}pen{QA}
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.303/
Huang, Junjie and Zhong, Wanjun and Liu, Qian and Gong, Ming and Jiang, Daxin and Duan, Nan
Findings of the Association for Computational Linguistics: EMNLP 2022
4117--4129
Retrieving evidence from tabular and textual resources is essential for open-domain question answering (OpenQA), which provides more comprehensive information. However, training an effective dense table-text retriever is difficult due to the challenges of table-text discrepancy and the data sparsity problem. To address the above challenges, we introduce an optimized OpenQA Table-Text Retriever (OTTeR) to jointly retrieve tabular and textual evidence. Firstly, we propose to enhance mixed-modality representation learning via two mechanisms: modality-enhanced representation and a mixed-modality negative sampling strategy. Secondly, to alleviate the data sparsity problem and enhance the general retrieval ability, we conduct retrieval-centric mixed-modality synthetic pre-training. Experimental results demonstrate that OTTeR substantially improves the performance of table-and-text retrieval on the OTT-QA dataset. Comprehensive analyses examine the effectiveness of all the proposed mechanisms. Besides, equipped with OTTeR, our OpenQA system achieves the state-of-the-art result on the downstream QA task, with 10.1{\%} absolute improvement in terms of the exact match over the previous best system.
null
null
10.18653/v1/2022.findings-emnlp.303
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,815
inproceedings
ravishankar-nivre-2022-effects
The Effects of Corpus Choice and Morphosyntax on Multilingual Space Induction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.304/
Ravishankar, Vinit and Nivre, Joakim
Findings of the Association for Computational Linguistics: EMNLP 2022
4130--4139
In an effort to study the inductive biases of language models, numerous studies have attempted to use linguistically motivated tasks as a proxy of sorts, wherein performance on these tasks would imply an inductive bias towards a specific linguistic phenomenon. In this study, we attempt to analyse the inductive biases of language models with respect to natural language phenomena, in the context of building multilingual embedding spaces. We sample corpora from 2 sources in 15 languages and train language models on pseudo-bilingual variants of each corpus, created by duplicating each corpus and shifting token indices for half the resulting corpus. We evaluate the cross-lingual capabilities of these LMs, and show that while correlations with language families tend to be weak, other corpus-level characteristics, such as type-token ratio, tend to be more strongly correlated. Finally, we show that multilingual spaces can be built, albeit less effectively, even when additional destructive perturbations are applied to the training corpora, implying that (effectively) bag-of-words models also have an inductive bias that is sufficient for inducing multilingual spaces.
null
null
10.18653/v1/2022.findings-emnlp.304
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,816
inproceedings
sun-etal-2022-modeling
Modeling Complex Dialogue Mappings via Sentence Semantic Segmentation Guided Conditional Variational Auto-Encoder
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.305/
Sun, Bin and Feng, Shaoxiong and Li, Yiwei and Wang, Weichao and Mi, Fei and Li, Yitong and Li, Kan
Findings of the Association for Computational Linguistics: EMNLP 2022
4140--4153
Complex dialogue mappings (CDM), including one-to-many and many-to-one mappings, tend to make dialogue models generate incoherent or dull responses, and modeling these mappings remains a huge challenge for neural dialogue systems. To alleviate these problems, methods such as introducing external information, reconstructing the optimization function, and manipulating data samples have been proposed, but they primarily focus on avoiding training with CDM, inevitably weakening the model's ability to understand CDM in human conversations and limiting further improvements in model performance. This paper proposes a Sentence Semantic Segmentation guided Conditional Variational Auto-Encoder (SegCVAE) method which can model and take advantage of the CDM data. Specifically, to tackle the incoherent problem caused by one-to-many, SegCVAE uses response-related prominent semantics to constrain the latent variable. To mitigate the non-diverse problem brought by many-to-one, SegCVAE segments multiple prominent semantics to enrich the latent variables. Three novel components, Internal Separation, External Guidance, and Semantic Norms, are proposed to achieve SegCVAE. On dialogue generation tasks, both the automatic and human evaluation results show that SegCVAE achieves new state-of-the-art performance.
null
null
10.18653/v1/2022.findings-emnlp.305
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,817
inproceedings
marro-etal-2022-graph
Graph Embeddings for Argumentation Quality Assessment
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.306/
Marro, Santiago and Cabrio, Elena and Villata, Serena
Findings of the Association for Computational Linguistics: EMNLP 2022
4154--4164
Argumentation is used by people both internally, by evaluating arguments and counterarguments to make sense of a situation and take a decision, and externally, e.g., in a debate, by exchanging arguments to reach an agreement or to promote an individual position. In this context, the assessment of the quality of the arguments is of extreme importance, as it strongly influences the evaluation of the overall argumentation, impacting the decision-making process. The automatic assessment of the quality of natural language arguments has recently attracted interest in the Argument Mining field. However, the issue of automatically assessing the quality of an argumentation largely remains a challenging unsolved task. Our contribution is twofold: first, we present a novel resource of 402 student persuasive essays, where three main quality dimensions (i.e., cogency, rhetoric, and reasonableness) have been annotated, leading to 1908 arguments tagged with quality facets; second, we address this novel task of argumentation quality assessment by proposing a novel neural architecture based on graph embeddings, which combines both the textual features of the natural language arguments and the overall argument graph, i.e., also considering the support and attack relations holding among the arguments. On the persuasive essays dataset, our approach outperforms state-of-the-art and standard baselines.
null
null
10.18653/v1/2022.findings-emnlp.306
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,818
inproceedings
peng-etal-2022-smile
{SM}i{LE}: Schema-augmented Multi-level Contrastive Learning for Knowledge Graph Link Prediction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.307/
Peng, Miao and Liu, Ben and Xie, Qianqian and Xu, Wenjie and Wang, Hua and Peng, Min
Findings of the Association for Computational Linguistics: EMNLP 2022
4165--4177
Link prediction is the task of inferring missing links between entities in knowledge graphs. Embedding-based methods have shown effectiveness in addressing this problem by modeling relational patterns in triples. However, the link prediction task often requires contextual information in entity neighborhoods, while most existing embedding-based methods fail to capture it. Additionally, little attention is paid to the diversity of entity representations in different contexts, which often leads to false prediction results. In this situation, we consider that the schema of knowledge graph contains the specific contextual information, and it is beneficial for preserving the consistency of entities across contexts. In this paper, we propose a novel Schema-augmented Multi-level contrastive LEarning framework (SMiLE) to conduct knowledge graph link prediction. Specifically, we first exploit network schema as the prior constraint to sample negatives and pre-train our model by employing a multi-level contrastive learning method to yield both prior schema and contextual information. Then we fine-tune our model under the supervision of individual triples to learn subtler representations for link prediction. Extensive experimental results on four knowledge graph datasets with thorough analysis of each component demonstrate the effectiveness of our proposed framework against state-of-the-art baselines. The implementation of SMiLE is available at https://github.com/GKNL/SMiLE.
null
null
10.18653/v1/2022.findings-emnlp.307
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,819
inproceedings
qiu-etal-2022-multilingual
Multilingual Multimodal Learning with Machine Translated Text
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.308/
Qiu, Chen and Oneaț{\u{a}}, Dan and Bugliarello, Emanuele and Frank, Stella and Elliott, Desmond
Findings of the Association for Computational Linguistics: EMNLP 2022
4178--4193
Most vision-and-language pretraining research focuses on English tasks. However, the creation of multilingual multimodal evaluation datasets (e.g. Multi30K, xGQA, XVNLI, and MaRVL) poses a new challenge in finding high-quality training data that is both multilingual and multimodal. In this paper, we investigate whether machine translating English multimodal data can be an effective proxy for the lack of readily available multilingual data. We call this framework TD-MML: Translated Data for Multilingual Multimodal Learning, and it can be applied to any multimodal dataset and model. We apply it to both pretraining and fine-tuning data with a state-of-the-art model. In order to prevent models from learning from low-quality translated text, we propose two metrics for automatically removing such translations from the resulting datasets. In experiments on five tasks across 20 languages in the IGLUE benchmark, we show that translated data can provide a useful signal for multilingual multimodal learning, both at pretraining and fine-tuning.
null
null
10.18653/v1/2022.findings-emnlp.308
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,820
inproceedings
zhuang-etal-2022-learning
Learning From the Source Document: Unsupervised Abstractive Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.309/
Zhuang, Haojie and Zhang, Wei Emma and Yang, Jian and Ma, Congbo and Qu, Yutong and Sheng, Quan Z.
Findings of the Association for Computational Linguistics: EMNLP 2022
4194--4205
Most of the state-of-the-art methods for abstractive text summarization are under supervised learning settings, while heavily relying on high-quality and large-scale parallel corpora. In this paper, we remove the need for reference summaries and present an unsupervised learning method SCR (Summarize, Contrast and Review) for abstractive summarization, which leverages contrastive learning and is the first work to apply contrastive learning for unsupervised abstractive summarization. Particularly, we use the true source documents as positive source document examples, and strategically generated fake source documents as negative source document examples to train the model to generate good summaries. Furthermore, we consider and improve the writing quality of the generated summaries by guiding them to be similar to human-written texts. The promising results on extensive experiments show that SCR outperforms other unsupervised abstractive summarization baselines, which demonstrates its effectiveness.
null
null
10.18653/v1/2022.findings-emnlp.309
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,821
inproceedings
arviv-tsur-2022-things
How to Do Things without Words: Modeling Semantic Drift of Emoji
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.310/
Arviv, Eyal and Tsur, Oren
Findings of the Association for Computational Linguistics: EMNLP 2022
4206--4211
Emoji have become a significant part of our informal textual communication. Previous work, addressing the societal and linguistic functions of emoji, overlooked the relation between the semantics and the visual variations of the symbols. In this paper we model and analyze the semantic drift of emoji and discuss the features that may be contributing to the drift, some are unique to emoji and some are more general. Specifically, we explore the relations between graphical changes and semantic changes.
null
null
10.18653/v1/2022.findings-emnlp.310
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,822
inproceedings
husse-spitz-2022-mind
Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.311/
Husse, Silke and Spitz, Andreas
Findings of the Association for Computational Linguistics: EMNLP 2022
4212--4234
The awareness and mitigation of biases are of fundamental importance for the fair and transparent use of contextual language models, yet they crucially depend on the accurate detection of biases as a precursor. Consequently, numerous bias detection methods have been proposed, which vary in their approach, the considered type of bias, and the data used for evaluation. However, while most detection methods are derived from the word embedding association test for static word embeddings, the reported results are heterogeneous, inconsistent, and ultimately inconclusive. To address this issue, we conduct a rigorous analysis and comparison of bias detection methods for contextual language models. Our results show that minor design and implementation decisions (or errors) have a substantial and often significant impact on the derived bias scores. Overall, we find the state of the field to be both worse than previously acknowledged due to systematic and propagated errors in implementations, yet better than anticipated since divergent results in the literature homogenize after accounting for implementation errors. Based on our findings, we conclude with a discussion of paths towards more robust and consistent bias detection methods.
null
null
10.18653/v1/2022.findings-emnlp.311
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,823
inproceedings
xu-etal-2022-zeroprompt
{Z}ero{P}rompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.312/
Xu, Hanwei and Chen, Yujun and Du, Yulun and Shao, Nan and Yanggang, Wang and Li, Haiyu and Yang, Zhilin
Findings of the Association for Computational Linguistics: EMNLP 2022
4235--4252
We propose a multitask pretraining approach ZeroPrompt for zero-shot generalization, focusing on task scaling and zero-shot prompting. While previous models are trained on only a few dozen tasks, we scale to 1,000 tasks for the first time using real-world data. This leads to a crucial discovery that task scaling can be an efficient alternative to model scaling; i.e., the model size has less impact on performance with an extremely large number of tasks. Our results show that task scaling can improve training efficiency by 30 times in FLOPs. Empirically, ZeroPrompt substantially improves both the efficiency and the performance of zero-shot learning across a variety of academic and production datasets.
null
null
10.18653/v1/2022.findings-emnlp.312
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,824
inproceedings
conia-etal-2022-semantic
Semantic Role Labeling Meets Definition Modeling: Using Natural Language to Describe Predicate-Argument Structures
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.313/
Conia, Simone and Barba, Edoardo and Scir{\`e}, Alessandro and Navigli, Roberto
Findings of the Association for Computational Linguistics: EMNLP 2022
4253--4270
One of the common traits of past and present approaches for Semantic Role Labeling (SRL) is that they rely upon discrete labels drawn from a predefined linguistic inventory to classify predicate senses and their arguments. However, we argue this need not be the case. In this paper, we present an approach that leverages Definition Modeling to introduce a generalized formulation of SRL as the task of describing predicate-argument structures using natural language definitions instead of discrete labels. Our novel formulation takes a first step towards placing interpretability and flexibility foremost, and yet our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance. We release our software for research purposes at https://github.com/SapienzaNLP/dsrl.
null
null
10.18653/v1/2022.findings-emnlp.313
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,825
inproceedings
fuster-baggetto-fresno-2022-anisotropy
Is anisotropy really the cause of {BERT} embeddings not being semantic?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.314/
Fuster Baggetto, Alejandro and Fresno, Victor
Findings of the Association for Computational Linguistics: EMNLP 2022
4271--4281
In this paper we conduct a set of experiments aimed to improve our understanding of the lack of semantic isometry in BERT, i.e. the lack of correspondence between the embedding and meaning spaces of its contextualized word representations. Our empirical results show that, contrary to popular belief, the anisotropy is not the root cause of the poor performance of these contextual models' embeddings in semantic tasks. What does affect both the anisotropy and semantic isometry is a set of known biases: frequency, subword, punctuation, and case. For each one of them, we measure its magnitude and the effect of its removal, showing that these biases contribute but do not completely explain the phenomenon of anisotropy and lack of semantic isometry of these contextual language models.
null
null
10.18653/v1/2022.findings-emnlp.314
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,826
inproceedings
lai-etal-2022-m4
m$^4$ Adapter: Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.315/
Lai, Wen and Chronopoulou, Alexandra and Fraser, Alexander
Findings of the Association for Computational Linguistics: EMNLP 2022
4282--4296
Multilingual neural machine translation models (MNMT) yield state-of-the-art performance when evaluated on data from a domain and language pair seen at training time. However, when a MNMT model is used to translate under domain shift or to a new language pair, performance drops dramatically. We consider a very challenging scenario: adapting the MNMT model both to a new domain and to a new language pair at the same time. In this paper, we propose m{\textasciicircum}4Adapter (Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter), which combines domain and language knowledge using meta-learning with adapters. We present results showing that our approach is a parameter-efficient solution which effectively adapts a model to both a new language pair and a new domain, while outperforming other adapter methods. An ablation study also shows that our approach more effectively transfers domain knowledge across different languages and language information across different domains.
null
null
10.18653/v1/2022.findings-emnlp.315
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,827
inproceedings
shen-etal-2022-textual
Textual Enhanced Contrastive Learning for Solving Math Word Problems
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.316/
Shen, Yibin and Liu, Qianying and Mao, Zhuoyuan and Cheng, Fei and Kurohashi, Sadao
Findings of the Association for Computational Linguistics: EMNLP 2022
4297--4307
Solving math word problems is a task that analyses the relations among quantities and requires an accurate understanding of contextual natural language information. Recent studies show that current models rely on shallow heuristics to predict solutions and could be easily misled by small textual perturbations. To address this problem, we propose a Textual Enhanced Contrastive Learning framework, which forces the models to distinguish semantically similar examples while holding different mathematical logic. We adopt a self-supervised strategy to enrich examples with subtle textual variance by textual reordering or problem re-construction. We then retrieve the hardest-to-differentiate samples from both equation and textual perspectives and guide the model to learn their representations. Experimental results show that our method achieves state-of-the-art results on both widely used benchmark datasets and also on exquisitely designed challenge datasets in English and Chinese.
null
null
10.18653/v1/2022.findings-emnlp.316
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,828
inproceedings
mohammadshahi-etal-2022-compressed
What Do Compressed Multilingual Machine Translation Models Forget?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.317/
Mohammadshahi, Alireza and Nikoulina, Vassilina and Berard, Alexandre and Brun, Caroline and Henderson, James and Besacier, Laurent
Findings of the Association for Computational Linguistics: EMNLP 2022
4308--4329
Recently, very large pre-trained models have achieved state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques make it possible to drastically reduce the size of the models, and therefore their inference time, with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages.
null
null
10.18653/v1/2022.findings-emnlp.317
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,829
inproceedings
li-etal-2022-controllable
Controllable Dialogue Simulation with In-context Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.318/
Li, Zekun and Chen, Wenhu and Li, Shiyang and Wang, Hong and Qian, Jing and Yan, Xifeng
Findings of the Association for Computational Linguistics: EMNLP 2022
4330--4347
Building dialogue systems requires a large corpus of annotated dialogues. Such datasets are usually created via crowdsourcing, which is expensive and time-consuming. In this paper, we propose Dialogic, a novel dialogue simulation method based on large language model in-context learning to automate dataset creation. Seeded with a few annotated dialogues, Dialogic automatically selects in-context examples for demonstration and prompts GPT-3 to generate new dialogues and annotations in a controllable way. Our method can rapidly expand a small set of dialogue data with minimum or zero \textit{human involvement} and \textit{parameter update} and is thus much more cost-efficient and time-saving than crowdsourcing. Experimental results on the MultiWOZ dataset demonstrate that training a model on the simulated dialogues leads to even better performance than using the same amount of human-generated dialogues under the challenging low-resource settings, with as few as 85 dialogues as a seed. When the full training set is given, our method can still serve as an effective data augmentation method to further improve performance. Human evaluation results also show that our simulated dialogues have near-human fluency and annotation accuracy. The code and data are available at \textbf{ \url{https://github.com/Leezekun/dialogic}}.
null
null
10.18653/v1/2022.findings-emnlp.318
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,830
inproceedings
delbrouck-etal-2022-improving
Improving the Factual Correctness of Radiology Report Generation with Semantic Rewards
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.319/
Delbrouck, Jean-Benoit and Chambon, Pierre and Bluethgen, Christian and Tsai, Emily and Almusa, Omar and Langlotz, Curtis
Findings of the Association for Computational Linguistics: EMNLP 2022
4348--4360
Neural image-to-text radiology report generation systems offer the potential to improve radiology reporting by reducing the repetitive process of report drafting and identifying possible medical errors. These systems have achieved promising performance as measured by widely used NLG metrics such as BLEU and CIDEr. However, the current systems face important limitations. First, they present an increased complexity in architecture that offers only marginal improvements on NLG metrics. Secondly, these systems that achieve high performance on these metrics are not always factually complete or consistent due to both inadequate training and evaluation. Recent studies have shown the systems can be substantially improved by using new methods encouraging 1) the generation of domain entities consistent with the reference and 2) describing these entities in inferentially consistent ways. So far, these methods rely on weakly-supervised approaches (rule-based) and named entity recognition systems that are not specific to the chest X-ray domain. To overcome this limitation, we propose a new method, the RadGraph reward, to further improve the factual completeness and correctness of generated radiology reports. More precisely, we leverage the RadGraph dataset containing annotated chest X-ray reports with entities and relations between entities. On two open radiology report datasets, our system substantially improves the scores up to 14.2{\%} and 25.3{\%} on metrics evaluating the factual correctness and completeness of reports.
null
null
10.18653/v1/2022.findings-emnlp.319
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,831
inproceedings
dankers-titov-2022-recursive
Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.320/
Dankers, Verna and Titov, Ivan
Findings of the Association for Computational Linguistics: EMNLP 2022
4361--4378
A recent line of work in NLP focuses on the (dis)ability of models to generalise compositionally for artificial languages. However, when considering natural language tasks, the data involved is not strictly, or locally, compositional. Quantifying the compositionality of data is a challenging task, which has been investigated primarily for short utterances. We use recursive neural models (Tree-LSTMs) with bottlenecks that limit the transfer of information between nodes. We illustrate that comparing data's representations in models with and without the bottleneck can be used to produce a compositionality metric. The procedure is applied to the evaluation of arithmetic expressions using synthetic data, and sentiment classification using natural language data. We demonstrate that compression through a bottleneck impacts non-compositional examples disproportionately, and then use the bottleneck compositionality metric (BCM) to distinguish compositional from non-compositional samples, yielding a compositionality ranking over a dataset.
null
null
10.18653/v1/2022.findings-emnlp.320
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,832
inproceedings
fekih-etal-2022-humset
{H}um{S}et: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crises Response
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.321/
Fekih, Selim and Tamagnone, Nicolo{'} and Minixhofer, Benjamin and Shrestha, Ranjan and Contla, Ximena and Oglethorpe, Ewan and Rekabsaz, Navid
Findings of the Association for Computational Linguistics: EMNLP 2022
4379--4389
Timely and effective response to humanitarian crises requires quick and accurate analysis of large amounts of text data {--} a process that can highly benefit from expert-assisted NLP systems trained on validated and annotated data in the humanitarian response domain. To enable creation of such NLP systems, we introduce and release HumSet, a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. The dataset provides documents in three languages (English, French, Spanish) and covers a variety of humanitarian crises from 2018 to 2021 across the globe. For each document, HUMSET provides selected snippets (entries) as well as assigned classes to each entry annotated using common humanitarian information analysis frameworks. HUMSET also provides novel and challenging entry extraction and multi-label entry classification tasks. In this paper, we take a first step towards approaching these tasks and conduct a set of experiments on Pre-trained Language Models (PLM) to establish strong baselines for future research in this domain. The dataset is available at https://blog.thedeep.io/humset/.
null
null
10.18653/v1/2022.findings-emnlp.321
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,833
inproceedings
shao-etal-2022-viterbi
{V}iterbi Decoding of Directed Acyclic Transformer for Non-Autoregressive Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.322/
Shao, Chenze and Ma, Zhengrui and Feng, Yang
Findings of the Association for Computational Linguistics: EMNLP 2022
4390--4397
Non-autoregressive models achieve significant decoding speedup in neural machine translation but lack the ability to capture sequential dependency. Directed Acyclic Transformer (DA-Transformer) was recently proposed to model sequential dependency with a directed acyclic graph. Consequently, it has to apply a sequential decision process at inference time, which harms the global translation accuracy. In this paper, we present a Viterbi decoding framework for DA-Transformer, which guarantees to find the joint optimal solution for the translation and decoding path under any length constraint. Experimental results demonstrate that our approach consistently improves the performance of DA-Transformer while maintaining a similar decoding speedup.
null
null
10.18653/v1/2022.findings-emnlp.322
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,834
inproceedings
bandel-etal-2022-lexical
Lexical Generalization Improves with Larger Models and Longer Training
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.323/
Bandel, Elron and Goldberg, Yoav and Elazar, Yanai
Findings of the Association for Computational Linguistics: EMNLP 2022
4398--4410
While fine-tuned language models perform well on many language tasks, they were also shown to rely on superficial surface features such as lexical overlap. Excessive utilization of such heuristics can lead to failure on challenging inputs. We analyze the use of lexical overlap heuristics in natural language inference, paraphrase detection, and reading comprehension (using a novel contrastive dataset), and find that larger models are much less susceptible to adopting lexical overlap heuristics. We also find that longer training leads models to abandon lexical overlap heuristics. Finally, we provide evidence that the disparity between model sizes has its source in the pre-trained model.
null
null
10.18653/v1/2022.findings-emnlp.323
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,835
inproceedings
kumar-etal-2022-realistic
Realistic Data Augmentation Framework for Enhancing Tabular Reasoning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.324/
Kumar, Dibyakanti and Gupta, Vivek and Sharma, Soumya and Zhang, Shuo
Findings of the Association for Computational Linguistics: EMNLP 2022
4411--4429
Existing approaches to constructing training data for Natural Language Inference (NLI) tasks, such as for semi-structured table reasoning, are either via crowdsourcing or fully automatic methods. However, the former is expensive and time-consuming and thus limits scale, and the latter often produces naive examples that may lack complex reasoning. This paper develops a realistic semi-automated framework for data augmentation for tabular inference. Instead of manually generating a hypothesis for each table, our methodology generates hypothesis templates transferable to similar tables. In addition, our framework entails the creation of rational counterfactual tables based on human-written logical constraints and premise paraphrasing. For our case study, we use INFOTABS (Gupta et al., 2020), an entity-centric tabular inference dataset. We observed that our framework could generate human-like tabular inference examples, which could benefit training data augmentation, especially in scenarios with limited supervision.
null
null
10.18653/v1/2022.findings-emnlp.324
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,836
inproceedings
geng-etal-2022-inducing
Inducing Generalizable and Interpretable Lexica
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.325/
Geng, Yilin and Wu, Zetian and Santhosh, Roshan and Srivastava, Tejas and Ungar, Lyle and Sedoc, Jo{\~a}o
Findings of the Association for Computational Linguistics: EMNLP 2022
4430--4448
Lexica {--} words and associated scores {--} are widely used as simple, interpretable, generalizable language features to predict sentiment, emotions, mental health, and personality. They also provide insight into the psychological features behind those moods and traits. Such lexica, historically created by human experts, are valuable to linguists, psychologists, and social scientists, but they take years of refinement and have limited coverage. In this paper, we investigate how the lexica that provide psycholinguistic insights could be computationally induced and how they should be assessed. We identify generalizability and interpretability as two essential properties of such lexica. We induce lexica using both context-oblivious and context-aware approaches, compare their predictive performance both within the training corpus and across various corpora, and evaluate their quality using crowd-worker assessment. We find that lexica induced from context-oblivious models are more generalizable and interpretable than those from more accurate context-aware transformer models. In addition, lexicon scores can identify explanatory words more reliably than a high performing transformer with feature-importance measures like SHAP.
null
null
10.18653/v1/2022.findings-emnlp.325
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,837
inproceedings
sinha-etal-2022-curious
The Curious Case of Absolute Position Embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.326/
Sinha, Koustuv and Kazemnejad, Amirhossein and Reddy, Siva and Pineau, Joelle and Hupkes, Dieuwke and Williams, Adina
Findings of the Association for Computational Linguistics: EMNLP 2022
4449--4472
Transformer language models encode the notion of word order using positional information. Most commonly, this positional information is represented by absolute position embeddings (APEs), which are learned from the pretraining data. However, in natural language, it is not absolute position that matters, but relative position, and the extent to which APEs can capture this type of information has not been studied. In this work, we observe that models trained with APEs over-rely on positional information to the point that they break down when subjected to sentences with shifted position information. Specifically, when models are subjected to sentences starting from a non-zero position (excluding the effect of priming), they exhibit noticeably degraded performance on zero- to full-shot tasks, across a range of model families and model sizes. Our findings raise questions about the efficacy of APEs to model the relativity of position information, and invite further introspection on the sentence and word order processing strategies employed by these models.
null
null
10.18653/v1/2022.findings-emnlp.326
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,838
inproceedings
cao-etal-2022-goal
Goal-oriented Vision-and-Dialog Navigation via Reinforcement Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.327/
Cao, Yan and Lu, Keting and DeFazio, David and Zhang, Shiqi
Findings of the Association for Computational Linguistics: EMNLP 2022
4473--4482
Vision-and-dialog navigation is a recent benchmark for evaluating the AI capabilities of perception, interaction, and decision making. While existing methods developed for this benchmark have demonstrated great successes, they mostly rely on large datasets, where data collection can be a challenge, and the learned policies are not adaptive to domain changes. In this paper, we focus on a new problem, referred to as goal-oriented vision-and-dialog navigation (GVDN), where an agent uses reinforcement learning techniques to compute dialog-navigation policies from trial and error. A robot conducts visual navigation to locate target objects, and can talk to a remote human operator as needed. Our remote human is able to provide guidance on navigation only if the robot correctly conveys its location through dialog. Experiments have been conducted using photo-realistic simulation environments. Results suggest that our agent outperforms competitive baselines in success rate.
null
null
10.18653/v1/2022.findings-emnlp.327
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,839
inproceedings
jena-etal-2022-leveraging
Leveraging Data Recasting to Enhance Tabular Reasoning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.328/
Jena, Aashna and Gupta, Vivek and Shrivastava, Manish and Eisenschlos, Julian
Findings of the Association for Computational Linguistics: EMNLP 2022
4483--4496
Creating challenging tabular inference data is essential for learning complex reasoning. Prior work has mostly relied on two data generation strategies. The first is human annotation, which yields linguistically diverse data but is difficult to scale. The second is synthetic generation, which is scalable and cost-effective but lacks inventiveness. In this research, we present a framework for semi-automatically recasting existing tabular data to make use of the benefits of both approaches. We utilize our framework to build tabular NLI instances from five datasets that were initially intended for tasks like table2text creation, tabular Q/A, and semantic parsing. We demonstrate that recasted data could be used as evaluation benchmarks as well as augmentation data to enhance performance on tabular NLI tasks. Furthermore, we investigate the effectiveness of models trained on recasted data in the zero-shot scenario, and analyse trends in performance across different recasted dataset types.
null
null
10.18653/v1/2022.findings-emnlp.328
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,840
inproceedings
jimenez-gutierrez-etal-2022-thinking
Thinking about {GPT}-3 In-Context Learning for Biomedical {IE}? Think Again
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.329/
Jimenez Gutierrez, Bernal and McNeal, Nikolas and Washington, Clayton and Chen, You and Li, Lang and Sun, Huan and Su, Yu
Findings of the Association for Computational Linguistics: EMNLP 2022
4497--4512
Large pre-trained language models (PLMs) such as GPT-3 have shown strong in-context learning capabilities, which are highly appealing for domains such as biomedicine that feature high and diverse demands of language technologies but also high data annotation costs. In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two representative biomedical information extraction (IE) tasks: named entity recognition and relation extraction. We follow the true few-shot setting to avoid overestimating models' few-shot performance by model selection over a large validation set. We also optimize GPT-3's performance with known techniques such as contextual calibration and dynamic in-context example retrieval. However, our results show that GPT-3 still significantly underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3 in-context learning also yields smaller gains in accuracy when more training data becomes available. More in-depth analyses further reveal issues of in-context learning that may be detrimental to IE tasks in general. Given the high cost of experimenting with GPT-3, we hope our study provides helpful guidance for biomedical researchers and practitioners towards more practical solutions such as fine-tuning small PLMs before better in-context learning is available for biomedical IE.
null
null
10.18653/v1/2022.findings-emnlp.329
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,841
inproceedings
lamarre-etal-2022-attention
Attention weights accurately predict language representations in the brain
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.330/
Lamarre, Mathis and Chen, Catherine and Deniz, Fatma
Findings of the Association for Computational Linguistics: EMNLP 2022
4513--4529
In Transformer-based language models (LMs) the attention mechanism converts token embeddings into contextual embeddings that incorporate information from neighboring words. The resulting contextual hidden state embeddings have enabled highly accurate models of brain responses, suggesting that the attention mechanism constructs contextual embeddings that carry information reflected in language-related brain representations. However, it is unclear whether the attention weights that are used to integrate information across words are themselves related to language representations in the brain. To address this question we analyzed functional magnetic resonance imaging (fMRI) recordings of participants reading English language narratives. We provided the narrative text as input to two LMs (BERT and GPT-2) and extracted their corresponding attention weights. We then used encoding models to determine how well attention weights can predict recorded brain responses. We find that attention weights accurately predict brain responses in much of the frontal and temporal cortices. Our results suggest that the attention mechanism itself carries information that is reflected in brain representations. Moreover, these results indicate cortical areas in which context integration may occur.
null
null
10.18653/v1/2022.findings-emnlp.330
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,842
inproceedings
zhang-etal-2022-improving-hownet
Improving {H}ow{N}et-Based {C}hinese Word Sense Disambiguation with Translations
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.331/
Zhang, Xiang and Hauer, Bradley and Kondrak, Grzegorz
Findings of the Association for Computational Linguistics: EMNLP 2022
4530--4536
Word sense disambiguation (WSD) is the task of identifying the intended sense of a word in context. While prior work on unsupervised WSD has leveraged lexical knowledge bases, such as WordNet and BabelNet, these resources have proven to be less effective for Chinese. Instead, the most widely used lexical knowledge base for Chinese is HowNet. Previous HowNet-based WSD methods have not exploited contextual translation information. In this paper, we present the first HowNet-based WSD system which combines monolingual contextual information from a pretrained neural language model with bilingual information obtained via machine translation and sense translation information from HowNet. The results of our evaluation experiment on a test set from prior work demonstrate that our new method achieves a new state of the art for unsupervised Chinese WSD.
null
null
10.18653/v1/2022.findings-emnlp.331
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,843
inproceedings
gao-etal-2022-mask
Mask-then-Fill: A Flexible and Effective Data Augmentation Framework for Event Extraction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.332/
Gao, Jun and Yu, Changlong and Wang, Wei and Zhao, Huan and Xu, Ruifeng
Findings of the Association for Computational Linguistics: EMNLP 2022
4537--4544
We present Mask-then-Fill, a flexible and effective data augmentation framework for event extraction. Our approach allows for more flexible manipulation of text and thus can generate more diverse data while keeping the original event structure unchanged as much as possible. Specifically, it first randomly masks out an adjunct sentence fragment and then infills a variable-length text span with a fine-tuned infilling model. The main advantage lies in that it can replace a fragment of arbitrary length in the text with another fragment of variable length, compared to the existing methods which can only replace a single word or a fixed-length fragment. On trigger and argument extraction tasks, the proposed framework is more effective than baseline methods and it demonstrates particularly strong results in the low-resource setting. Our further analysis shows that it achieves a good balance between diversity and distributional similarity.
null
null
10.18653/v1/2022.findings-emnlp.332
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,844
inproceedings
zhang-etal-2022-moba
{MOBA}-{E}2{C}: Generating {MOBA} Game Commentaries via Capturing Highlight Events from the Meta-Data
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.333/
Zhang, Dawei and Wu, Sixing and Guo, Yao and Chen, Xiangqun
Findings of the Association for Computational Linguistics: EMNLP 2022
4545--4556
MOBA (Multiplayer Online Battle Arena) games such as Dota2 are currently one of the most popular e-sports gaming genres. Following professional commentaries is a great way to understand and enjoy a MOBA game. However, massive game competitions lack commentaries because of the shortage of professional human commentators. As an alternative, employing machine commentators that can work at any time and place is a feasible solution. Considering the challenges in modeling MOBA games, we propose a data-driven MOBA commentary generation framework, MOBA-E2C, allowing a model to generate commentaries based on the game meta-data. Subsequently, to alleviate the burden of collecting supervised data, we propose a MOBA-FuseGPT generator to generate MOBA game commentaries by fusing the power of a rule-based generator and a generative GPT generator. Finally, in the experiments, we take a popular MOBA game Dota2 as our case and construct a Chinese Dota2 commentary generation dataset Dota2-Commentary. Experimental results demonstrate the superior performance of our approach. To the best of our knowledge, this work is the first Dota2 machine commentator and Dota2-Commentary is the first dataset.
null
null
10.18653/v1/2022.findings-emnlp.333
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,845
inproceedings
zeng-etal-2022-enhancing
Enhancing Automatic Readability Assessment with Pre-training and Soft Labels for Ordinal Regression
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.334/
Zeng, Jinshan and Xie, Yudong and Yu, Xianglong and Lee, John and Zhou, Ding-Xuan
Findings of the Association for Computational Linguistics: EMNLP 2022
4557--4568
The readability assessment task aims to assign a difficulty grade to a text. While neural models have recently demonstrated impressive performance, most do not exploit the ordinal nature of the difficulty grades, and make little effort for model initialization to facilitate fine-tuning. We address these limitations with soft labels for ordinal regression, and with model pre-training through prediction of pairwise relative text difficulty. We incorporate these two components into a model based on hierarchical attention networks, and evaluate its performance on both English and Chinese datasets. Experimental results show that our proposed model outperforms competitive neural models and statistical classifiers on most datasets.
null
null
10.18653/v1/2022.findings-emnlp.334
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,846
inproceedings
farag-etal-2022-opening
Opening up Minds with Argumentative Dialogues
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.335/
Farag, Youmna and Brand, Charlotte and Amidei, Jacopo and Piwek, Paul and Stafford, Tom and Stoyanchev, Svetlana and Vlachos, Andreas
Findings of the Association for Computational Linguistics: EMNLP 2022
4569--4582
Recent research on argumentative dialogues has focused on persuading people to take some action, changing their stance on the topic of discussion, or winning debates. In this work, we focus on argumentative dialogues that aim to open up (rather than change) people's minds to help them become more understanding of views that are unfamiliar or in opposition to their own convictions. To this end, we present a dataset of 183 argumentative dialogues about 3 controversial topics: veganism, Brexit and COVID-19 vaccination. The dialogues were collected using the Wizard of Oz approach, where wizards leverage a knowledge-base of arguments to converse with participants. Open-mindedness is measured before and after engaging in the dialogue using a questionnaire from the psychology literature, and success of the dialogue is measured as the change in the participant's stance towards those who hold opinions different to theirs. We evaluate two dialogue models: a Wikipedia-based and an argument-based model. We show that while both models perform closely in terms of opening up minds, the argument-based model is significantly better on other dialogue properties such as engagement and clarity.
null
null
10.18653/v1/2022.findings-emnlp.335
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,847
inproceedings
saeed-papotti-2022-type
You Are My Type! Type Embeddings for Pre-trained Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.336/
Saeed, Mohammed and Papotti, Paolo
Findings of the Association for Computational Linguistics: EMNLP 2022
4583--4598
One reason for the positive impact of Pre-trained Language Models (PLMs) in NLP tasks is their ability to encode semantic types, such as {\textquoteleft}European City' or {\textquoteleft}Woman'. While previous work has analyzed such information in the context of interpretability, it is not clear how to use types to steer the PLM output. For example, in a cloze statement, it is desirable to steer the model to generate a token that satisfies a user-specified type, e.g., predict a date rather than a location. In this work, we introduce Type Embeddings (TEs), an input embedding that promotes desired types in a PLM. Our proposal is to define a type by a small set of word examples. We empirically study the ability of TEs both in representing types and in steering masking predictions without changes to the prompt text in BERT. Finally, using the LAMA datasets, we show how TEs highly improve the precision in extracting facts from PLMs.
null
null
10.18653/v1/2022.findings-emnlp.336
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,848
inproceedings
zhao-etal-2022-generating
Generating Textual Adversaries with Minimal Perturbation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.337/
Zhao, Xingyi and Zhang, Lu and Xu, Depeng and Yuan, Shuhan
Findings of the Association for Computational Linguistics: EMNLP 2022
4599--4606
Many word-level adversarial attack approaches for textual data have been proposed in recent studies. However, due to the massive search space consisting of combinations of candidate words, the existing approaches face the problem of preserving the semantics of texts when crafting adversarial counterparts. In this paper, we develop a novel attack strategy to find adversarial texts with high similarity to the original texts while introducing minimal perturbation. The rationale is that we expect the adversarial texts with small perturbation can better preserve the semantic meaning of original texts. Experiments show that, compared with state-of-the-art attack approaches, our approach achieves higher success rates and lower perturbation rates in four benchmark datasets.
null
null
10.18653/v1/2022.findings-emnlp.337
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,849
inproceedings
engler-etal-2022-sensepolar
{S}ense{POLAR}: Word sense aware interpretability for pre-trained contextual word embeddings
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.338/
Engler, Jan and Sikdar, Sandipan and Lutz, Marlene and Strohmaier, Markus
Findings of the Association for Computational Linguistics: EMNLP 2022
4607--4619
Adding interpretability to word embeddings represents an area of active research in text representation. Recent work has explored the potential of embedding words via so-called polar dimensions (e.g. good vs. bad, correct vs. wrong). Examples of such recent approaches include SemAxis, POLAR, FrameAxis, and BiImp. Although these approaches provide interpretable dimensions for words, they have not been designed to deal with polysemy, i.e. they can not easily distinguish between different senses of words. To address this limitation, we present SensePOLAR, an extension of the original POLAR framework that enables word sense aware interpretability for pre-trained contextual word embeddings. The resulting interpretable word embeddings achieve a level of performance that is comparable to original contextual word embeddings across a variety of natural language processing tasks including the GLUE and SQuAD benchmarks. Our work removes a fundamental limitation of existing approaches by offering users sense aware interpretations for contextual word embeddings.
null
null
10.18653/v1/2022.findings-emnlp.338
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,850
inproceedings
kiehne-etal-2022-contextualizing
Contextualizing Language Models for Norms Diverging from Social Majority
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.339/
Kiehne, Niklas and Kroll, Hermann and Balke, Wolf-Tilo
Findings of the Association for Computational Linguistics: EMNLP 2022
4620--4633
To comprehensibly contextualize decisions, artificial systems in social situations need a high degree of awareness of the rules of conduct of human behavior. Especially transformer-based language models have recently been shown to exhibit some such awareness. But what if norms in some social setting do not adhere to or even blatantly deviate from the mainstream? In this paper, we introduce a novel mechanism based on deontic logic to allow for a flexible adaptation of individual norms by de-biasing training data sets and a task reduction to textual entailment. Building on the popular {\textquoteleft}Moral Stories' dataset, we highlight, on the one hand, the intrinsic bias of current language models and, on the other hand, characterize the adaptability of pre-trained models to deviating norms in fine-tuning settings.
null
null
10.18653/v1/2022.findings-emnlp.339
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,851
inproceedings
wang-etal-2022-empathetic
Empathetic Dialogue Generation via Sensitive Emotion Recognition and Sensible Knowledge Selection
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.340/
Wang, Lanrui and Li, Jiangnan and Lin, Zheng and Meng, Fandong and Yang, Chenxu and Wang, Weiping and Zhou, Jie
Findings of the Association for Computational Linguistics: EMNLP 2022
4634--4645
Empathy, which is widely used in psychological counseling, is a key trait of everyday human conversations. Equipped with commonsense knowledge, current approaches to empathetic response generation focus on capturing implicit emotion within dialogue context, where the emotions are treated as a static variable throughout the conversations. However, emotions change dynamically between utterances, which makes it difficult for previous works to perceive the emotion flow and predict the correct emotion of the target response, leading to inappropriate responses. Furthermore, simply importing commonsense knowledge without harmonization may trigger conflicts between knowledge and emotion, which confuse the model when choosing the correct information to guide the generation process. To address the above problems, we propose a Serial Encoding and Emotion-Knowledge interaction (SEEK) method for empathetic dialogue generation. We use a fine-grained encoding strategy that is more sensitive to the emotion dynamics (emotion flow) in the conversations to predict the emotion-intent characteristic of the response. Besides, we design a novel framework to model the interaction between knowledge and emotion, resolving these conflicts to generate more sensible responses. Extensive experiments on the utterance-level annotated EMPATHETICDIALOGUES demonstrate that SEEK outperforms strong baselines in both automatic and manual evaluations.
null
null
10.18653/v1/2022.findings-emnlp.340
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,852
inproceedings
tong-etal-2022-joint
Joint Multilingual Knowledge Graph Completion and Alignment
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.341/
Tong, Vinh and Nguyen, Dat Quoc and Huynh, Trung Thanh and Nguyen, Tam Thanh and Nguyen, Quoc Viet Hung and Niepert, Mathias
Findings of the Association for Computational Linguistics: EMNLP 2022
4646--4658
Knowledge graph (KG) alignment and completion are usually treated as two independent tasks. While recent work has leveraged entity and relation alignments from multiple KGs, such as alignments between multilingual KGs with common entities and relations, a deeper understanding of the ways in which multilingual KG completion (MKGC) can aid the creation of multilingual KG alignments (MKGA) is still limited. Motivated by the observation that structural inconsistencies {--} the main challenge for MKGA models {--} can be mitigated through KG completion methods, we propose a novel model for jointly completing and aligning knowledge graphs. The proposed model combines two components that jointly accomplish KG completion and alignment. These two components employ relation-aware graph neural networks that we propose to encode multi-hop neighborhood structures into entity and relation representations. Moreover, we also propose (i) a structural inconsistency reduction mechanism to incorporate information from the completion into the alignment component, and (ii) an alignment seed enlargement and triple transferring mechanism to enlarge alignment seeds and transfer triples during KGs alignment. Extensive experiments on a public multilingual benchmark show that our proposed model outperforms existing competitive baselines, obtaining new state-of-the-art results on both MKGC and MKGA tasks.
null
null
10.18653/v1/2022.findings-emnlp.341
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,853
inproceedings
unlu-menevse-etal-2022-framework
A Framework for Automatic Generation of Spoken Question-Answering Data
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.342/
{\"Unl{\"u Menev{\c{se, Merve and Manav, Yusufcan and Arisoy, Ebru and {\"Ozg{\"ur, Arzucan
Findings of the Association for Computational Linguistics: EMNLP 2022
4659--4666
This paper describes a framework to automatically generate a spoken question answering (QA) dataset. The framework consists of a question generation (QG) module to generate questions automatically from given text documents, a text-to-speech (TTS) module to convert the text documents into spoken form and an automatic speech recognition (ASR) module to transcribe the spoken content. The final dataset contains question-answer pairs for both the reference text and ASR transcriptions as well as the audio files corresponding to each reference text. For QG and ASR systems we used pre-trained multilingual encoder-decoder transformer models and fine-tuned these models using a limited amount of manually generated QA data and TTS-based speech data, respectively. As a proof of concept, we investigated the proposed framework for Turkish and generated the Turkish Question Answering (TurQuAse) dataset using Wikipedia articles. Manual evaluation of the automatically generated question-answer pairs and QA performance evaluation with state-of-the-art models on TurQuAse show that the proposed framework is efficient for automatically generating spoken QA datasets. To the best of our knowledge, TurQuAse is the first publicly available spoken question answering dataset for Turkish. The proposed framework can be easily extended to other languages where a limited amount of QA data is available.
null
null
10.18653/v1/2022.findings-emnlp.342
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,854
inproceedings
luo-etal-2022-readability
Readability Controllable Biomedical Document Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.343/
Luo, Zheheng and Xie, Qianqian and Ananiadou, Sophia
Findings of the Association for Computational Linguistics: EMNLP 2022
4667--4680
Different from general documents, it is recognised that the ease with which people can understand a biomedical text is eminently varied, owing to the highly technical nature of biomedical documents and the variance of readers' domain knowledge. However, existing biomedical document summarization systems have paid little attention to readability control, leaving users with summaries that are incompatible with their levels of expertise. In recognition of this urgent demand, we introduce a new task of readability controllable summarization for biomedical documents, which aims to recognise users' readability demands and generate summaries that better suit their needs: technical summaries for experts and plain language summaries (PLS) for laymen. To establish this task, we construct a corpus consisting of biomedical papers with technical summaries and PLSs written by the authors, and benchmark multiple advanced controllable abstractive and extractive summarization models based on pre-trained language models (PLMs) with prevalent controlling and generation techniques. Moreover, we propose a novel masked language model (MLM) based metric and its variant to effectively evaluate the readability discrepancy between lay and technical summaries. Experimental results from automated and human evaluations show that though current control techniques allow for a certain degree of readability adjustment during generation, the performance of existing controllable summarization methods is far from desirable in this task.
null
null
10.18653/v1/2022.findings-emnlp.343
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,855
inproceedings
wortwein-etal-2022-beyond
Beyond Additive Fusion: Learning Non-Additive Multimodal Interactions
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.344/
W{\"ortwein, Torsten and Sheeber, Lisa and Allen, Nicholas and Cohn, Jeffrey and Morency, Louis-Philippe
Findings of the Association for Computational Linguistics: EMNLP 2022
4681--4696
Multimodal fusion addresses the problem of analyzing spoken words in the multimodal context, including visual expressions and prosodic cues. Even when multimodal models lead to performance improvements, it is often unclear whether bimodal and trimodal interactions are learned or whether modalities are processed independently of each other. We propose Multimodal Residual Optimization (MRO) to separate unimodal, bimodal, and trimodal interactions in a multimodal model. This improves interpretability as the multimodal interaction can be quantified. Inspired by Occam's razor, the main intuition of MRO is that (simpler) unimodal contributions should be learned before learning (more complex) bimodal and trimodal interactions. For example, bimodal predictions should learn to correct the mistakes (residuals) of unimodal predictions, thereby letting the bimodal predictions focus on the remaining bimodal interactions. Empirically, we observe that MRO successfully separates unimodal, bimodal, and trimodal interactions while not degrading predictive performance. We complement our empirical results with a human perception study and observe that MRO learns multimodal interactions that align with human judgments.
null
null
10.18653/v1/2022.findings-emnlp.344
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,856
inproceedings
zhu-etal-2022-generalization
Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.345/
Zhu, Wang and Thomason, Jesse and Jia, Robin
Findings of the Association for Computational Linguistics: EMNLP 2022
4697--4711
For vision-and-language reasoning tasks, both fully connectionist, end-to-end methods and hybrid, neuro-symbolic methods have achieved high in-distribution performance. In which out-of-distribution settings does each paradigm excel? We investigate this question on both single-image and multi-image visual question-answering through four types of generalization tests: a novel segment-combine test for multi-image queries, contrast set, compositional generalization, and cross-benchmark transfer. Vision-and-language end-to-end trained systems exhibit sizeable performance drops across all these tests. Neuro-symbolic methods suffer even more on cross-benchmark transfer from GQA to VQA, but they show smaller accuracy drops on the other generalization tests and their performance quickly improves by few-shot training. Overall, our results demonstrate the complementary benefits of these two paradigms, and emphasize the importance of using a diverse suite of generalization tests to fully characterize model robustness to distribution shift.
null
null
10.18653/v1/2022.findings-emnlp.345
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,857
inproceedings
li-lukasiewicz-2022-learning
Learning to Model Multimodal Semantic Alignment for Story Visualization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.346/
Li, Bowen and Lukasiewicz, Thomas
Findings of the Association for Computational Linguistics: EMNLP 2022
4712--4718
Story visualization aims to generate a sequence of images to narrate each sentence in a multi-sentence story, where the images should be realistic and keep global consistency across dynamic scenes and characters. Current works face the problem of semantic misalignment because of their fixed architecture and diversity of input modalities. To address this problem, we explore the semantic alignment between text and image representations by learning to match their semantic levels in a GAN-based generative model. More specifically, we introduce learned dynamic interactions that explore various semantic depths and fuse the different-modal information at a matched semantic level, which relieves the text-image semantic misalignment problem. Extensive experiments on different datasets demonstrate the improvements of our approach on image quality and story consistency, compared with state-of-the-art methods, without using segmentation masks or auxiliary captioning networks.
null
null
10.18653/v1/2022.findings-emnlp.346
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,858
inproceedings
wadden-etal-2022-scifact
{S}ci{F}act-Open: Towards open-domain scientific claim verification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.347/
Wadden, David and Lo, Kyle and Kuehl, Bailey and Cohan, Arman and Beltagy, Iz and Wang, Lucy Lu and Hajishirzi, Hannaneh
Findings of the Association for Computational Linguistics: EMNLP 2022
4719--4734
While research on scientific claim verification has led to the development of powerful systems that appear to approach human performance, these approaches have yet to be tested in a realistic setting against large corpora of scientific literature. Moving to this open-domain evaluation setting, however, poses unique challenges; in particular, it is infeasible to exhaustively annotate all evidence documents. In this work, we present SciFact-Open, a new test collection designed to evaluate the performance of scientific claim verification systems on a corpus of 500K research abstracts. Drawing upon pooling techniques from information retrieval, we collect evidence for scientific claims by pooling and annotating the top predictions of four state-of-the-art scientific claim verification models. We find that systems developed on smaller corpora struggle to generalize to SciFact-Open, exhibiting performance drops of at least 15 F1. In addition, analysis of the evidence in SciFact-Open reveals interesting phenomena likely to appear when claim verification systems are deployed in practice, e.g., cases where the evidence supports only a special case of the claim. Our dataset is available at https://github.com/dwadden/scifact-open.
null
null
10.18653/v1/2022.findings-emnlp.347
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,859
inproceedings
chimoto-bassett-2022-comet
{COMET}-{QE} and Active Learning for Low-Resource Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.348/
Chimoto, Everlyn Asiko and Bassett, Bruce A.
Findings of the Association for Computational Linguistics: EMNLP 2022
4735--4740
Active learning aims to deliver maximum benefit when resources are scarce. We use COMET-QE, a reference-free evaluation metric, to select sentences for low-resource neural machine translation. Using Swahili, Kinyarwanda and Spanish for our experiments, we show that COMET-QE significantly outperforms two variants of Round Trip Translation Likelihood (RTTL) and random sentence selection by up to 5 BLEU points for 20k sentences selected by Active Learning on a 30k baseline. This suggests that COMET-QE is a powerful tool for sentence selection in the very low-resource limit.
null
null
10.18653/v1/2022.findings-emnlp.348
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,860
inproceedings
michalopoulos-etal-2022-medicalsum
{M}edical{S}um: A Guided Clinical Abstractive Summarization Model for Generating Medical Reports from Patient-Doctor Conversations
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.349/
Michalopoulos, George and Williams, Kyle and Singh, Gagandeep and Lin, Thomas
Findings of the Association for Computational Linguistics: EMNLP 2022
4741--4749
We introduce MedicalSum, a transformer-based sequence-to-sequence architecture for summarizing medical conversations by integrating medical domain knowledge from the Unified Medical Language System (UMLS). The novel knowledge augmentation is performed in three ways: (i) introducing a guidance signal that consists of the medical words in the input sequence, (ii) leveraging semantic type knowledge in UMLS to create clinically meaningful input embeddings, and (iii) making use of a novel weighted loss function that provides a stronger incentive for the model to correctly predict words with a medical meaning. By applying these three strategies, MedicalSum takes clinical knowledge into consideration during the summarization process and achieves state-of-the-art ROUGE score improvements of 0.8-2.1 points (including 6.2{\%} ROUGE-1 error reduction in the PE section) when producing medical summaries of patient-doctor conversations.
null
null
10.18653/v1/2022.findings-emnlp.349
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,861
inproceedings
sosea-caragea-2022-leveraging
Leveraging Training Dynamics and Self-Training for Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.350/
Sosea, Tiberiu and Caragea, Cornelia
Findings of the Association for Computational Linguistics: EMNLP 2022
4750--4762
The effectiveness of pre-trained language models in downstream tasks is highly dependent on the amount of labeled data available for training. Semi-supervised learning (SSL) is a promising technique that has seen wide attention recently due to its effectiveness in improving deep learning models when training data is scarce. Common approaches employ a teacher-student self-training framework, where a teacher network generates pseudo-labels for unlabeled data, which are then used to iteratively train a student network. In this paper, we propose a new self-training approach for text classification that leverages training dynamics of unlabeled data. We evaluate our approach on a wide range of text classification tasks, including emotion detection, sentiment analysis, question classification and grammaticality, which span a variety of domains, e.g., Reddit, Twitter, and online forums. Notably, our method is successful on all benchmarks, obtaining an average increase in F1 score of 3.5{\%} over strong baselines in low-resource settings.
null
null
10.18653/v1/2022.findings-emnlp.350
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,862
inproceedings
sadat-caragea-2022-learning
Learning to Infer from Unlabeled Data: A Semi-supervised Learning Approach for Robust Natural Language Inference
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.351/
Sadat, Mobashir and Caragea, Cornelia
Findings of the Association for Computational Linguistics: EMNLP 2022
4763--4776
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) aims at predicting the relation between a pair of sentences (premise and hypothesis) as entailment, contradiction or semantic independence. Although deep learning models have shown promising performance for NLI in recent years, they rely on large scale expensive human-annotated datasets. Semi-supervised learning (SSL) is a popular technique for reducing the reliance on human annotation by leveraging unlabeled data for training. However, despite its substantial success on single sentence classification tasks, where the challenge in making use of unlabeled data is to assign {\textquotedblleft}good enough{\textquotedblright} pseudo-labels, for NLI tasks the nature of unlabeled data is more complex: one of the sentences in the pair (usually the hypothesis) along with the class label are missing from the data and require human annotations, which makes SSL for NLI more challenging. In this paper, we propose a novel way to incorporate unlabeled data in SSL for NLI, where we use a conditional language model, BART, to generate the hypotheses for the unlabeled sentences (used as premises). Our experiments show that our SSL framework successfully exploits unlabeled data and substantially improves performance on four NLI datasets in low-resource settings. We release our code here: https://github.com/msadat3/SSL{\_}for{\_}NLI
null
null
10.18653/v1/2022.findings-emnlp.351
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,863
inproceedings
morris-etal-2022-unsupervised
Unsupervised Text Deidentification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.352/
Morris, John and Chiu, Justin and Zabih, Ramin and Rush, Alexander
Findings of the Association for Computational Linguistics: EMNLP 2022
4777--4788
Deidentification seeks to anonymize textual data prior to distribution. Automatic deidentification primarily uses supervised named entity recognition from human-labeled data points. We propose an unsupervised deidentification method that masks words that leak personally-identifying information. The approach utilizes a specially trained reidentification model to identify individuals from redacted personal documents. Motivated by K-anonymity based privacy, we generate redactions that ensure a minimum reidentification rank for the correct profile of the document. To evaluate this approach, we consider the task of deidentifying Wikipedia Biographies, and evaluate using an adversarial reidentification metric. Compared to a set of unsupervised baselines, our approach deidentifies documents more completely while removing fewer words. Qualitatively, we see that the approach eliminates many identifying aspects that would fall outside of the common named entity based approach.
null
null
10.18653/v1/2022.findings-emnlp.352
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,864
inproceedings
chaudhary-etal-2022-federated
Federated Continual Learning for Text Classification via Selective Inter-client Transfer
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.353/
Chaudhary, Yatin and Rai, Pranav and Schubert, Matthias and Sch{\"u}tze, Hinrich and Gupta, Pankaj
Findings of the Association for Computational Linguistics: EMNLP 2022
4789--4799
In this work, we combine two paradigms, Federated Learning (FL) and Continual Learning (CL), for the text classification task in the cloud-edge continuum. The objective of Federated Continual Learning (FCL) is to improve deep learning models over their lifetime at each client by (relevant and efficient) knowledge transfer without sharing data. Here, we address the challenge of minimizing inter-client interference during knowledge sharing, which arises from heterogeneous tasks across clients in the FCL setup. In doing so, we propose a novel framework, Federated Selective Inter-client Transfer (FedSeIT), which selectively combines model parameters of foreign clients. To further maximize knowledge transfer, we assess domain overlap and select informative tasks from the sequence of historical tasks at each foreign client while preserving privacy. Evaluating against the baselines, we show improved performance, a gain of (average) 12.4{\%} in text classification over a sequence of tasks using five datasets from diverse domains. To the best of our knowledge, this is the first work that applies FCL to NLP.
null
null
10.18653/v1/2022.findings-emnlp.353
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,865
inproceedings
ma-etal-2022-dorothie
{DOROTHIE}: Spoken Dialogue for Handling Unexpected Situations in Interactive Autonomous Driving Agents
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.354/
Ma, Ziqiao and VanDerPloeg, Benjamin and Bara, Cristian-Paul and Huang, Yidong and Kim, Eui-In and Gervits, Felix and Marge, Matthew and Chai, Joyce
Findings of the Association for Computational Linguistics: EMNLP 2022
4800--4822
In the real world, autonomous driving agents navigate in highly dynamic environments full of unexpected situations where pre-trained models are unreliable. In these situations, what is immediately available to vehicles is often only human operators. Empowering autonomous driving agents with the ability to navigate in a continuous and dynamic environment and to communicate with humans through sensorimotor-grounded dialogue becomes critical. To this end, we introduce Dialogue On the ROad To Handle Irregular Events (DOROTHIE), a novel interactive simulation platform that enables the creation of unexpected situations on the fly to support empirical studies on situated communication with autonomous driving agents. Based on this platform, we created the Situated Dialogue Navigation (SDN), a navigation benchmark of 183 trials with a total of 8415 utterances, around 18.7 hours of control streams, and 2.9 hours of trimmed audio. SDN is developed to evaluate the agent's ability to predict dialogue moves from humans as well as generate its own dialogue moves and physical navigation actions. We further developed a transformer-based baseline model for these SDN tasks. Our empirical results indicate that language-guided navigation in a highly dynamic environment is an extremely difficult task for end-to-end models. These results will provide insight towards future work on robust autonomous driving agents.
null
null
10.18653/v1/2022.findings-emnlp.354
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,866
inproceedings
bertsch-etal-2022-said
He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.355/
Bertsch, Amanda and Neubig, Graham and Gormley, Matthew R.
Findings of the Association for Computational Linguistics: EMNLP 2022
4823--4840
In this work, we define a new style transfer task: perspective shift, which reframes a dialogue from informal first person to a formal third person rephrasing of the text. This task requires challenging coreference resolution, emotion attribution, and interpretation of informal text. We explore several baseline approaches and discuss further directions on this task when applied to short dialogues. As a sample application, we demonstrate that applying perspective shifting to a dialogue summarization dataset (SAMSum) substantially improves the zero-shot performance of extractive news summarization models on this data. Additionally, supervised extractive models perform better when trained on perspective-shifted data than on the original dialogues. We release our code publicly.
null
null
10.18653/v1/2022.findings-emnlp.355
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,867
inproceedings
liu-etal-2022-dynamic-augmentation
Dynamic Augmentation Data Selection for Few-shot Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.findings-emnlp.356/
Liu, Guangliang and Jin, Lifeng and Yuan, Owen and Zhou, Jiayu
Findings of the Association for Computational Linguistics: EMNLP 2022
4841--4852
Data augmentation has been a popular method for fine-tuning pre-trained language models to increase model robustness and performance. With augmentation data coming from modifying gold train data (in-sample augmentation) or being harvested from general domain unlabeled data (out-of-sample augmentation), the quality of such data is the key to successful fine-tuning. In this paper, we propose a dynamic data selection method to select effective augmentation data from different augmentation sources according to the model's learning stage, by identifying a set of augmentation samples that optimally facilitates the learning process of the most current model. The method first filters out augmentation samples with noisy pseudo labels through a curriculum learning strategy, then estimates the effectiveness of the reserved augmentation data by its influence scores on the current model at every update, allowing the data selection process to be tightly tailored to the model parameters. The two-stage augmentation strategy considers in-sample augmentation and out-of-sample augmentation in different learning stages. Experiments with both kinds of augmentation data on a variety of sentence classification tasks show that our method outperforms strong baselines, demonstrating its effectiveness. Analysis confirms the dynamic nature of the data effectiveness and the importance of model learning stages in the utilization of augmentation data.
null
null
10.18653/v1/2022.findings-emnlp.356
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
26,868