Dataset schema (field: dtype, value range or class count):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
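The records listed below follow this schema, with the metric-style columns (n, wer, uas, f1, and so on) left null for these ACL Anthology entries. As a minimal sketch of how a row maps back onto a BibTeX entry, the snippet below loads the data with the Hugging Face datasets library and renders one record while skipping null fields; the dataset identifier "anthology-bib" and the BIB_FIELDS selection are placeholders and assumptions, since this listing does not name the dataset.

```python
# Minimal sketch: render one dataset row as a BibTeX entry, dropping null fields.
# Assumption: "anthology-bib" is a placeholder for the real Hub dataset id.
from datasets import load_dataset

BIB_FIELDS = [
    "title", "editor", "month", "year", "address", "publisher",
    "url", "author", "booktitle", "pages", "abstract", "doi", "note",
]

def row_to_bibtex(row: dict) -> str:
    """Render a record as BibTeX, keeping only its non-null fields."""
    body = ",\n".join(
        f"  {field} = {{{row[field]}}}"
        for field in BIB_FIELDS
        if row.get(field) is not None
    )
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

ds = load_dataset("anthology-bib", split="train")  # placeholder dataset id
print(row_to_bibtex(ds[0]))
```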
entry_type: inproceedings
citation_key: he-etal-2022-mabel
title: {MABEL}: Attenuating Gender Bias using Textual Entailment Data
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.657/
author: He, Jacqueline and Xia, Mengzhou and Fellbaum, Christiane and Chen, Danqi
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9681--9702
abstract:
Pre-trained language models encode undesirable social biases, which are further exacerbated in downstream use. To this end, we propose MABEL (a Method for Attenuating Gender Bias using Entailment Labels), an intermediate pre-training approach for mitigating gender bias in contextualized representations. Key to our approach is the use of a contrastive learning objective on counterfactually augmented, gender-balanced entailment pairs from natural language inference (NLI) datasets. We also introduce an alignment regularizer that pulls identical entailment pairs along opposite gender directions closer. We extensively evaluate our approach on intrinsic and extrinsic metrics, and show that MABEL outperforms previous task-agnostic debiasing approaches in terms of fairness. It also preserves task performance after fine-tuning on downstream tasks. Together, these findings demonstrate the suitability of NLI data as an effective means of bias mitigation, as opposed to only using unlabeled sentences in the literature. Finally, we identify that existing approaches often use evaluation settings that are insufficient or inconsistent. We make an effort to reproduce and compare previous methods, and call for unifying the evaluation settings across gender debiasing methods for better future comparison.
doi: 10.18653/v1/2022.emnlp-main.657
(all other fields: null)
__index_level_0__: 27,787
entry_type: inproceedings
citation_key: richardson-etal-2022-breakpoint
title: Breakpoint Transformers for Modeling and Tracking Intermediate Beliefs
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.658/
author: Richardson, Kyle and Tamari, Ronen and Sultan, Oren and Shahaf, Dafna and Tsarfaty, Reut and Sabharwal, Ashish
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9703--9719
abstract:
Can we teach models designed for language understanding tasks to track and improve their beliefs through intermediate points in text? Besides making their inner workings more transparent, this would also help make models more reliable and consistent. To this end, we propose a representation learning framework called breakpoint modeling that allows for efficient and robust learning of this type. Given any text encoder and data marked with intermediate states (breakpoints) along with corresponding textual queries viewed as true/false propositions (i.e., the candidate intermediate beliefs of a model), our approach trains models in an efficient and end-to-end fashion to build intermediate representations that facilitate direct querying and training of beliefs at arbitrary points in text, alongside solving other end-tasks. We evaluate breakpoint modeling on a diverse set of NLU tasks including relation reasoning on Cluttr and narrative understanding on bAbI. Using novel proposition prediction tasks alongside these end-tasks, we show the benefit of our T5-based breakpoint transformer over strong conventional representation learning approaches in terms of processing efficiency, belief accuracy, and belief consistency, all with minimal to no degradation on the end-task. To show the feasibility of incorporating our belief tracker into more complex reasoning pipelines, we also obtain state-of-the-art performance on the three-tiered reasoning challenge for the recent TRIP benchmark (23-32{\%} absolute improvement on Tasks 2-3).
doi: 10.18653/v1/2022.emnlp-main.658
(all other fields: null)
__index_level_0__: 27,788
entry_type: inproceedings
citation_key: qiu-etal-2022-late
title: Late Fusion with Triplet Margin Objective for Multimodal Ideology Prediction and Analysis
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.659/
author: Qiu, Changyuan and Wu, Winston and Zhang, Xinliang Frederick and Wang, Lu
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9720--9736
abstract:
Prior work on ideology prediction has largely focused on single modalities, i.e., text or images. In this work, we introduce the task of multimodal ideology prediction, where a model predicts binary or five-point scale ideological leanings, given a text-image pair with political content. We first collect five new large-scale datasets with English documents and images along with their ideological leanings, covering news articles from a wide range of mainstream media in US and social media posts from Reddit and Twitter. We conduct in-depth analyses on news articles and reveal differences in image content and usage across the political spectrum. Furthermore, we perform extensive experiments and ablation studies, demonstrating the effectiveness of targeted pretraining objectives on different model components. Our best-performing model, a late-fusion architecture pretrained with a triplet objective over multimodal content, outperforms the state-of-the-art text-only model by almost 4{\%} and a strong multimodal baseline with no pretraining by over 3{\%}.
doi: 10.18653/v1/2022.emnlp-main.659
(all other fields: null)
__index_level_0__: 27,789
entry_type: inproceedings
citation_key: mekala-etal-2022-leveraging
title: Leveraging {QA} Datasets to Improve Generative Data Augmentation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.660/
author: Mekala, Dheeraj and Vu, Tu and Schick, Timo and Shang, Jingbo
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9737--9750
abstract:
The ability of generative language models (GLMs) to generate text has improved considerably in the last few years, enabling their use for generative data augmentation. In this work, we propose CONDA, an approach to further improve GLM`s ability to generate synthetic data by reformulating data generation as context generation for a given question-answer (QA) pair and leveraging QA datasets for training context generators. Then, we cast downstream tasks into the same question answering format and adapt the fine-tuned context generators to the target task domain. Finally, we use the fine-tuned GLM to generate relevant contexts, which are in turn used as synthetic training data for their corresponding tasks. We perform extensive experiments on multiple classification datasets and demonstrate substantial improvements in performance for both few- and zero-shot settings. Our analysis reveals that QA datasets that require high-level reasoning abilities (e.g., abstractive and common-sense QA datasets) tend to give the best boost in performance in both few-shot and zero-shot settings.
doi: 10.18653/v1/2022.emnlp-main.660
(all other fields: null)
__index_level_0__: 27,790
entry_type: inproceedings
citation_key: clark-etal-2022-meta
title: Meta-Learning Fast Weight Language Models
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.661/
author: Clark, Kevin and Guu, Kelvin and Chang, Ming-Wei and Pasupat, Panupong and Hinton, Geoffrey and Norouzi, Mohammad
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9751--9757
abstract:
Dynamic evaluation of language models (LMs) adapts model parameters at test time using gradient information from previous tokens and substantially improves LM performance. However, it requires over 3x more compute than standard inference. We present Fast Weight Layers (FWLs), a neural component that provides the benefits of dynamic evaluation much more efficiently by expressing gradient updates as linear attention. A key improvement over dynamic evaluation is that FWLs can also be applied at training time, so the model learns to make good use of gradient updates. FWLs can easily be added on top of existing transformer models, require relatively little extra compute or memory to run, and significantly improve language modeling perplexity.
doi: 10.18653/v1/2022.emnlp-main.661
(all other fields: null)
__index_level_0__: 27,791
entry_type: inproceedings
citation_key: csordas-etal-2022-ctl
title: {CTL}++: Evaluating Generalization on Never-Seen Compositional Patterns of Known Functions, and Compatibility of Neural Representations
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.662/
author: Csord{\'a}s, R{\'o}bert and Irie, Kazuki and Schmidhuber, Juergen
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9758--9767
abstract:
Well-designed diagnostic tasks have played a key role in studying the failure of neural nets (NNs) to generalize systematically. Famous examples include SCAN and Compositional Table Lookup (CTL). Here we introduce CTL++, a new diagnostic dataset based on compositions of unary symbolic functions. While the original CTL is used to test length generalization or productivity, CTL++ is designed to test systematicity of NNs, that is, their capability to generalize to unseen compositions of known functions. CTL++ splits functions into groups and tests performance on group elements composed in a way not seen during training. We show that recent CTL-solving Transformer variants fail on CTL++. The simplicity of the task design allows for fine-grained control of task difficulty, as well as many insightful analyses. For example, we measure how much overlap between groups is needed by tested NNs for learning to compose. We also visualize how learned symbol representations in outputs of functions from different groups are compatible in case of success but not in case of failure. These results provide insights into failure cases reported on more complex compositions in the natural language domain. Our code is public.
doi: 10.18653/v1/2022.emnlp-main.662
(all other fields: null)
__index_level_0__: 27,792
entry_type: inproceedings
citation_key: cao-etal-2022-learning
title: Learning with Rejection for Abstractive Text Summarization
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.663/
author: Cao, Meng and Dong, Yue and He, Jingyi and Cheung, Jackie Chi Kit
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9768--9780
abstract:
State-of-the-art abstractive summarization systems frequently hallucinate content that is not supported by the source document, mainly due to noise in the training dataset.Existing methods opt to drop the noisy samples or tokens from the training set entirely, reducing the effective training set size and creating an artificial propensity to copy words from the source. In this work, we propose a training objective for abstractive summarization based on rejection learning, in which the model learns whether or not to reject potentially noisy tokens. We further propose a regularized decoding objective that penalizes non-factual candidate summaries during inference by using the rejection probability learned during training.We show that our method considerably improves the factuality of generated summaries in automatic and human evaluations when compared to five baseline models, and that it does so while increasing the abstractiveness of the generated summaries.
doi: 10.18653/v1/2022.emnlp-main.663
(all other fields: null)
__index_level_0__: 27,793
entry_type: inproceedings
citation_key: lee-etal-2022-adaptive
title: Adaptive Label Smoothing with Self-Knowledge in Natural Language Generation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.664/
author: Lee, Dongkyu and Cheung, Ka Chun and Zhang, Nevin
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9781--9792
abstract:
Overconfidence has been shown to impair generalization and calibration of a neural network. Previous studies remedy this issue by adding a regularization term to a loss function, preventing a model from making a peaked distribution. Label smoothing smoothes target labels with a pre-defined prior label distribution; as a result, a model is learned to maximize the likelihood of predicting the soft label. Nonetheless, the amount of smoothing is the same in all samples and remains fixed in training. In other words, label smoothing does not reflect the change in probability distribution mapped by a model over the course of training. To address this issue, we propose a regularization scheme that brings dynamic nature into the smoothing parameter by taking model probability distribution into account, thereby varying the parameter per instance. A model in training self-regulates the extent of smoothing on the fly during forward propagation. Furthermore, inspired by recent work in bridging label smoothing and knowledge distillation, our work utilizes self-knowledge as a prior label distribution in softening target labels, and presents theoretical support for the regularization effect by knowledge distillation and the dynamic smoothing parameter. Our regularizer is validated comprehensively, and the result illustrates marked improvements in model generalization and calibration, enhancing robustness and trustworthiness of a model.
doi: 10.18653/v1/2022.emnlp-main.664
(all other fields: null)
__index_level_0__: 27,794
entry_type: inproceedings
citation_key: lee-etal-2022-hard
title: Hard Gate Knowledge Distillation - Leverage Calibration for Robust and Reliable Language Model
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.665/
author: Lee, Dongkyu and Tian, Zhiliang and Zhao, Yingxiu and Cheung, Ka Chun and Zhang, Nevin
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9793--9803
abstract:
In knowledge distillation, a student model is trained with supervisions from both knowledge from a teacher and observations drawn from a training data distribution. Knowledge of a teacher is considered a subject that holds inter-class relations which send a meaningful supervision to a student; hence, much effort has been put to find such knowledge to be distilled. In this paper, we explore a question that has been given little attention: {\textquotedblleft}when to distill such knowledge.{\textquotedblright} The question is answered in our work with the concept of model calibration; we view a teacher model not only as a source of knowledge but also as a gauge to detect miscalibration of a student. This simple and yet novel view leads to a hard gate knowledge distillation scheme that switches between learning from a teacher model and training data. We verify the gating mechanism in the context of natural language generation at both the token-level and the sentence-level. Empirical comparisons with strong baselines show that hard gate knowledge distillation not only improves model generalization, but also significantly lowers model calibration error.
doi: 10.18653/v1/2022.emnlp-main.665
(all other fields: null)
__index_level_0__: 27,795
entry_type: inproceedings
citation_key: joshi-etal-2022-spurious
title: Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.666/
author: Joshi, Nitish and Pan, Xiang and He, He
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9804--9817
abstract:
The term {\textquoteleft}spurious correlations' has been used in NLP to informally denote any undesirable feature-label correlations. However, a correlation can be undesirable because (i) the feature is irrelevant to the label (e.g. punctuation in a review), or (ii) the feature`s effect on the label depends on the context (e.g. negation words in a review), which is ubiquitous in language tasks. In case (i), we want the model to be invariant to the feature, which is neither necessary nor sufficient for prediction. But in case (ii), even an ideal model (e.g. humans) must rely on the feature, since it is necessary (but not sufficient) for prediction. Therefore, a more fine-grained treatment of spurious features is needed to specify the desired model behavior. We formalize this distinction using a causal model and probabilities of necessity and sufficiency, which delineates the causal relations between a feature and a label. We then show that this distinction helps explain results of existing debiasing methods on different spurious features, and demystifies surprising results such as the encoding of spurious features in model representations after debiasing.
doi: 10.18653/v1/2022.emnlp-main.666
(all other fields: null)
__index_level_0__: 27,796
entry_type: inproceedings
citation_key: balachandran-etal-2022-correcting
title: Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.667/
author: Balachandran, Vidhisha and Hajishirzi, Hannaneh and Cohen, William and Tsvetkov, Yulia
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9818--9830
abstract:
Abstractive summarization models often generate inconsistent summaries containing factual errors or hallucinated content. Recent works focus on correcting factual errors in generated summaries via post-editing. Such correction models are trained using adversarial non-factual summaries constructed using heuristic rules for injecting errors. However, generating non-factual summaries using heuristics often does not generalize well to actual model errors. In this work, we propose to generate hard, representative synthetic examples of non-factual summaries through infilling language models. With this data, we train a more robust fact-correction model to post-edit the summaries to improve factual consistency. Through quantitative and qualitative experiments on two popular summarization datasets{---} CNN/DM and XSum{---}we show that our approach vastly outperforms prior methods in correcting erroneous summaries. Our model{---}FactEdit{---}improves factuality scores by over {\textasciitilde}11 points on CNN/DM and over {\textasciitilde}31 points on XSum on average across multiple summarization models, producing more factual summaries while maintaining competitive summarization quality.
doi: 10.18653/v1/2022.emnlp-main.667
(all other fields: null)
__index_level_0__: 27,797
entry_type: inproceedings
citation_key: akash-etal-2022-coordinated
title: Coordinated Topic Modeling
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.668/
author: Akash, Pritom Saha and Huang, Jie and Chang, Kevin Chen-Chuan
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9831--9843
abstract:
We propose a new problem called coordinated topic modeling that imitates human behavior while describing a text corpus. It considers a set of well-defined topics like the axes of a semantic space with a reference representation. It then uses the axes to model a corpus for easily understandable representation. This new task helps represent a corpus more interpretably by reusing existing knowledge and benefits the corpora comparison task. We design ECTM, an embedding-based coordinated topic model that effectively uses the reference representation to capture the target corpus-specific aspects while maintaining each topic`s global semantics. In ECTM, we introduce the topic- and document-level supervision with a self-training mechanism to solve the problem. Finally, extensive experiments on multiple domains show the superiority of our model over other baselines.
doi: 10.18653/v1/2022.emnlp-main.668
(all other fields: null)
__index_level_0__: 27,798
entry_type: inproceedings
citation_key: ni-etal-2022-large
title: Large Dual Encoders Are Generalizable Retrievers
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.669/
author: Ni, Jianmo and Qu, Chen and Lu, Jing and Dai, Zhuyun and Hernandez Abrego, Gustavo and Ma, Ji and Zhao, Vincent and Luan, Yi and Hall, Keith and Chang, Ming-Wei and Yang, Yinfei
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9844--9855
abstract:
It has been shown that dual encoders trained on one domain often fail to generalize to other domains for retrieval tasks. One widespread belief is that the bottleneck layer of a dual encoder, where the final score is simply a dot-product between a query vector and a passage vector, is too limited compared to models with fine-grained interactions between the query and the passage. In this paper, we challenge this belief by scaling up the size of the dual encoder model \textit{while keeping the bottleneck layer as a single dot-product with a fixed size.} With multi-stage training, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization. We further analyze the impact of the bottleneck layer and demonstrate diminishing improvement when scaling up the embedding size. Experimental results show that our dual encoders, \textbf{G}eneralizable \textbf{T}5-based dense \textbf{R}etrievers (GTR), outperform previous sparse and dense retrievers on the BEIR dataset significantly. Most surprisingly, our ablation study finds that GTR is very data efficient, as it only needs 10{\%} of MS Marco supervised data to match the out-of-domain performance of using all supervised data.
doi: 10.18653/v1/2022.emnlp-main.669
(all other fields: null)
__index_level_0__: 27,799
entry_type: inproceedings
citation_key: patel-etal-2022-cripp
title: {CRIPP}-{VQA}: Counterfactual Reasoning about Implicit Physical Properties via Video Question Answering
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.670/
author: Patel, Maitreya and Gokhale, Tejas and Baral, Chitta and Yang, Yezhou
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9856--9870
abstract:
Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings {--} videos with objects with masses, coefficients of friction, and initial velocities that are not observed in the training distribution. Our experiments reveal a surprising and significant performance gap in terms of answering questions about implicit properties (the focus of this paper) and explicit properties of objects (the focus of prior work).
doi: 10.18653/v1/2022.emnlp-main.670
(all other fields: null)
__index_level_0__: 27,800
entry_type: inproceedings
citation_key: wang-etal-2022-entity
title: Entity-centered Cross-document Relation Extraction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.671/
author: Wang, Fengqi and Li, Fei and Fei, Hao and Li, Jingye and Wu, Shengqiong and Su, Fangfang and Shi, Wenxuan and Ji, Donghong and Cai, Bo
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9871--9881
abstract:
Relation Extraction (RE) is a fundamental task of information extraction, which has attracted a large amount of research attention. Previous studies focus on extracting the relations within a sentence or document, while currently researchers begin to explore cross-document RE. However, current cross-document RE methods directly utilize text snippets surrounding target entities in multiple given documents, which brings considerable noisy and non-relevant sentences. Moreover, they utilize all the text paths in a document bag in a coarse-grained way, without considering the connections between these text paths.In this paper, we aim to address both of these shortages and push the state-of-the-art for cross-document RE. First, we focus on input construction for our RE model and propose an entity-based document-context filter to retain useful information in the given documents by using the bridge entities in the text paths. Second, we propose a cross-document RE model based on cross-path entity relation attention, which allow the entity relations across text paths to interact with each other. We compare our cross-document RE method with the state-of-the-art methods in the dataset CodRED. Our method outperforms them by at least 10{\%} in F1, thus demonstrating its effectiveness.
doi: 10.18653/v1/2022.emnlp-main.671
(all other fields: null)
__index_level_0__: 27,801
entry_type: inproceedings
citation_key: thai-etal-2022-exploring
title: Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.672/
author: Thai, Katherine and Karpinska, Marzena and Krishna, Kalpesh and Ray, Bill and Inghilleri, Moira and Wieting, John and Iyyer, Mohit
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9882--9902
abstract:
Literary translation is a culturally significant task, but it is bottlenecked by the small number of qualified literary translators relative to the many untranslated works published around the world. Machine translation (MT) holds potential to complement the work of human translators by improving both training procedures and their overall efficiency. Literary translation is less constrained than more traditional MT settings since translators must balance meaning equivalence, readability, and critical interpretability in the target language. This property, along with the complex discourse-level context present in literary texts, also makes literary MT more challenging to computationally model and evaluate. To explore this task, we collect a dataset (Par3) of non-English language novels in the public domain, each aligned at the paragraph level to both human and automatic English translations. Using Par3, we discover that expert literary translators prefer reference human translations over machine-translated paragraphs at a rate of 84{\%}, while state-of-the-art automatic MT metrics do not correlate with those preferences. The experts note that MT outputs contain not only mistranslations, but also discourse-disrupting errors and stylistic inconsistencies. To address these problems, we train a post-editing model whose output is preferred over normal MT output at a rate of 69{\%} by experts. We publicly release Par3 to spur future research into literary MT.
doi: 10.18653/v1/2022.emnlp-main.672
(all other fields: null)
__index_level_0__: 27,802
entry_type: inproceedings
citation_key: liang-etal-2022-label
title: Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.673/
author: Liang, Shining and Shou, Linjun and Pei, Jian and Gong, Ming and Zuo, Wanli and Zuo, Xianglin and Jiang, Daxin
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9903--9918
abstract:
Despite the great success of spoken language understanding (SLU) in high-resource languages, it remains challenging in low-resource languages mainly due to the lack of labeled training data. The recent multilingual code-switching approach achieves better alignments of model representations across languages by constructing a mixed-language context in zero-shot cross-lingual SLU. However, current code-switching methods are limited to implicit alignment and disregard the inherent semantic structure in SLU, i.e., the hierarchical inclusion of utterances, slots and words. In this paper, we propose to model the utterance-slot-word structure by a multi-level contrastive learning framework at the utterance, slot and word levels to facilitate explicit alignment. Novel code-switching schemes are introduced to generate hard negative examples for our contrastive learning framework. Furthermore, we develop a label-aware joint model leveraging label semantics to enhance the implicit alignment and feed to contrastive learning. Our experimental results show that our proposed methods significantly improve the performance compared with the strong baselines on two zero-shot cross-lingual SLU benchmark datasets.
doi: 10.18653/v1/2022.emnlp-main.673
(all other fields: null)
__index_level_0__: 27,803
entry_type: inproceedings
citation_key: fu-etal-2022-polyglot
title: Polyglot Prompt: Multilingual Multitask Prompt Training
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.674/
author: Fu, Jinlan and Ng, See-Kiong and Liu, Pengfei
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9919--9935
abstract:
This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e. without any task/language-specific module? The benefit of achieving this could open new doors for future multilingual research, including allowing systems trained on low resources to be further assisted by other languages as well as other tasks. We approach this goal by developing a learning framework named Polyglot Prompting to exploit prompting methods for learning a unified semantic space for different languages and tasks with multilingual prompt engineering. We performed a comprehensive evaluation of 6 tasks, namely topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization, covering 24 datasets and 49 languages. The experimental results demonstrated the efficacy of multilingual multitask prompt-based learning and led to inspiring observations. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and code.
doi: 10.18653/v1/2022.emnlp-main.674
(all other fields: null)
__index_level_0__: 27,804
entry_type: inproceedings
citation_key: gatti-etal-2022-vistot
title: {V}is{T}o{T}: Vision-Augmented Table-to-Text Generation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.675/
author: Gatti, Prajwal and Mishra, Anand and Gupta, Manish and Das Gupta, Mithun
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9936--9949
abstract:
Table-to-text generation has been widely studied in the Natural Language Processing community in the recent years. We give a new perspective to this problem by incorporating signals from both tables as well as associated images to generate relevant text. While tables contain a structured list of facts, images are a rich source of unstructured visual information. For example, in the tourism domain, images can be used to infer knowledge such as the type of landmark (e.g., church), its architecture (e.g., Ancient Roman), and composition (e.g., white marble). Therefore, in this paper, we introduce the novel task of Vision-augmented Table-To-Text Generation (VisToT, defined as follows: given a table and an associated image, produce a descriptive sentence conditioned on the multimodal input. For the task, we present a novel multimodal table-to-text dataset, WikiLandmarks, covering 73,084 unique world landmarks. Further, we also present a competitive architecture, namely, VT3 that generates accurate sentences conditioned on the image and table pairs. Through extensive analyses and experiments, we show that visual cues from images are helpful in (i) inferring missing information from incomplete or sparse tables, and (ii) strengthening the importance of useful information from noisy tables for natural language generation. We make the code and data publicly available.
doi: 10.18653/v1/2022.emnlp-main.675
(all other fields: null)
__index_level_0__: 27,805
entry_type: inproceedings
citation_key: zhang-etal-2022-generative
title: Generative Entity-to-Entity Stance Detection with Knowledge Graph Augmentation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.676/
author: Zhang, Xinliang Frederick and Beauchamp, Nick and Wang, Lu
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9950--9969
abstract:
Stance detection is typically framed as predicting the sentiment in a given text towards a target entity. However, this setup overlooks the importance of the source entity, i.e., who is expressing the opinion. In this paper, we emphasize the imperative need for studying interactions among entities when inferring stances. We first introduce a new task, entity-to-entity (E2E) stance detection, which primes models to identify entities in their canonical names and discern stances jointly. To support this study, we curate a new dataset with 10,641 annotations labeled at the sentence level from news articles of different ideological leanings. We present a novel generative framework to allow the generation of canonical names for entities as well as stances among them. We further enhance the model with a graph encoder to summarize entity activities and external knowledge surrounding the entities. Experiments show that our model outperforms strong comparisons by large margins. Further analyses demonstrate the usefulness of E2E stance detection for understanding media quotation and stance landscape as well as inferring entity ideology.
doi: 10.18653/v1/2022.emnlp-main.676
(all other fields: null)
__index_level_0__: 27,806
entry_type: inproceedings
citation_key: zhang-etal-2022-symptom
title: Symptom Identification for Interpretable Detection of Multiple Mental Disorders on Social Media
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.677/
author: Zhang, Zhiling and Chen, Siyuan and Wu, Mengyue and Zhu, Kenny
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9970--9985
abstract:
Mental disease detection (MDD) from social media has suffered from poor generalizability and interpretability, due to lack of symptom modeling. This paper introduces PsySym, the first annotated symptom identification corpus of multiple psychiatric disorders, to facilitate further research progress. PsySym is annotated according to a knowledge graph of the 38 symptom classes related to 7 mental diseases complied from established clinical manuals and scales, and a novel annotation framework for diversity and quality. Experiments show that symptom-assisted MDD enabled by PsySym can outperform strong pure-text baselines. We also exhibit the convincing MDD explanations provided by symptom predictions with case studies, and point to their further potential applications.
doi: 10.18653/v1/2022.emnlp-main.677
(all other fields: null)
__index_level_0__: 27,807
entry_type: inproceedings
citation_key: kim-etal-2022-improving
title: Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.678/
author: Kim, Zae Myung and Du, Wanyu and Raheja, Vipul and Kumar, Dhruv and Kang, Dongyeop
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 9986--9999
abstract:
Iterative text revision improves text quality by fixing grammatical errors, rephrasing for better readability or contextual appropriateness, or reorganizing sentence structures throughout a document.Most recent research has focused on understanding and classifying different types of edits in the iterative revision process from human-written text instead of building accurate and robust systems for iterative text revision.In this work, we aim to build an end-to-end text revision system that can iteratively generate helpful edits by explicitly detecting editable spans (where-to-edit) with their corresponding edit intents and then instructing a revision model to revise the detected edit spans.Leveraging datasets from other related text editing NLP tasks, combined with the specification of editable spans, leads our system to more accurately model the process of iterative text refinement, as evidenced by empirical results and human evaluations.Our system significantly outperforms previous baselines on our text revision tasks and other standard text revision tasks, including grammatical error correction, text simplification, sentence fusion, and style transfer.Through extensive qualitative and quantitative analysis, we make vital connections between edit intentions and writing quality, and better computational modeling of iterative text revisions.
doi: 10.18653/v1/2022.emnlp-main.678
(all other fields: null)
__index_level_0__: 27,808
entry_type: inproceedings
citation_key: wu-etal-2022-conqrr
title: {CONQRR}: Conversational Query Rewriting for Retrieval with Reinforcement Learning
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.679/
author: Wu, Zeqiu and Luan, Yi and Rashkin, Hannah and Reitter, David and Hajishirzi, Hannaneh and Ostendorf, Mari and Tomar, Gaurav Singh
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10000--10014
abstract:
Compared to standard retrieval tasks, passage retrieval for conversational question answering (CQA) poses new challenges in understanding the current user question, as each question needs to be interpreted within the dialogue context. Moreover, it can be expensive to re-train well-established retrievers such as search engines that are originally developed for non-conversational queries. To facilitate their use, we develop a query rewriting model CONQRR that rewrites a conversational question in the context into a standalone question. It is trained with a novel reward function to directly optimize towards retrieval using reinforcement learning and can be adapted to any off-the-shelf retriever. CONQRR achieves state-of-the-art results on a recent open-domain CQA dataset containing conversations from three different sources, and is effective for two different off-the-shelf retrievers. Our extensive analysis also shows the robustness of CONQRR to out-of-domain dialogues as well as to zero query rewriting supervision.
doi: 10.18653/v1/2022.emnlp-main.679
(all other fields: null)
__index_level_0__: 27,809
entry_type: inproceedings
citation_key: lee-etal-2022-specializing
title: Specializing Multi-domain {NMT} via Penalizing Low Mutual Information
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.680/
author: Lee, Jiyoung and Kim, Hantae and Cho, Hyunchang and Choi, Edward and Park, Cheonbok
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10015--10026
abstract:
Multi-domain Neural Machine Translation (NMT) trains a single model with multiple domains. It is appealing because of its efficacy in handling multiple domains within one model. An ideal multi-domain NMT learns distinctive domain characteristics simultaneously, however, grasping the domain peculiarity is a non-trivial task. In this paper, we investigate domain-specific information through the lens of mutual information (MI) and propose a new objective that penalizes low MI to become higher.Our method achieved the state-of-the-art performance among the current competitive multi-domain NMT models. Also, we show our objective promotes low MI to be higher resulting in domain-specialized multi-domain NMT.
doi: 10.18653/v1/2022.emnlp-main.680
(all other fields: null)
__index_level_0__: 27,810
entry_type: inproceedings
citation_key: shi-etal-2022-simple
title: A Simple Contrastive Learning Framework for Interactive Argument Pair Identification via Argument-Context Extraction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.681/
author: Shi, Lida and Giunchiglia, Fausto and Song, Rui and Shi, Daqian and Liu, Tongtong and Diao, Xiaolei and Xu, Hao
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10027--10039
abstract:
Interactive argument pair identification is an emerging research task for argument mining, aiming to identify whether two arguments are interactively related. It is pointed out that the context of the argument is essential to improve identification performance. However, current context-based methods achieve limited improvements since the entire context typically contains much irrelevant information. In this paper, we propose a simple contrastive learning framework to solve this problem by extracting valuable information from the context. This framework can construct hard argument-context samples and obtain a robust and uniform representation by introducing contrastive learning. We also propose an argument-context extraction module to enhance information extraction by discarding irrelevant blocks. The experimental results show that our method achieves the state-of-the-art performance on the benchmark dataset. Further analysis demonstrates the effectiveness of our proposed modules and visually displays more compact semantic representations.
doi: 10.18653/v1/2022.emnlp-main.681
(all other fields: null)
__index_level_0__: 27,811
entry_type: inproceedings
citation_key: lei-etal-2022-sentence
title: Sentence-level Media Bias Analysis Informed by Discourse Structures
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.682/
author: Lei, Yuanyuan and Huang, Ruihong and Wang, Lu and Beauchamp, Nick
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10040--10050
abstract:
As polarization continues to rise among both the public and the news media, increasing attention has been devoted to detecting media bias. Most recent work in the NLP community, however, identify bias at the level of individual articles. However, each article itself comprises multiple sentences, which vary in their ideological bias. In this paper, we aim to identify sentences within an article that can illuminate and explain the overall bias of the entire article. We show that understanding the discourse role of a sentence in telling a news story, as well as its relation with nearby sentences, can reveal the ideological leanings of an author even when the sentence itself appears merely neutral. In particular, we consider using a functional news discourse structure and PDTB discourse relations to inform bias sentence identification, and distill the auxiliary knowledge from the two types of discourse structure into our bias sentence identification system. Experimental results on benchmark datasets show that incorporating both the global functional discourse structure and local rhetorical discourse relations can effectively increase the recall of bias sentence identification by 8.27{\%} - 8.62{\%}, as well as increase the precision by 2.82{\%} - 3.48{\%}.
doi: 10.18653/v1/2022.emnlp-main.682
(all other fields: null)
__index_level_0__: 27,812
entry_type: inproceedings
citation_key: zhao-etal-2022-towards
title: Towards Efficient Dialogue Pre-training with Transferable and Interpretable Latent Structure
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.683/
author: Zhao, Xueliang and Liu, Lemao and Fu, Tingchen and Shi, Shuming and Zhao, Dongyan and Yan, Rui
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10051--10063
abstract:
With the availability of massive general-domain dialogue data, pre-trained dialogue generation appears to be super appealing to transfer knowledge from the general domain to downstream applications. In most existing work, such transferable ability is mainly obtained by fitting a large model with hundreds of millions of parameters on massive data in an exhaustive way, leading to inefficient running and poor interpretability. This paper proposes a novel dialogue generation model with a latent structure that is easily transferable from the general domain to downstream tasks in a lightweight and transparent way. Experiments on two benchmarks validate the effectiveness of the proposed model. Thanks to the transferable latent structure, our model is able to yield better dialogue responses than four strong baselines in terms of both automatic and human evaluations, and our model with about 22{\%} parameters particularly delivers a 5x speedup in running time compared with the strongest baseline. Moreover, the proposed model is explainable by interpreting the discrete latent variables.
doi: 10.18653/v1/2022.emnlp-main.683
(all other fields: null)
__index_level_0__: 27,813
entry_type: inproceedings
citation_key: yu-etal-2022-empirical
title: An Empirical Revisiting of Linguistic Knowledge Fusion in Language Understanding Tasks
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.684/
author: Yu, Changlong and Xiao, Tianyi and Kong, Lingpeng and Song, Yangqiu and Ng, Wilfred
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10064--10070
abstract:
Though linguistic knowledge emerges during large-scale language model pretraining, recent work attempt to explicitly incorporate human-defined linguistic priors into task-specific fine-tuning. Infusing language models with syntactic or semantic knowledge from parsers has shown improvements on many language understanding tasks. To further investigate the effectiveness of structural linguistic priors, we conduct empirical study of replacing parsed graphs or trees with trivial ones (rarely carrying linguistic knowledge e.g., balanced tree) for tasks in the GLUE benchmark. Encoding with trivial graphs achieves competitive or even better performance in fully-supervised and few-shot settings. It reveals that the gains might not be significantly attributed to explicit linguistic priors but rather to more feature interactions brought by fusion layers. Hence we call for attention to using trivial graphs as necessary baselines to design advanced knowledge fusion methods in the future.
doi: 10.18653/v1/2022.emnlp-main.684
(all other fields: null)
__index_level_0__: 27,814
entry_type: inproceedings
citation_key: zeng-lu-2022-unsupervised
title: Unsupervised Non-transferable Text Classification
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.685/
author: Zeng, Guangtao and Lu, Wei
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10071--10084
abstract:
Training a good deep learning model requires substantial data and computing resources, which makes the resulting neural model a valuable intellectual property. To prevent the neural network from being undesirably exploited, non-transferable learning has been proposed to reduce the model generalization ability in specific target domains. However, existing approaches require labeled data for the target domain which can be difficult to obtain. Furthermore, they do not have the mechanism to still recover the model`s ability to access the target domain.In this paper, we propose a novel unsupervised non-transferable learning method for the text classification task that does not require annotated target domain data. We further introduce a secret key component in our approach for recovering the access to the target domain, where we design both an explicit and an implicit method for doing so. Extensive experiments demonstrate the effectiveness of our approach.
doi: 10.18653/v1/2022.emnlp-main.685
(all other fields: null)
__index_level_0__: 27,815
entry_type: inproceedings
citation_key: nguyen-etal-2022-adaptive
title: Adaptive Contrastive Learning on Multimodal Transformer for Review Helpfulness Prediction
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.686/
author: Nguyen, Thong and Wu, Xiaobao and Luu, Anh Tuan and Hai, Zhen and Bing, Lidong
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10085--10096
abstract:
Modern Review Helpfulness Prediction systems are dependent upon multiple modalities, typically texts and images. Unfortunately, those contemporary approaches pay scarce attention to polish representations of cross-modal relations and tend to suffer from inferior optimization. This might cause harm to model`s predictions in numerous cases. To overcome the aforementioned issues, we propose Multi-modal Contrastive Learning for Multimodal Review Helpfulness Prediction (MRHP) problem, concentrating on mutual information between input modalities to explicitly elaborate cross-modal relations. In addition, we introduce Adaptive Weighting scheme for our contrastive learning approach in order to increase flexibility in optimization. Lastly, we propose Multimodal Interaction module to address the unalignment nature of multimodal data, thereby assisting the model in producing more reasonable multimodal representations. Experimental results show that our method outperforms prior baselines and achieves state-of-the-art results on two publicly available benchmark datasets for MRHP problem.
doi: 10.18653/v1/2022.emnlp-main.686
(all other fields: null)
__index_level_0__: 27,816
entry_type: inproceedings
citation_key: liu-etal-2022-adaptive
title: Adaptive Token-level Cross-lingual Feature Mixing for Multilingual Neural Machine Translation
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.687/
author: Liu, Junpeng and Huang, Kaiyu and Li, Jiuyi and Liu, Huan and Su, Jinsong and Huang, Degen
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10097--10113
abstract:
Multilingual neural machine translation aims to translate multiple language pairs in a single model and has shown great success thanks to the knowledge transfer across languages with the shared parameters. Despite promising, this share-all paradigm suffers from insufficient ability to capture language-specific features. Currently, the common practice is to insert or search language-specific networks to balance the shared and specific features. However, those two types of features are not sufficient enough to model the complex commonality and divergence across languages, such as the locally shared features among similar languages, which leads to sub-optimal transfer, especially in massively multilingual translation. In this paper, we propose a novel token-level feature mixing method that enables the model to capture different features and dynamically determine the feature sharing across languages. Based on the observation that the tokens in the multilingual model are usually shared by different languages, we we insert a feature mixing layer into each Transformer sublayer and model each token representation as a mix of different features, with a proportion indicating its feature preference. In this way, we can perform fine-grained feature sharing and achieve better multilingual transfer. Experimental results on multilingual datasets show that our method outperforms various strong baselines and can be extended to zero-shot translation. Further analyses reveal that our method can capture different linguistic features and bridge the representation gap across languages.
doi: 10.18653/v1/2022.emnlp-main.687
(all other fields: null)
__index_level_0__: 27,817
entry_type: inproceedings
citation_key: chia-etal-2022-dataset
title: A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach
editor: Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
month: dec
year: 2022
address: Abu Dhabi, United Arab Emirates
publisher: Association for Computational Linguistics
url: https://aclanthology.org/2022.emnlp-main.688/
author: Chia, Yew Ken and Bing, Lidong and Aljunied, Sharifah Mahani and Si, Luo and Poria, Soujanya
booktitle: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
pages: 10114--10133
abstract:
Relation extraction has the potential for large-scale knowledge graph construction, but current methods do not consider the qualifier attributes for each relation triplet, such as time, quantity or location. The qualifiers form hyper-relational facts which better capture the rich and complex knowledge graph structure. For example, the relation triplet (Leonard Parker, Educated At, Harvard University) can be factually enriched by including the qualifier (End Time, 1967). Hence, we propose the task of hyper-relational extraction to extract more specific and complete facts from text. To support the task, we construct HyperRED, a large-scale and general-purpose dataset. Existing models cannot perform hyper-relational extraction as it requires a model to consider the interaction between three entities. Hence, we propose CubeRE, a cube-filling model inspired by table-filling approaches and explicitly considers the interaction between relation triplets and qualifiers. To improve model scalability and reduce negative class imbalance, we further propose a cube-pruning method. Our experiments show that CubeRE outperforms strong baselines and reveal possible directions for future research. Our code and data are available at github.com/declare-lab/HyperRED.
null
null
10.18653/v1/2022.emnlp-main.688
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,818
inproceedings
yang-etal-2022-low
Low-resource Neural Machine Translation with Cross-modal Alignment
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.689/
Yang, Zhe and Fang, Qingkai and Feng, Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10134--10146
How to achieve neural machine translation with limited parallel data? Existing techniques often rely on large-scale monolingual corpus, which is impractical for some low-resource languages. In this paper, we turn to connect several low-resource languages to a particular high-resource one by additional visual modality. Specifically, we propose a cross-modal contrastive learning method to learn a shared space for all languages, where both a coarse-grained sentence-level objective and a fine-grained token-level one are introduced. Experimental results and further analysis show that our method can effectively learn the cross-modal and cross-lingual alignment with a small amount of image-text pairs, and achieves significant improvements over the text-only baseline under both zero-shot and few-shot scenarios.
null
null
10.18653/v1/2022.emnlp-main.689
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,819
inproceedings
jia-zhang-2022-prompt
Prompt-based Distribution Alignment for Domain Generalization in Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.690/
Jia, Chen and Zhang, Yue
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10147--10157
Prompt-based learning (a.k.a. prompting) achieves high performance by bridging the gap between the objectives of language modeling and downstream tasks. Domain generalization ability can be improved by prompting since classification across different domains can be unified into the prediction of the same set of label words. The remaining challenge for domain generalization by prompting comes from discrepancies between the data distribution of different domains. To improve domain generalization with prompting, we learn distributional invariance across source domains via two alignment regularization loss functions. The first is vocabulary distribution alignment, which uses a Kullback-Leibler divergence regularization on source-domain vocabulary distributions. The second is feature distribution alignment, which uses a novel adversarial training strategy to learn domain invariant representation across source domains. Experiments on sentiment analysis and natural language inference show the effectiveness of our method and achieve state-of-the-art results on six datasets.
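A minimal sketch of a vocabulary-distribution alignment term of the kind described above: a symmetrized Kullback-Leibler divergence between the label-word distributions a prompted model predicts under two source domains. The function and tensor names are assumptions; the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def vocab_alignment_kl(logits_domain_a, logits_domain_b):
    # logits_*: (batch, vocab) scores over label words for the same prompt template,
    # computed on examples from two different source domains.
    log_p = F.log_softmax(logits_domain_a, dim=-1)
    log_q = F.log_softmax(logits_domain_b, dim=-1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

# toy usage: vocab_alignment_kl(torch.randn(4, 30522), torch.randn(4, 30522))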
null
null
10.18653/v1/2022.emnlp-main.690
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,820
inproceedings
ghosal-etal-2022-two
Two is Better than Many? Binary Classification as an Effective Approach to Multi-Choice Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.691/
Ghosal, Deepanway and Majumder, Navonil and Mihalcea, Rada and Poria, Soujanya
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10158--10166
We propose a simple refactoring of multi-choice question answering (MCQA) tasks as a series of binary classifications. The MCQA task is generally performed by scoring each (question, answer) pair normalized over all the pairs, and then selecting the answer from the pair that yields the highest score. For n answer choices, this is equivalent to an n-class classification setup where only one class (true answer) is correct. We instead show that classifying (question, true answer) as positive instances and (question, false answer) as negative instances is significantly more effective across various models and datasets. We show the efficacy of our proposed approach in different tasks {--} abductive reasoning, commonsense question answering, science question answering, and sentence completion. Our DeBERTa binary classification model reaches the top or close to the top performance on public leaderboards for these tasks. The source code of the proposed approach is available at https://github.com/declare-lab/TEAM.
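A minimal sketch of the binary-classification refactoring described above: each MCQ example is expanded into n (question, choice) instances with binary labels for training, and at inference the choice with the highest positive score wins. score_fn is a hypothetical placeholder for a fine-tuned binary classifier, not an API from the paper.

def to_binary_instances(question, choices, true_idx):
    # One n-way MCQ example becomes n binary instances:
    # (question, correct answer) -> label 1, (question, wrong answer) -> label 0.
    return [((question, choice), int(i == true_idx)) for i, choice in enumerate(choices)]

def answer_mcq(question, choices, score_fn):
    # score_fn(question, choice) returns the positive-class probability of the pair.
    scores = [score_fn(question, choice) for choice in choices]
    return max(range(len(choices)), key=lambda i: scores[i])

# toy usage (my_classifier is hypothetical):
# train_data = to_binary_instances("Why is the sky blue?", ["Rayleigh scattering", "Mirrors"], 0)
# best_idx = answer_mcq("Why is the sky blue?", ["Rayleigh scattering", "Mirrors"], my_classifier)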
null
null
10.18653/v1/2022.emnlp-main.691
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,821
inproceedings
zhang-etal-2022-hegel
{HEGEL}: Hypergraph Transformer for Long Document Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.692/
Zhang, Haopeng and Liu, Xiao and Zhang, Jiawei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10167--10176
Extractive summarization for long documents is challenging due to the extended structured input context. The long-distance sentence dependency hinders cross-sentence relations modeling, the critical step of extractive summarization. This paper proposes HEGEL, a hypergraph neural network for long document summarization by capturing high-order cross-sentence relations. HEGEL updates and learns effective sentence representations with hypergraph transformer layers and fuses different types of sentence dependencies, including latent topics, keywords coreference, and section structure. We validate HEGEL by conducting extensive experiments on two benchmark datasets, and experimental results demonstrate the effectiveness and efficiency of HEGEL.
null
null
10.18653/v1/2022.emnlp-main.692
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,822
inproceedings
ke-etal-2022-adapting
Adapting a Language Model While Preserving its General Knowledge
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.693/
Ke, Zixuan and Shao, Yijia and Lin, Haowei and Xu, Hu and Shu, Lei and Liu, Bing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10177--10188
Domain-adaptive pre-training (or DA-training for short), also known as post-training, aims to train a pre-trained general-purpose language model (LM) using an unlabeled corpus of a particular domain to adapt the LM so that end-tasks in the domain can give improved performances. However, existing DA-training methods are in some sense blind as they do not explicitly identify what knowledge in the LM should be preserved and what should be changed by the domain corpus. This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general and the full (both general and domain knowledge) to learn an integrated representation with both general and domain-specific knowledge. Experimental results demonstrate the effectiveness of the proposed approach.
null
null
10.18653/v1/2022.emnlp-main.693
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,823
inproceedings
li-etal-2022-human
Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.694/
Li, Raymond and Xiao, Wen and Xing, Linzi and Wang, Lanjun and Murray, Gabriel and Carenini, Giuseppe
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10189--10204
The multi-head self-attention mechanism of the transformer model has been thoroughly investigated recently. In one vein of study, researchers are interested in understanding why and how transformers work. In another vein, researchers propose new attention augmentation methods to make transformers more accurate, efficient and interpretable. In this paper, we combine these two lines of research in a human-in-the-loop pipeline to first discover important task-specific attention patterns. Then those patterns are injected, not only to smaller models, but also to the original model. The benefits of our pipeline and discovered patterns are demonstrated in two case studies with extractive summarization and topic segmentation. After discovering interpretable patterns in BERT-based models fine-tuned for the two downstream tasks, experiments indicate that when we inject the patterns into attention heads, the models show considerable improvements in accuracy and efficiency.
null
null
10.18653/v1/2022.emnlp-main.694
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,824
inproceedings
ke-etal-2022-continual
Continual Training of Language Models for Few-Shot Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.695/
Ke, Zixuan and Lin, Haowei and Shao, Yijia and Xu, Hu and Shu, Lei and Liu, Bing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10205--10216
Recent work on applying large language models (LMs) achieves impressive performance in many NLP applications. Adapting or post-training an LM using an unlabeled domain corpus can produce even better performance for end-tasks in the domain. This paper proposes the problem of continually extending an LM by incrementally post-training the LM with a sequence of unlabeled domain corpora to expand its knowledge without forgetting its previous skills. The goal is to improve the few-shot end-task learning in these domains. The resulting system is called CPT (Continual PostTraining), which, to our knowledge, is the first continual post-training system. Experimental results verify its effectiveness.
null
null
10.18653/v1/2022.emnlp-main.695
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,825
inproceedings
wu-etal-2022-dictionary
Dictionary-Assisted Supervised Contrastive Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.696/
Wu, Patrick Y. and Bonneau, Richard and Tucker, Joshua A. and Nagler, Jonathan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10217--10235
Text analysis in the social sciences often involves using specialized dictionaries to reason with abstract concepts, such as perceptions about the economy or abuse on social media. These dictionaries allow researchers to impart domain knowledge and note subtle usages of words relating to a concept(s) of interest. We introduce the dictionary-assisted supervised contrastive learning (DASCL) objective, allowing researchers to leverage specialized dictionaries when fine-tuning pretrained language models. The text is first keyword simplified: a common, fixed token replaces any word in the corpus that appears in the dictionary(ies) relevant to the concept of interest. During fine-tuning, a supervised contrastive objective draws closer the embeddings of the original and keyword-simplified texts of the same class while pushing further apart the embeddings of different classes. The keyword-simplified texts of the same class are more textually similar than their original text counterparts, which additionally draws the embeddings of the same class closer together. Combining DASCL and cross-entropy improves classification performance metrics in few-shot learning settings and social science applications compared to using cross-entropy alone and alternative contrastive and data augmentation methods.
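A minimal sketch of the keyword-simplification step described above: every word found in the concept dictionary is replaced by one fixed token, producing the second view used in the supervised contrastive objective. The fixed token, tokenization, and example dictionary are illustrative assumptions.

import re

def keyword_simplify(text, dictionary, fixed_token="econword"):
    # Replace any word that appears in the dictionary with a single fixed token,
    # keeping punctuation and spacing intact.
    dictionary = {w.lower() for w in dictionary}
    pieces = re.findall(r"\w+|\W+", text)
    return "".join(fixed_token if p.lower() in dictionary else p for p in pieces)

# keyword_simplify("Prices and inflation keep rising.", {"prices", "inflation"})
# -> "econword and econword keep rising."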
null
null
10.18653/v1/2022.emnlp-main.696
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,826
inproceedings
mao-2022-fine
Fine-Tuning Pre-trained Transformers into Decaying Fast Weights
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.697/
Mao, Huanru Henry
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10236--10242
Autoregressive Transformers are strong language models but incur O(T) complexity during per-token generation due to the self-attention mechanism. Recent work proposes kernel-based methods to approximate causal self-attention by replacing it with recurrent formulations with various update rules and feature maps to achieve O(1) time and memory complexity. We explore these approaches and find that they are unnecessarily complex, and propose a simple alternative - decaying fast weights - that runs fast on GPU, outperforms prior methods, and retains 99{\%} of attention's performance for GPT-2. We also show competitive performance on WikiText-103 against more complex attention substitutes.
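A minimal sketch of a decaying fast-weight recurrence of the kind the abstract describes as a drop-in for causal self-attention: a running outer-product state is decayed and updated once per token, giving O(1) memory per step. The decay parameterization here is an assumption, not the paper's exact update rule.

import torch

def decaying_fast_weights(q, k, v, decay):
    # q, k, v: (batch, seq_len, dim); decay: (dim,) values in (0, 1).
    batch, seq_len, dim = q.shape
    state = q.new_zeros(batch, dim, dim)             # running sum of decayed k v^T outer products
    outputs = []
    for t in range(seq_len):
        state = decay.view(-1, 1) * state + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(-2)
        outputs.append(torch.einsum("bd,bde->be", q[:, t], state))
    return torch.stack(outputs, dim=1)               # (batch, seq_len, dim)

# toy usage
out = decaying_fast_weights(torch.randn(2, 16, 64), torch.randn(2, 16, 64),
                            torch.randn(2, 16, 64), torch.full((64,), 0.9))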
null
null
10.18653/v1/2022.emnlp-main.697
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,827
inproceedings
bansal-etal-2022-pro
{PRO}-{CS} : An Instance-Based Prompt Composition Technique for Code-Switched Tasks
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.698/
Bansal, Srijan and Tripathi, Suraj and Agarwal, Sumit and Mitamura, Teruko and Nyberg, Eric
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10243--10255
Code-switched (CS) data is ubiquitous in today's globalized world, but the dearth of annotated datasets in code-switching poses a significant challenge for learning diverse tasks across different language pairs. Parameter-efficient prompt-tuning approaches conditioned on frozen language models have shown promise for transfer learning in limited-resource setups. In this paper, we propose PRO-CS, a novel instance-based prompt composition technique for CS tasks that combines language and task knowledge. We compare our approach with prompt-tuning and fine-tuning for code-switched tasks on 10 datasets across 4 language pairs. Our model outperforms the prompt-tuning approach by significant margins across all datasets and outperforms or remains at par with fine-tuning by using just 0.18{\%} of total parameters. We also achieve competitive results when compared with the fine-tuned model in the low-resource cross-lingual and cross-task setting, indicating the effectiveness of our approach to incorporate new code-switched tasks.
null
null
10.18653/v1/2022.emnlp-main.698
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,828
inproceedings
shen-etal-2022-sentbs
{S}ent{BS}: Sentence-level Beam Search for Controllable Summarization
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.699/
Shen, Chenhui and Cheng, Liying and Bing, Lidong and You, Yang and Si, Luo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10256--10265
A wide range of control perspectives have been explored in controllable text generation. Structure-controlled summarization is recently proposed as a useful and interesting research direction. However, current structure-controlling methods have limited effectiveness in enforcing the desired structure. To address this limitation, we propose a sentence-level beam search generation method (SentBS), where evaluation is conducted throughout the generation process to select suitable sentences for subsequent generations. We experiment with different combinations of decoding methods to be used as sub-components by SentBS and evaluate results on the structure-controlled dataset MReD. Experiments show that all explored combinations for SentBS can improve the agreement between the generated text and the desired structure, with the best method significantly reducing the structural discrepancies suffered by the existing model, by approximately 68{\%}.
null
null
10.18653/v1/2022.emnlp-main.699
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,829
inproceedings
zhao-etal-2022-fine-grained
A Fine-grained {C}hinese Software Privacy Policy Dataset for Sequence Labeling and Regulation Compliant Identification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.700/
Zhao, Kaifa and Yu, Le and Zhou, Shiyao and Li, Jing and Luo, Xiapu and Chiu, Yat Fei Aemon and Liu, Yutong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10266--10277
Privacy protection attracts great attention at both the legal level and in terms of user awareness. To protect user privacy, countries enact laws and regulations requiring software privacy policies to regulate their behavior. However, privacy policies are written in professional languages with many legal terms and software jargon that prevent users from understanding and even reading them. It is necessary and urgent to use NLP techniques to analyze privacy policies. However, existing datasets ignore law requirements and are limited to English. In this paper, we construct the first Chinese privacy policy dataset, namely CA4P-483, to facilitate the sequence labeling tasks and regulation compliance identification between privacy policies and software. Our dataset includes 483 Chinese Android application privacy policies, over 11K sentences, and 52K fine-grained annotations. We evaluate families of robust and representative baseline models on our dataset. Based on baseline performance, we provide findings and potential research directions on our dataset. Finally, we investigate the potential applications of CA4P-483 combining regulation requirements and program analysis.
null
null
10.18653/v1/2022.emnlp-main.700
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,830
inproceedings
kim-kim-2022-saving
Saving Dense Retriever from Shortcut Dependency in Conversational Search
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.701/
Kim, Sungdong and Kim, Gangwoo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10278--10287
Conversational search (CS) needs a holistic understanding of conversational inputs to retrieve relevant passages. In this paper, we demonstrate the existence of a \textit{retrieval shortcut} in CS, which causes models to retrieve passages solely relying on partial history while disregarding the latest question. With in-depth analysis, we first show that naively trained dense retrievers heavily exploit the shortcut and hence perform poorly when asked to answer history-independent questions. To build more robust models against shortcut dependency, we explore various hard negative mining strategies. Experimental results show that training with the model-based hard negatives effectively mitigates the dependency on the shortcut, significantly improving dense retrievers on recent CS benchmarks. In particular, our retriever outperforms the previous state-of-the-art model by 11.0 in Recall@10 on QReCC.
null
null
10.18653/v1/2022.emnlp-main.701
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,831
inproceedings
hong-etal-2022-graph
Graph-Induced Transformers for Efficient Multi-Hop Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.702/
Hong, Giwon and Kim, Jeonghwan and Kang, Junmo and Myaeng, Sung-Hyon
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10288--10294
A graph is a suitable data structure to represent the structural information of text. Recently, multi-hop question answering (MHQA) tasks, which require inter-paragraph/sentence linkages, have come to exploit such properties of a graph. Previous approaches to MHQA relied on leveraging the graph information along with the pre-trained language model (PLM) encoders. However, this trend exhibits the following drawbacks: (i) sample inefficiency while training in a low-resource setting; (ii) lack of reusability due to changes in the model structure or input. Our work proposes the Graph-Induced Transformer (GIT) that applies graph-derived attention patterns directly into a PLM, without the need to employ external graph modules. GIT can leverage the useful inductive bias of graphs while retaining the unperturbed Transformer structure and parameters. Our experiments on HotpotQA successfully demonstrate both the sample efficient characteristic of GIT and its capacity to replace the graph modules while preserving model performance.
null
null
10.18653/v1/2022.emnlp-main.702
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,832
inproceedings
bhargava-ng-2022-discosense
{D}isco{S}ense: Commonsense Reasoning with Discourse Connectives
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.703/
Bhargava, Prajjwal and Ng, Vincent
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10295--10310
We present DiscoSense, a benchmark for commonsense reasoning via understanding a wide variety of discourse connectives. We generate compelling distractors in DiscoSense using Conditional Adversarial Filtering, an extension of Adversarial Filtering that employs conditional generation. We show that state-of-the-art pre-trained language models struggle to perform well on DiscoSense, which makes this dataset ideal for evaluating next-generation commonsense reasoning systems.
null
null
10.18653/v1/2022.emnlp-main.703
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,833
inproceedings
fan-etal-2022-boosting
Boosting Document-Level Relation Extraction by Mining and Injecting Logical Rules
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.704/
Fan, Shengda and Mo, Shasha and Niu, Jianwei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10311--10323
Document-level relation extraction (DocRE) aims at extracting relations of all entity pairs in a document. A key challenge to DocRE lies in the complex interdependency between the relations of entity pairs. Unlike most prior efforts focusing on implicitly powerful representations, the recently proposed LogiRE (Ru et al., 2021) explicitly captures the interdependency by learning logical rules. However, LogiRE requires extra parameterized modules to reason merely after training backbones, and this disjointed optimization of backbones and extra modules may lead to sub-optimal results. In this paper, we propose MILR, a logic enhanced framework that boosts DocRE by Mining and Injecting Logical Rules. MILR first mines logical rules from annotations based on frequencies. Then in training, consistency regularization is leveraged as an auxiliary loss to penalize instances that violate mined rules. Finally, MILR infers from a global perspective based on integer programming. Compared with LogiRE, MILR does not introduce extra parameters and injects logical rules during both training and inference. Extensive experiments on two benchmarks demonstrate that MILR not only improves the relation extraction performance (1.1{\%}-3.8{\%} F1) but also makes predictions more logically consistent (over 4.5{\%} Logic). More importantly, MILR also consistently outperforms LogiRE on both counts. Code is available at https://github.com/XingYing-stack/MILR.
null
null
10.18653/v1/2022.emnlp-main.704
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,834
inproceedings
hu-etal-2022-mocha
{MOCHA}: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.705/
Hu, Zhe and Chan, Hou Pong and Huang, Lifu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10324--10334
Teaching neural models to generate narratively coherent texts is a critical problem. Recent pre-trained language models have achieved promising results, but there is still a gap between human-written texts and machine-generated outputs. In this work, we propose a novel multi-task training strategy for long text generation grounded on the cognitive theory of writing, which empowers the model to learn essential subskills needed for writing, including planning and reviewing, besides end-to-end generation. We extensively evaluate our model on three open-ended generation tasks including story generation, news article writing and argument generation. Experiments show that our model achieves better results on both few-shot and fully-supervised settings than strong baselines, and human evaluations confirm that our model can generate more coherent outputs.
null
null
10.18653/v1/2022.emnlp-main.705
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,835
inproceedings
li-etal-2022-variational-autoencoder
Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.706/
Li, Zhuang and Qu, Lizhen and Xu, Qiongkai and Wu, Tongtong and Zhan, Tianyang and Haffari, Gholamreza
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10335--10356
In this paper, we propose a variational autoencoder with disentanglement priors, VAE-Dprior, for task-specific natural language generation with none or a handful of task-specific labeled examples. In order to tackle compositional generalization across tasks, our model performs disentangled representation learning by introducing a conditional prior for the latent content space and another conditional prior for the latent label space. Both types of priors satisfy a novel property called $\epsilon$-disentangled. We show both empirically and theoretically that the novel priors can disentangle representations even without specific regularizations as in the prior work. The content prior enables directly sampling diverse content representations from the content space learned from the seen tasks, and fuse them with the representations of novel tasks for generating semantically diverse texts in the low-resource settings. Our extensive experiments demonstrate the superior performance of our model over competitive baselines in terms of i) data augmentation in continuous zero/few-shot learning, and ii) text style transfer in the few-shot setting.
null
null
10.18653/v1/2022.emnlp-main.706
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,836
inproceedings
joshi-etal-2022-cislr
{CISLR}: Corpus for {I}ndian {S}ign {L}anguage Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.707/
Joshi, Abhinav and Bhat, Ashwani and S, Pradeep and Gole, Priya and Gupta, Shashwat and Agarwal, Shreyansh and Modi, Ashutosh
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10357--10366
Indian Sign Language, though used by a diverse community, still lacks well-annotated resources for developing systems that would enable sign language processing. In recent years researchers have actively worked on sign languages like American Sign Language; however, Indian Sign Language is still far from data-driven tasks like machine translation. To address this gap, in this paper, we introduce a new dataset CISLR (Corpus for Indian Sign Language Recognition) for word-level recognition in Indian Sign Language using videos. The corpus has a large vocabulary of around 4700 words covering different topics and domains. Further, we propose a baseline model for word recognition from sign language videos. To handle the low resource problem in Indian Sign Language, the proposed model consists of a prototype-based one-shot learner that leverages resource rich American Sign Language to learn generalized features for improving predictions in Indian Sign Language. Our experiments show that gesture features learned in another sign language can help perform one-shot predictions in CISLR.
null
null
10.18653/v1/2022.emnlp-main.707
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,837
inproceedings
shen-etal-2022-mask
Mask the Correct Tokens: An Embarrassingly Simple Approach for Error Correction
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.708/
Shen, Kai and Leng, Yichong and Tan, Xu and Tang, Siliang and Zhang, Yuan and Liu, Wenjie and Lin, Edward
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10367--10380
Text error correction aims to correct the errors in text sequences such as those typed by humans or generated by speech recognition models. Previous error correction methods usually take the source (incorrect) sentence as encoder input and generate the target (correct) sentence through the decoder. Since the error rate of the incorrect sentence is usually low (e.g., 10{\%}), the correction model can only learn to correct on limited error tokens but trivially copy on most tokens (correct tokens), which harms the effective training of error correction. In this paper, we argue that the correct tokens should be better utilized to facilitate effective training and then propose a simple yet effective masking strategy to achieve this goal. Specifically, we randomly mask out a part of the correct tokens in the source sentence and let the model learn to not only correct the original error tokens but also predict the masked tokens based on their context information. Our method enjoys several advantages: 1) it alleviates trivial copy; 2) it leverages effective training signals from correct tokens; 3) it is a plug-and-play module and can be applied to different models and tasks. Experiments on spelling error correction and speech recognition error correction on Mandarin datasets and grammar error correction on English datasets with both autoregressive and non-autoregressive generation models show that our method improves the correction accuracy consistently.
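A minimal sketch of the masking strategy described above: a fraction of source tokens that are already correct (equal to the aligned target token) are replaced with a mask symbol, so the model must predict them from context rather than copy. The token-alignment assumption, ratio, and mask symbol are illustrative.

import random

def mask_correct_tokens(src_tokens, tgt_tokens, mask_token="<mask>", ratio=0.15):
    masked = []
    for s, t in zip(src_tokens, tgt_tokens):
        # Only tokens that are already correct are eligible for masking;
        # error tokens are kept so the model still learns to correct them.
        if s == t and random.random() < ratio:
            masked.append(mask_token)
        else:
            masked.append(s)
    return masked

# mask_correct_tokens("i hvae a dram".split(), "i have a dream".split())
# might return ['i', 'hvae', '<mask>', 'dram']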
null
null
10.18653/v1/2022.emnlp-main.708
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,838
inproceedings
hong-jang-2022-amal
{AMAL}: Meta Knowledge-Driven Few-Shot Adapter Learning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.709/
Hong, S. K. and Jang, Tae Young
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10381--10389
NLP has advanced greatly together with the proliferation of Transformer-based pre-trained language models. To adapt to a downstream task, the pre-trained language models need to be fine-tuned with a sufficient supply of annotated examples. In recent years, Adapter-based fine-tuning methods have expanded the applicability of pre-trained language models by substantially lowering the required amount of annotated examples. However, existing Adapter-based methods still fail to yield meaningful results in the few-shot regime where only a few annotated examples are provided. In this study, we present a meta-learning-driven low-rank adapter pooling method, called AMAL, for leveraging pre-trained language models even with just a few data points. We evaluate our method on five text classification benchmark datasets. The results show that AMAL significantly outperforms previous few-shot learning methods and achieves a new state-of-the-art.
null
null
10.18653/v1/2022.emnlp-main.709
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,839
inproceedings
ranjan-etal-2022-discourse
Discourse Context Predictability Effects in {H}indi Word Order
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.710/
Ranjan, Sidharth and van Schijndel, Marten and Agarwal, Sumeet and Rajkumar, Rajakrishnan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10390--10406
We test the hypothesis that discourse predictability influences Hindi syntactic choice. While prior work has shown that a number of factors (e.g., information status, dependency length, and syntactic surprisal) influence Hindi word order preferences, the role of discourse predictability is underexplored in the literature. Inspired by prior work on syntactic priming, we investigate how the words and syntactic structures in a sentence influence the word order of the following sentences. Specifically, we extract sentences from the Hindi-Urdu Treebank corpus (HUTB), permute the preverbal constituents of those sentences, and build a classifier to predict which sentences actually occurred in the corpus against artificially generated distractors. The classifier uses a number of discourse-based features and cognitive features to make its predictions, including dependency length, surprisal, and information status. We find that information status and LSTM-based discourse predictability influence word order choices, especially for non-canonical object-fronted orders. We conclude by situating our results within the broader syntactic priming literature.
null
null
10.18653/v1/2022.emnlp-main.710
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,840
inproceedings
kolluru-etal-2022-covid
{\textquotedblleft}Covid vaccine is against Covid but {O}xford vaccine is made at {O}xford!{\textquotedblright} Semantic Interpretation of Proper Noun Compounds
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.711/
Kolluru, Keshav and Stanovsky, Gabriel and -, Mausam
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10407--10420
Proper noun compounds, e.g., {\textquotedblleft}Covid vaccine{\textquotedblright}, convey information in a succinct manner (a {\textquotedblleft}Covid vaccine{\textquotedblright} is a {\textquotedblleft}vaccine that immunizes against the Covid disease{\textquotedblright}). These are commonly used in short-form domains, such as news headlines, but are largely ignored in information-seeking applications. To address this limitation, we release a new manually annotated dataset, ProNCI, consisting of 22.5K proper noun compounds along with their free-form semantic interpretations. ProNCI is 60 times larger than prior noun compound datasets and also includes non-compositional examples, which have not been previously explored. We experiment with various neural models for automatically generating the semantic interpretations from proper noun compounds, ranging from few-shot prompting to supervised learning, with varying degrees of knowledge about the constituent nouns. We find that adding targeted knowledge, particularly about the common noun, results in performance gains of up to 2.8{\%}. Finally, we integrate our model generated interpretations with an existing Open IE system and observe a 7.5{\%} increase in yield at a precision of 85{\%}. The dataset and code are available at https://github.com/dair-iitd/pronci.
null
null
10.18653/v1/2022.emnlp-main.711
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,841
inproceedings
kuribayashi-etal-2022-context
Context Limitations Make Neural Language Models More Human-Like
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.712/
Kuribayashi, Tatsuki and Oseki, Yohei and Brassard, Ana and Inui, Kentaro
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10421--10436
Language models (LMs) have been used in cognitive modeling as well as engineering studies{---}they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading. This study highlights a limitation of modern neural LMs as the model of choice for this purpose: there is a discrepancy between their context access capacities and that of humans. Our results showed that constraining the LMs' context access improved their simulation of human reading behavior. We also showed that LM-human gaps in context access were associated with specific syntactic constructions; incorporating syntactic biases into LMs' context access might enhance their cognitive plausibility.
null
null
10.18653/v1/2022.emnlp-main.712
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,842
inproceedings
bao-etal-2022-generative
A Generative Model for End-to-End Argument Mining with Reconstructed Positional Encoding and Constrained Pointer Mechanism
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.713/
Bao, Jianzhu and He, Yuhang and Sun, Yang and Liang, Bin and Du, Jiachen and Qin, Bing and Yang, Min and Xu, Ruifeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10437--10449
Argument mining (AM) is a challenging task as it requires recognizing the complex argumentation structures involving multiple subtasks. To handle all subtasks of AM in an end-to-end fashion, previous works generally transform AM into a dependency parsing task. However, such methods largely require complex pre- and post-processing to realize the task transformation. In this paper, we investigate the end-to-end AM task from a novel perspective by proposing a generative framework, in which the expected outputs of AM are framed as a simple target sequence. Then, we employ a pre-trained sequence-to-sequence language model with a constrained pointer mechanism (CPM) to model the clues for all the subtasks of AM in the light of the target sequence. Furthermore, we devise a reconstructed positional encoding (RPE) to alleviate the order biases induced by the autoregressive generation paradigm. Experimental results show that our proposed framework achieves new state-of-the-art performance on two AM benchmarks.
null
null
10.18653/v1/2022.emnlp-main.713
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,843
inproceedings
zhou-etal-2022-reflect
Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.714/
Zhou, Pei and Cho, Hyundong and Jandaghi, Pegah and Lee, Dong-Ho and Lin, Bill Yuchen and Pujara, Jay and Ren, Xiang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10450--10468
Human communication relies on common ground (CG), the mutual knowledge and beliefs shared by participants, to produce coherent and interesting conversations. In this paper, we demonstrate that current response generation (RG) models produce generic and dull responses in dialogues because they act reflexively, failing to explicitly model CG, both due to the lack of CG in training data and the standard RG training procedure. We introduce Reflect, a dataset that annotates dialogues with explicit CG (materialized as inferences approximating shared knowledge and beliefs) and solicits 9k diverse human-generated responses each following one common ground. Using Reflect, we showcase the limitations of current dialogue data and RG models: less than half of the responses in current data are rated as high quality (sensible, specific, and interesting) and models trained using this data have even lower quality, while most Reflect responses are judged high quality. Next, we analyze whether CG can help models produce better quality responses by using Reflect CG to guide RG models. Surprisingly, we find that simply prompting GPT3 to {\textquotedblleft}think{\textquotedblright} about CG generates 30{\%} more quality responses, showing promising benefits to integrating CG into the RG process.
null
null
10.18653/v1/2022.emnlp-main.714
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,844
inproceedings
zhao-etal-2022-floweval
{F}low{E}val: A Consensus-Based Dialogue Evaluation Framework Using Segment Act Flows
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.715/
Zhao, Jianqiao and Li, Yanyang and Du, Wanyu and Ji, Yangfeng and Yu, Dong and Lyu, Michael and Wang, Liwei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10469--10483
Despite recent progress in open-domain dialogue evaluation, how to develop automatic metrics remains an open problem. We explore the potential of dialogue evaluation featuring dialog act information, which was hardly explicitly modeled in previous methods. However, defined at the utterance level in general, dialog act is of coarse granularity, as an utterance can contain multiple segments possessing different functions. Hence, we propose segment act, an extension of dialog act from utterance level to segment level, and crowdsource a large-scale dataset for it. To utilize segment act flows, sequences of segment acts, for evaluation, we develop the first consensus-based dialogue evaluation framework, FlowEval. This framework provides a reference-free approach for dialog evaluation by finding pseudo-references. Extensive experiments against strong baselines on three benchmark datasets demonstrate the effectiveness and other desirable characteristics of our FlowEval, pointing out a potential path for better dialogue evaluation.
null
null
10.18653/v1/2022.emnlp-main.715
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,845
inproceedings
mirchandani-etal-2022-fad
{F}a{D}-{VLP}: Fashion Vision-and-Language Pre-training towards Unified Retrieval and Captioning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.716/
Mirchandani, Suvir and Yu, Licheng and Wang, Mengjiao and Sinha, Animesh and Jiang, Wenwen and Xiang, Tao and Zhang, Ning
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10484--10497
Multimodal tasks in the fashion domain have significant potential for e-commerce, but involve challenging vision-and-language learning problems{---}e.g., retrieving a fashion item given a reference image plus text feedback from a user. Prior works on multimodal fashion tasks have either been limited by the data in individual benchmarks, or have leveraged generic vision-and-language pre-training but have not taken advantage of the characteristics of fashion data. Additionally, these works have mainly been restricted to multimodal understanding tasks. To address these gaps, we make two key contributions. First, we propose a novel fashion-specific pre-training framework based on weakly-supervised triplets constructed from fashion image-text pairs. We show the triplet-based tasks are an effective addition to standard multimodal pre-training tasks. Second, we propose a flexible decoder-based model architecture capable of both fashion retrieval and captioning tasks. Together, our model design and pre-training approach are competitive on a diverse set of fashion tasks, including cross-modal retrieval, image retrieval with text feedback, image captioning, relative image captioning, and multimodal categorization.
null
null
10.18653/v1/2022.emnlp-main.716
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,846
inproceedings
han-etal-2022-mm
{MM}-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.717/
Han, Wei and Chen, Hui and Kan, Min-Yen and Poria, Soujanya
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10498--10511
Existing multimodal tasks mostly target the complete input modality setting, i.e., each modality is either complete or completely missing in both training and test sets. However, randomly missing situations are still underexplored. In this paper, we present a novel approach named MM-Align to address the missing-modality inference problem. Concretely, we propose 1) an alignment dynamics learning module based on the theory of optimal transport (OT) for missing data imputation; 2) a denoising training algorithm to enhance the quality of imputation as well as the accuracy of model predictions. Compared with previous generative methods which are devoted to restoring the missing inputs, MM-Align learns to capture and imitate the alignment dynamics between modality sequences. Results of comprehensive experiments on two multimodal tasks empirically demonstrate that our method can perform more accurate and faster inference and alleviate the overfitting issue under different missing conditions.
null
null
10.18653/v1/2022.emnlp-main.717
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,847
inproceedings
moon-etal-2022-evaluating
Evaluating the Knowledge Dependency of Questions
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.718/
Moon, Hyeongdon and Yang, Yoonseok and Yu, Hangyeol and Lee, Seunghyun and Jeong, Myeongho and Park, Juneyoung and Shin, Jamin and Kim, Minsam and Choi, Seungtaek
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10512--10526
The automatic generation of Multiple Choice Questions (MCQ) has the potential to reduce the time educators spend on student assessment significantly. However, existing evaluation metrics for MCQ generation, such as BLEU, ROUGE, and METEOR, focus on the n-gram based similarity of the generated MCQ to the gold sample in the dataset and disregard their educational value. They fail to evaluate the MCQ's ability to assess the student's knowledge of the corresponding target fact. To tackle this issue, we propose a novel automatic evaluation metric, coined Knowledge Dependent Answerability (KDA), which measures the MCQ's answerability given knowledge of the target fact. Specifically, we first show how to measure KDA based on student responses from a human survey. Then, we propose two automatic evaluation metrics, KDA{\_}disc and KDA{\_}cont, that approximate KDA by leveraging pre-trained language models to imitate students' problem-solving behavior. Through our human studies, we show that KDA{\_}disc and KDA{\_}soft have strong correlations with both (1) KDA and (2) usability in an actual classroom setting, labeled by experts. Furthermore, when combined with n-gram based similarity metrics, KDA{\_}disc and KDA{\_}cont are shown to have a strong predictive power for various expert-labeled MCQ quality measures.
null
null
10.18653/v1/2022.emnlp-main.718
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,848
inproceedings
zhao-etal-2022-mose
{M}o{SE}: Modality Split and Ensemble for Multimodal Knowledge Graph Completion
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.719/
Zhao, Yu and Cai, Xiangrui and Wu, Yike and Zhang, Haiwei and Zhang, Ying and Zhao, Guoqing and Jiang, Ning
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10527--10536
Multimodal knowledge graph completion (MKGC) aims to predict missing entities in MKGs. Previous works usually share relation representation across modalities. This results in mutual interference between modalities during training, since for a pair of entities, the relation from one modality probably contradicts that from another modality. Furthermore, making a unified prediction based on the shared relation representation treats the input in different modalities equally, while their importance to the MKGC task should be different. In this paper, we propose MoSE, a Modality Split representation learning and Ensemble inference framework for MKGC. Specifically, in the training phase, we learn modality-split relation embeddings for each modality instead of a single modality-shared one, which alleviates the modality interference. Based on these embeddings, in the inference phase, we first make modality-split predictions and then exploit various ensemble methods to combine the predictions with different weights, which models the modality importance dynamically. Experimental results on three KG datasets show that MoSE outperforms state-of-the-art MKGC methods. Codes are available at https://github.com/OreOZhao/MoSE4MKGC.
null
null
10.18653/v1/2022.emnlp-main.719
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,849
inproceedings
huang-etal-2022-entropy
Entropy-Based Vocabulary Substitution for Incremental Learning in Multilingual Neural Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.720/
Huang, Kaiyu and Li, Peng and Ma, Jin and Liu, Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10537--10550
In a practical real-world scenario, the longstanding goal is that a universal multilingual translation model can be incrementally updated when new language pairs arrive. Specifically, the initial vocabulary only covers some of the words in new languages, which hurts the translation quality for incremental learning. Although existing approaches attempt to address this issue by replacing the original vocabulary with a rebuilt vocabulary or constructing independent language-specific vocabularies, these methods cannot meet the following three demands simultaneously: (1) high translation quality for original and incremental languages, (2) low cost for model training, (3) low time overhead for preprocessing. In this work, we propose an entropy-based vocabulary substitution (EVS) method that only needs to walk through the new language pairs for incremental learning in large-scale multilingual data updates while retaining the size of the vocabulary. Our method is able to learn new knowledge from updated training samples incrementally while keeping high translation quality for original language pairs, alleviating the issue of catastrophic forgetting. Results of experiments show that EVS can achieve better performance and save excess overhead for incremental learning in the multilingual machine translation task.
null
null
10.18653/v1/2022.emnlp-main.720
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,850
inproceedings
li-etal-2022-eliciting
Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.721/
Li, Yanyang and Zhao, Jianqiao and Lyu, Michael and Wang, Liwei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10551--10564
Recent advances in large-scale pre-training provide large models with the potential to learn knowledge from the raw text. It is thus natural to ask whether it is possible to leverage these large models as knowledge bases for downstream tasks. In this work, we answer the aforementioned question in unsupervised knowledge-grounded conversation. We explore various methods that best elicit knowledge from large models. Our human study indicates that, though hallucinations exist, large models possess the unique advantage of being able to output common sense and summarize facts that cannot be directly retrieved from the search engine. To better exploit such generated knowledge in dialogue generation, we treat the generated knowledge as a noisy knowledge source and propose the posterior-based reweighing as well as the noisy training strategy. Empirical results on two benchmarks show advantages over the state-of-the-art methods.
null
null
10.18653/v1/2022.emnlp-main.721
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,851
inproceedings
goel-etal-2022-unsupervised
An Unsupervised, Geometric and Syntax-aware Quantification of Polysemy
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.722/
Goel, Anmol and Sharma, Charu and Kumaraguru, Ponnurangam
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10565--10574
Polysemy is the phenomenon where a single word form possesses two or more related senses. It is an extremely ubiquitous part of natural language and analyzing it has sparked rich discussions in the linguistics, psychology and philosophy communities alike. With scarce attention paid to polysemy in computational linguistics, and even scarcer attention toward quantifying polysemy, in this paper, we propose a novel, unsupervised framework to compute and estimate polysemy scores for words in multiple languages. We infuse our proposed quantification with syntactic knowledge in the form of dependency structures. This informs the final polysemy scores of the lexicon motivated by recent linguistic findings that suggest there is an implicit relation between syntax and ambiguity/polysemy. We adopt a graph based approach by computing the discrete Ollivier Ricci curvature on a graph of the contextual nearest neighbors. We test our framework on curated datasets controlling for different sense distributions of words in 3 typologically diverse languages - English, French and Spanish. The effectiveness of our framework is demonstrated by significant correlations of our quantification with expert human annotated language resources like WordNet. We observe a 0.3 point increase in the correlation coefficient as compared to previous quantification studies in English. Our research leverages contextual language models and syntactic structures to empirically support the widely held theoretical linguistic notion that syntax is intricately linked to ambiguity/polysemy.
null
null
10.18653/v1/2022.emnlp-main.722
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,852
inproceedings
sun-etal-2022-reorder
Reorder and then Parse, Fast and Accurate Discontinuous Constituency Parsing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.723/
Sun, Kailai and Li, Zuchao and Zhao, Hai
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10575--10588
Discontinuous constituency parsing is still under development, as its efficiency and accuracy remain far behind those of its continuous counterparts. Motivated by the observation that a discontinuous constituent tree can be simply transformed into a pseudo-continuous one by artificially reordering words in the sentence, we propose a novel reordering method and thereby construct fast and accurate discontinuous constituency parsing systems that work in a continuous way. Specifically, we model the relative position changes of words as a list of actions. By parsing and performing these actions, the corresponding pseudo-continuous sequence is derived. The discontinuous parse tree can then be inferred by integrating a high-performance pseudo-continuous constituency parser. Our systems are evaluated on three classical discontinuous constituency treebanks, achieving new state-of-the-art results on two treebanks and showing a distinct advantage in speed.
null
null
10.18653/v1/2022.emnlp-main.723
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,853
inproceedings
goldsack-etal-2022-making
Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.724/
Goldsack, Tomas and Zhang, Zhihao and Lin, Chenghua and Scarton, Carolina
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10589--10604
Lay summarisation aims to jointly summarise and simplify a given text, thus making its content more comprehensible to non-experts.Automatic approaches for lay summarisation can provide significant value in broadening access to scientific literature, enabling a greater degree of both interdisciplinary knowledge sharing and public understanding when it comes to research findings. However, current corpora for this task are limited in their size and scope, hindering the development of broadly applicable data-driven approaches. Aiming to rectify these issues, we present two novel lay summarisation datasets, PLOS (large-scale) and eLife (medium-scale), each of which contains biomedical journal articles alongside expert-written lay summaries.We provide a thorough characterisation of our lay summaries, highlighting differing levels of readability and abstractivenessbetween datasets that can be leveraged to support the needs of different applications.Finally, we benchmark our datasets using mainstream summarisation approaches and perform a manual evaluation with domain experts, demonstrating their utility and casting light on the key challenges of this task.
null
null
10.18653/v1/2022.emnlp-main.724
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,854
inproceedings
rajaee-etal-2022-looking
Looking at the Overlooked: An Analysis on the Word-Overlap Bias in Natural Language Inference
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.725/
Rajaee, Sara and Yaghoobzadeh, Yadollah and Pilehvar, Mohammad Taher
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10605--10616
It has been shown that NLI models are usually biased with respect to the word-overlap between the premise and the hypothesis, as they take this feature as a primary cue for predicting the entailment label. In this paper, we focus on an overlooked aspect of the overlap bias in the NLI models: the reverse word-overlap bias. Our experimental results demonstrate that current NLI systems are also highly biased towards the non-entailment label on instances with low overlap and that existing debiasing methods, which are reportedly successful on challenge datasets, are generally ineffective in addressing this category of bias. Through a set of analyses, we investigate the reasons for the emergence of the overlap bias and the role of minority examples in mitigating this bias. For the former, we find that the word overlap bias does not stem from pre-training, and in the latter, we observe that in contrast to the accepted assumption, eliminating minority examples does not affect the generalizability of debiasing methods with respect to the overlap bias.
null
null
10.18653/v1/2022.emnlp-main.725
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,855
inproceedings
akbartajari-etal-2022-empirical
An Empirical Study on the Transferability of Transformer Modules in Parameter-efficient Fine-tuning
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.726/
AkbarTajari, Mohammad and Rajaee, Sara and Pilehvar, Mohammad Taher
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10617--10625
Parameter-efficient fine-tuning has garnered lots of attention in recent studies. On this subject, we investigate the capability of different transformer modules in transferring knowledge from a pre-trained model to a downstream task. Our empirical results suggest that every transformer module is a winning ticket such that fine-tuning the specific module while the rest of the network is frozen achieves a comparable performance to the full fine-tuning case. Among different modules in LMs, LayerNorms exhibit a significant capacity for transfer learning to the extent that with only 0.003{\%} updateable parameters in the layer-wise analysis, they can show acceptable performance on various target tasks. We argue that the performance of LayerNorms could be attributed to their high-magnitude weights compared to other components in a pre-trained model.
null
null
10.18653/v1/2022.emnlp-main.726
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,856
inproceedings
zerveas-etal-2022-coder
{CODER}: An efficient framework for improving retrieval through {CO}ntextual Document Embedding Reranking
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.727/
Zerveas, George and Rekabsaz, Navid and Cohen, Daniel and Eickhoff, Carsten
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10626--10644
Contrastive learning has been the dominant approach to training dense retrieval models. In this work, we investigate the impact of ranking context - an often overlooked aspect of learning dense retrieval models. In particular, we examine the effect of its constituent parts: jointly scoring a large number of negatives per query, using retrieved (query-specific) instead of random negatives, and a fully list-wise loss. To incorporate these factors into training, we introduce Contextual Document Embedding Reranking (CODER), a highly efficient retrieval framework. When reranking, it incurs only a negligible computational overhead on top of a first-stage method at run time (approx. 5 ms delay per query), allowing it to be easily combined with any state-of-the-art dual encoder method. Models trained through CODER can also be used as stand-alone retrievers. Evaluating CODER in a large set of experiments on the MS MARCO and TripClick collections, we show that the contextual reranking of precomputed document embeddings leads to a significant improvement in retrieval performance. This improvement becomes even more pronounced when more relevance information per query is available, shown in the TripClick collection, where we establish new state-of-the-art results by a large margin.
null
null
10.18653/v1/2022.emnlp-main.727
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,857
inproceedings
chen-etal-2022-adaptershare
{A}dapter{S}hare: Task Correlation Modeling with Adapter Differentiation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.728/
Chen, Zhi and Chen, Bei and Chen, Lu and Yu, Kai and Lou, Jian-Guang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10645--10651
Thanks to the development of pre-trained language models, multitask learning (MTL) methods achieve great success in the natural language understanding area. However, current MTL methods pay more attention to task selection or model design to fuse as much knowledge as possible, while intrinsic task correlation is often neglected. It is important to learn a sharing strategy among multiple tasks rather than sharing everything. In this paper, we propose AdapterShare, an adapter differentiation method to explicitly model the task correlation among multiple tasks. AdapterShare is automatically learned based on the gradients on tiny held-out validation data. Compared to single-task learning and fully shared MTL methods, our proposed method obtains obvious performance improvement. Compared to the existing MTL method AdapterFusion, AdapterShare achieves an absolute 1.90-point average improvement on five dialogue understanding tasks and a 2.33-point gain on NLU tasks.
null
null
10.18653/v1/2022.emnlp-main.728
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,858
inproceedings
liu-etal-2022-rethinking-task
Rethinking Task-Specific Knowledge Distillation: Contextualized Corpus as Better Textbook
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.729/
Liu, Chang and Tao, Chongyang and Liang, Jianxin and Shen, Tao and Feng, Jiazhan and Huang, Quzhe and Zhao, Dongyan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10652--10658
Knowledge distillation has been proven effective when customizing small language models for specific tasks. Here, a corpus as {\textquoteleft}textbook' plays an indispensable role, only through which the teacher can teach the student. Prevailing methods adopt a two-stage distillation paradigm: general distillation first with task-agnostic general corpus and task-specific distillation next with augmented task-specific corpus. We argue that such a paradigm may not be optimal. In general distillation, it is extravagant to let the diverse but desultory general knowledge overwhelm the limited model capacity of the student. While in task-specific distillation, the task corpus is usually limited and narrow, preventing the student from learning enough knowledge. To mitigate the issues in the two gapped corpora, we present a better textbook for the student to learn: contextualized corpus that contextualizes task corpus with large-scale general corpus through relevance-based text retrieval. Experimental results on GLUE benchmark demonstrate that contextualized corpus is the better textbook compared with jointly using general corpus and augmented task-specific corpus. Surprisingly, it enables task-specific distillation from scratch without general distillation while maintaining comparable performance, making it more flexible to customize the student model with desired model size under various computation constraints.
null
null
10.18653/v1/2022.emnlp-main.729
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,859
inproceedings
shen-etal-2022-recovering
Recovering Gold from Black Sand: Multilingual Dense Passage Retrieval with Hard and False Negative Samples
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.730/
Shen, Tianhao and Liu, Mingtong and Zhou, Ming and Xiong, Deyi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10659--10670
Negative samples have not been efficiently explored in multilingual dense passage retrieval. In this paper, we propose a novel multilingual dense passage retrieval framework, mHFN, to recover and utilize hard and false negative samples. mHFN consists of three key components: 1) a multilingual hard negative sample augmentation module that allows knowledge of indistinguishable passages to be shared across multiple languages and synthesizes new hard negative samples by interpolating representations of queries and existing hard negative samples, 2) a multilingual negative sample cache queue that stores negative samples from previous batches in each language to increase the number of multilingual negative samples used in training beyond the batch size limit, and 3) a lightweight adaptive false negative sample filter that uses generated pseudo labels to separate unlabeled false negative samples and converts them into positive passages in training. We evaluate mHFN on Mr. TyDi, a high-quality multilingual dense passage retrieval dataset covering eleven typologically diverse languages, and experimental results show that mHFN outperforms strong sparse, dense and hybrid baselines and achieves new state-of-the-art performance on all languages. Our source code is available at https://github.com/Magnetic2014/mHFN.
null
null
10.18653/v1/2022.emnlp-main.730
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,860
inproceedings
plank-2022-problem
The {\textquotedblleft}Problem{\textquotedblright} of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.731/
Plank, Barbara
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10671--10682
Human variation in labeling is often considered noise. Annotation projects for machine learning (ML) aim at minimizing human label variation, with the assumption to maximize data quality and in turn optimize and maximize machine learning metrics. However, this conventional practice assumes that there exists a *ground truth*, and neglects that there exists genuine human variation in labeling due to disagreement, subjectivity in annotation or multiple plausible answers. In this position paper, we argue that this big open problem of \textit{human label variation} persists and critically needs more attention to move our field forward. This is because human label variation impacts all stages of the ML pipeline: *data, modeling and evaluation*. However, few works consider all of these dimensions jointly; and existing research is fragmented. We reconcile different previously proposed notions of human label variation, provide a repository of publicly-available datasets with un-aggregated labels, depict approaches proposed so far, identify gaps and suggest ways forward. As datasets are becoming increasingly available, we hope that this synthesized view on the {\textquotedblleft}problem{\textquotedblright} will lead to an open discussion on possible strategies to devise fundamentally new directions.
null
null
10.18653/v1/2022.emnlp-main.731
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,861
inproceedings
jain-etal-2022-quality
Quality Scoring of Source Words in Neural Translation Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.732/
Jain, Priyesh and Sarawagi, Sunita and Tomar, Tushar
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10683--10691
Word-level quality scores on input source sentences can provide useful feedback to an end-user when translating into an unfamiliar target language. Recent approaches either require training special word-scoring models based on synthetic data or require repeated invocation of the translation model. We propose a simple approach based on comparing the difference of probabilities from two language models. The basic premise of our method is to reason how well each source word is explained by the target sentence as against the source language model. Our approach provides up to five points higher F1 scores and is significantly faster than the state of the art methods on three language pairs. Also, our method does not require training any new model. We release a public dataset on word omissions and mistranslations on a new language pair.
null
null
10.18653/v1/2022.emnlp-main.732
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,862
inproceedings
lee-etal-2022-pneg
Pneg: Prompt-based Negative Response Generation for Dialogue Response Selection Task
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.733/
Lee, Nyoungwoo and Park, ChaeHun and Choi, Ho-Jin and Choo, Jaegul
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10692--10703
In retrieval-based dialogue systems, a response selection model acts as a ranker to select the most appropriate response among several candidates. However, such selection models tend to rely on context-response content similarity, which makes models vulnerable to adversarial responses that are semantically similar but not relevant to the dialogue context. Recent studies have shown that leveraging these adversarial responses as negative training samples is useful for improving the discriminating power of the selection model. Nevertheless, collecting human-written adversarial responses is expensive, and existing synthesizing methods often have limited scalability. To overcome these limitations, this paper proposes a simple but efficient method for generating adversarial negative responses leveraging a large-scale language model. Experimental results on dialogue selection tasks show that our method outperforms other methods of synthesizing adversarial negative responses. These results suggest that our method can be an effective alternative to human annotators in generating adversarial responses. Our code and dataset will be released if the paper is accepted.
null
null
10.18653/v1/2022.emnlp-main.733
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,863
inproceedings
long-webber-2022-facilitating
Facilitating Contrastive Learning of Discourse Relational Senses by Exploiting the Hierarchy of Sense Relations
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.734/
Long, Wanqiu and Webber, Bonnie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10704--10716
Implicit discourse relation recognition is a challenging task that involves identifying the sense or senses that hold between two adjacent spans of text, in the absence of an explicit connective between them. In both PDTB-2 (Prasad et al., 2008) and PDTB-3 (Webber et al., 2019), discourse relational senses are organized into a three-level hierarchy ranging from four broad top-level senses, to more specific senses below them. Most previous work on implicit discourse relation recognition has used the sense hierarchy simply to indicate what sense labels were available. Here we do more {---} incorporating the sense hierarchy into the recognition process itself and using it to select the negative examples used in contrastive learning. With no additional effort, the approach achieves state-of-the-art performance on the task. Our code is released at https://github.com/wanqiulong0923/Contrastive{\_}IDRR.
null
null
10.18653/v1/2022.emnlp-main.734
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,864
inproceedings
zheng-etal-2022-simplified
Simplified Graph Learning for Inductive Short Text Classification
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.735/
Zheng, Kaixin and Wang, Yaqing and Yao, Quanming and Dou, Dejing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10717--10724
Short text classification (STC) is hard because short texts lack context information and labeled data is scarce. Graph neural networks obtain the state of the art on STC since they can merge various auxiliary information via the message passing framework. However, existing works conduct transductive learning, which requires retraining to accommodate new samples and consumes large amounts of memory. In this paper, we present SimpleSTC, which handles the inductive STC problem while only leveraging words. We construct a word graph from an external large corpus to compensate for the lack of semantic information, and learn a text graph to handle the lack of labeled data. Results show that SimpleSTC obtains state-of-the-art performance with lower memory consumption and faster inference speed.
null
null
10.18653/v1/2022.emnlp-main.735
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,865
inproceedings
schmidt-etal-2022-dont
Don't Stop Fine-Tuning: On Training Regimes for Few-Shot Cross-Lingual Transfer with Multilingual Language Models
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.736/
Schmidt, Fabian David and Vuli{\'c}, Ivan and Glava{\v{s}}, Goran
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10725--10742
A large body of recent work highlights the fallacies of zero-shot cross-lingual transfer (ZS-XLT) with large multilingual language models. Namely, their performance varies substantially for different target languages and is the weakest where needed the most: for low-resource languages distant from the source language. One remedy is few-shot transfer (FS-XLT), where leveraging only a few task-annotated instances in the target language(s) may yield sizable performance gains. However, FS-XLT also succumbs to large variation, as models easily overfit to the small datasets. In this work, we present a systematic study focused on a spectrum of FS-XLT fine-tuning regimes, analyzing key properties such as effectiveness, (in)stability, and modularity. We conduct extensive experiments on both higher-level (NLI, paraphrasing) and lower-level tasks (NER, POS), presenting new FS-XLT strategies that yield both improved and more stable FS-XLT across the board. Our findings challenge established FS-XLT methods: e.g., we propose to replace sequential fine-tuning with joint fine-tuning on source and target language instances, offering consistent gains with different numbers of shots (including resource-rich scenarios). We also show that further gains can be achieved with multi-stage FS-XLT training in which joint multilingual fine-tuning precedes the bilingual source-target specialization.
null
null
10.18653/v1/2022.emnlp-main.736
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,866
inproceedings
han-etal-2022-towards
Towards Compositional Generalization in Code Search
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.737/
Han, Hojae and Hwang, Seung-won and Lu, Shuai and Duan, Nan and Choi, Seungtaek
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10743--10750
We study compositional generalization, which aims to generalize on unseen combinations of seen structural elements, for code search. Unlike existing approaches of partially pursuing this goal, we study how to extract structural elements, which we name a template that directly targets compositional generalization. Thus we propose CTBERT, or Code Template BERT, representing codes using automatically extracted templates as building blocks. We empirically validate CTBERT on two public code search benchmarks, AdvTest and CSN. Further, we show that templates are complementary to data flow graphs in GraphCodeBERT, by enhancing structural context around variables.
null
null
10.18653/v1/2022.emnlp-main.737
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,867
inproceedings
wu-etal-2022-towards-relation
Towards relation extraction from speech
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.738/
Wu, Tongtong and Wang, Guitao and Zhao, Jinming and Liu, Zhaoran and Qi, Guilin and Li, Yuan-Fang and Haffari, Gholamreza
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10751--10762
Relation extraction typically aims to extract semantic relationships between entities from the unstructured text. One of the most essential data sources for relation extraction is the spoken language, such as interviews and dialogues. However, the error propagation introduced in automatic speech recognition (ASR) has been ignored in relation extraction, and the end-to-end speech-based relation extraction method has been rarely explored. In this paper, we propose a new listening information extraction task, i.e., speech relation extraction. We construct the training dataset for speech relation extraction via text-to-speech systems, and we construct the testing dataset via crowd-sourcing with native English speakers. We explore speech relation extraction via two approaches: the pipeline approach conducting text-based extraction with a pretrained ASR module, and the end2end approach via a new proposed encoder-decoder model, or what we called SpeechRE. We conduct comprehensive experiments to distinguish the challenges in speech relation extraction, which may shed light on future explorations. We share the code and data on https://github.com/wutong8023/SpeechRE.
null
null
10.18653/v1/2022.emnlp-main.738
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,868
inproceedings
raghu-etal-2022-structural
Structural Constraints and Natural Language Inference for End-to-End Flowchart Grounded Dialog Response Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.739/
Raghu, Dinesh and Joshi, Suraj and Joshi, Sachindra and -, Mausam
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10763--10774
Flowchart grounded dialog systems converse with users by following a given flowchart and a corpus of FAQs. The existing state-of-the-art approach (Raghu et al., 2021) for learning such a dialog system, named FLONET, has two main limitations. (1) It uses a Retrieval Augmented Generation (RAG) framework which represents a flowchart as a bag of nodes. By doing so, it loses the connectivity structure between nodes that can aid in better response generation. (2) Typically dialogs progress with the agent asking polar (Y/N) questions, but users often respond indirectly without the explicit use of polar words. In such cases, it fails to understand the correct polarity of the answer. To overcome these issues, we propose Structure-Aware FLONET (SA-FLONET) which infuses structural constraints derived from the connectivity structure of flowcharts into the RAG framework. It uses natural language inference to better predict the polarity of indirect Y/N answers. We find that SA-FLONET outperforms FLONET, with a success rate improvement of 68{\%} and 123{\%} in flowchart grounded response generation and zero-shot flowchart grounded response generation tasks respectively.
null
null
10.18653/v1/2022.emnlp-main.739
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,869
inproceedings
schmidt-etal-2022-slicer
{SLICER}: Sliced Fine-Tuning for Low-Resource Cross-Lingual Transfer for Named Entity Recognition
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.740/
Schmidt, Fabian David and Vuli{\'c}, Ivan and Glava{\v{s}}, Goran
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10775--10785
Large multilingual language models generally demonstrate impressive results in zero-shot cross-lingual transfer, yet often fail to successfully transfer to low-resource languages, even for token-level prediction tasks like named entity recognition (NER). In this work, we introduce a simple yet highly effective approach for improving zero-shot transfer for NER to low-resource languages. We observe that NER fine-tuning in the source language decontextualizes token representations, i.e., tokens increasingly attend to themselves. This increased reliance on token information itself, we hypothesize, triggers a type of overfitting to properties that NE tokens within the source languages share, but are generally not present in NE mentions of target languages. As a remedy, we propose a simple yet very effective sliced fine-tuning for NER (SLICER) that forces stronger token contextualization in the Transformer: we divide the transformed token representations and classifier into disjoint slices that are then independently classified during training. We evaluate SLICER on two standard benchmarks for NER that involve low-resource languages, WikiANN and MasakhaNER, and show that it (i) indeed reduces decontextualization (i.e., extent to which NE tokens attend to themselves), consequently (ii) yielding consistent transfer gains, especially prominent for low-resource target languages distant from the source language.
null
null
10.18653/v1/2022.emnlp-main.740
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,870
inproceedings
ge-etal-2022-edgeformer
{E}dge{F}ormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.741/
Ge, Tao and Chen, Si-Qing and Wei, Furu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10786--10798
We introduce EdgeFormer {--} a parameter-efficient Transformer for on-device seq2seq generation under the strict computation and memory constraints. Compared with the previous parameter-efficient Transformers, EdgeFormer applies two novel principles for cost-effective parameterization, allowing it to perform better given the same parameter budget; moreover, EdgeFormer is further enhanced by layer adaptation innovation that is proposed for improving the network with shared layers. Extensive experiments show EdgeFormer can effectively outperform previous parameter-efficient Transformer baselines and achieve competitive results under both the computation and memory constraints. Given the promising results, we release EdgeLM {--} the pretrained version of EdgeFormer, which is the first publicly available pretrained on-device seq2seq model that can be easily fine-tuned for seq2seq tasks with strong results, facilitating on-device seq2seq generation in practice.
null
null
10.18653/v1/2022.emnlp-main.741
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,871
inproceedings
chen-etal-2022-end
End-to-End Unsupervised Vision-and-Language Pre-training with Referring Expression Matching
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.742/
Chen, Chi and Li, Peng and Sun, Maosong and Liu, Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10799--10810
Recently there has been an emerging interest in unsupervised vision-and-language pre-training (VLP) that learns multimodal representations without parallel image-caption data. These pioneering works significantly reduce the cost of VLP on data collection and achieve promising results compared to supervised VLP. However, existing unsupervised VLP methods take as input pre-extracted region-based visual features from external object detectors, which both limits flexibility and reduces computational efficiency. In this paper, we explore end-to-end unsupervised VLP with a vision encoder to directly encode images. The vision encoder is pre-trained on image-only data and jointly optimized during multimodal pre-training. To further enhance the learned cross-modal features, we propose a novel pre-training task that predicts which patches contain an object referred to in natural language from the encoded visual features. Extensive experiments on four vision-and-language tasks show that our approach outperforms previous unsupervised VLP methods and obtains new state-of-the-art results.
null
null
10.18653/v1/2022.emnlp-main.742
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,872
inproceedings
aglionby-teufel-2022-faithful
Faithful Knowledge Graph Explanations in Commonsense Question Answering
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.743/
Aglionby, Guy and Teufel, Simone
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10811--10817
Knowledge graphs are commonly used as sources of information in commonsense question answering, and can also be used to express explanations for the model's answer choice. A common way of incorporating facts from the graph is to encode them separately from the question, and then combine the two representations to select an answer. In this paper, we argue that highly faithful graph-based explanations cannot be extracted from existing models of this type. Such explanations will not include reasoning done by the transformer encoding the question, so will be incomplete. We confirm this theory with a novel proxy measure for faithfulness and propose two architecture changes to address the problem. Our findings suggest a path forward for developing architectures for faithful graph-based explanations.
null
null
10.18653/v1/2022.emnlp-main.743
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,873
inproceedings
jeong-etal-2022-kold
{KOLD}: {K}orean Offensive Language Dataset
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.744/
Jeong, Younghoon and Oh, Juhyun and Lee, Jongwon and Ahn, Jaimeen and Moon, Jihyung and Park, Sungjoon and Oh, Alice
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10818--10833
Recent directions for offensive language detection are hierarchical modeling, identifying the type and the target of offensive language, and interpretability with offensive span annotation and prediction. These improvements are focused on English and do not transfer well to other languages because of cultural and linguistic differences. In this paper, we present the Korean Offensive Language Dataset (KOLD) comprising 40,429 comments, which are annotated hierarchically with the type and the target of offensive language, accompanied by annotations of the corresponding text spans. We collect the comments from NAVER news and YouTube platform and provide the titles of the articles and videos as the context information for the annotation process. We use these annotated comments as training data for Korean BERT and RoBERTa models and find that they are effective at offensiveness detection, target classification, and target span detection while having room for improvement for target group classification and offensive span detection. We discover that the target group distribution differs drastically from the existing English datasets, and observe that providing the context information improves the model performance in offensiveness detection (+0.3), target classification (+1.5), and target group classification (+13.1). We publicly release the dataset and baseline models.
null
null
10.18653/v1/2022.emnlp-main.744
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,874
inproceedings
li-etal-2022-evade
Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.745/
Li, Wenhao and Yi, Xiaoyuan and Hu, Jinyi and Sun, Maosong and Xie, Xing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10834--10858
Recently, powerful Transformer architectures have proven superior in generating high-quality sentences. Nevertheless, these models tend to produce dull high-frequency phrases, severely hurting the diversity and novelty of generated text. In this work, we dig into the intrinsic mechanism of this problem and find that sparser attention values in Transformer could improve diversity. To understand such a phenomenon, we first conduct both empirical and theoretical analysis and then attribute it to representation degeneration caused by the attentive mixture of the hidden states during training. We term this process the Trap of Mediocrity. To escape from such a trap, we introduce a novel attention regularization loss to control the sharpness of the attention distribution, which is transparent to model structures and can be easily implemented within 20 lines of python code. We prove that this method could be mathematically regarded as learning a Bayesian approximation of posterior attention. Experiments show that our method improves the diversity and novelty of the generated text while maintaining comparable quality on a variety of conditional and unconditional generation tasks.
null
null
10.18653/v1/2022.emnlp-main.745
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,875
inproceedings
weissweiler-etal-2022-better
The better your Syntax, the better your Semantics? Probing Pretrained Language Models for the {E}nglish Comparative Correlative
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.746/
Weissweiler, Leonie and Hofmann, Valentin and K{\"o}ksal, Abdullatif and Sch{\"u}tze, Hinrich
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10859--10882
Construction Grammar (CxG) is a paradigm from cognitive linguistics emphasising the connection between syntax and semantics. Rather than rules that operate on lexical items, it posits constructions as the central building blocks of language, i.e., linguistic units of different granularity that combine syntax and semantics. As a first step towards assessing the compatibility of CxG with the syntactic and semantic knowledge demonstrated by state-of-the-art pretrained language models (PLMs), we present an investigation of their capability to classify and understand one of the most commonly studied constructions, the English comparative correlative (CC). We conduct experiments examining the classification accuracy of a syntactic probe on the one hand and the models' behaviour in a semantic application task on the other, with BERT, RoBERTa, and DeBERTa as the example PLMs. Our results show that all three investigated PLMs are able to recognise the structure of the CC but fail to use its meaning. While human-like performance of PLMs on many NLP tasks has been alleged, this indicates that PLMs still suffer from substantial shortcomings in central domains of linguistic knowledge.
null
null
10.18653/v1/2022.emnlp-main.746
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,876
inproceedings
fei-etal-2022-proofinfer
{P}roof{I}nfer: Generating Proof via Iterative Hierarchical Inference
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.747/
Fei, Zichu and Zhang, Qi and Zhou, Xin and Gui, Tao and Huang, Xuanjing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10883--10892
Proof generation focuses on deductive reasoning: given a hypothesis and a set of theories, including some supporting facts and logical rules expressed in natural language, the model generates a proof tree indicating how to deduce the hypothesis from given theories. Current models with state-of-the-art performance employ the stepwise method that adds an individual node to the proof step-by-step. However, these methods actually focus on generating several proof paths rather than a whole tree. During generation, they focus on the most relevant areas of the currently generated node while neglecting the rest of the proof tree. To address this problem, we propose ProofInfer, which generates the proof tree via iterative hierarchical inference. At each step, ProofInfer adds the entire layer to the proof, where all nodes in this layer are generated simultaneously. Since the conventional autoregressive generation architecture cannot simultaneously predict multiple nodes, ProofInfer employs the text-to-text paradigm. To this end, we propose a divide-and-conquer algorithm to encode the proof tree as plain text without losing structure information. Experimental results show that ProofInfer significantly improves performance on several widely-used datasets. In addition, ProofInfer still performs well in data-limited settings, achieving comparable performance to the state-of-the-art model with about 40{\%} of the training data.
null
null
10.18653/v1/2022.emnlp-main.747
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,877
inproceedings
mukherjee-etal-2022-ectsum
{ECTS}um: A New Benchmark Dataset For Bullet Point Summarization of Long Earnings Call Transcripts
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.748/
Mukherjee, Rajdeep and Bohra, Abhinav and Banerjee, Akash and Sharma, Soumya and Hegde, Manjunath and Shaikh, Afreen and Shrivastava, Shivani and Dasgupta, Koustuv and Ganguly, Niloy and Ghosh, Saptarshi and Goyal, Pawan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10893--10906
Despite tremendous progress in automatic summarization, state-of-the-art methods are predominantly trained to excel in summarizing short newswire articles, or documents with strong layout biases such as scientific articles or government reports. Efficient techniques to summarize financial documents, discussing facts and figures, have largely been unexplored, majorly due to the unavailability of suitable datasets. In this work, we present ECTSum, a new dataset with transcripts of earnings calls (ECTs), hosted by publicly traded companies, as documents, and experts-written short telegram-style bullet point summaries derived from corresponding Reuters articles. ECTs are long unstructured documents without any prescribed length limit or format. We benchmark our dataset with state-of-the-art summarization methods across various metrics evaluating the content quality and factual consistency of the generated summaries. Finally, we present a simple yet effective approach, ECT-BPS, to generate a set of bullet points that precisely capture the important facts discussed in the calls.
null
null
10.18653/v1/2022.emnlp-main.748
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,878
inproceedings
bai-etal-2022-cross
Cross-domain Generalization for {AMR} Parsing
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.749/
Bai, Xuefeng and Yang, Sen and Cui, Leyang and Song, Linfeng and Zhang, Yue
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10907--10921
Abstract Meaning Representation (AMR) parsing aims to predict an AMR graph from textual input. Recently, there has been notable growth in AMR parsing performance. However, most existing work focuses on improving the performance in the specific domain, ignoring the potential domain dependence of AMR parsing systems. To address this, we extensively evaluate five representative AMR parsers on five domains and analyze challenges to cross-domain AMR parsing. We observe that challenges to cross-domain AMR parsing mainly arise from the distribution shift of words and AMR concepts. Based on our observation, we investigate two approaches to reduce the domain distribution divergence of text and AMR features, respectively. Experimental results on two out-of-domain test sets show the superiority of our method.
null
null
10.18653/v1/2022.emnlp-main.749
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,879
inproceedings
mao-etal-2022-citesum
{C}ite{S}um: Citation Text-guided Scientific Extreme Summarization and Domain Adaptation with Limited Supervision
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.750/
Mao, Yuning and Zhong, Ming and Han, Jiawei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10922--10935
Scientific extreme summarization (TLDR) aims to form ultra-short summaries of scientific papers. Previous efforts on curating scientific TLDR datasets failed to scale up due to the heavy human annotation and domain expertise required. In this paper, we propose a simple yet effective approach to automatically extracting TLDR summaries for scientific papers from their citation texts. Based on the proposed approach, we create a new benchmark CiteSum without human annotation, which is around 30 times larger than the previous human-curated dataset SciTLDR. We conduct a comprehensive analysis of CiteSum, examining its data characteristics and establishing strong baselines. We further demonstrate the usefulness of CiteSum by adapting models pre-trained on CiteSum (named CITES) to new tasks and domains with limited supervision. For scientific extreme summarization, CITES outperforms most fully-supervised methods on SciTLDR without any fine-tuning and obtains state-of-the-art results with only 128 examples. For news extreme summarization, CITES achieves significant gains on XSum over its base model (not pre-trained on CiteSum), e.g., +7.2 ROUGE-1 zero-shot performance and state-of-the-art few-shot performance. For news headline generation, CITES performs the best among unsupervised and zero-shot methods on Gigaword.
null
null
10.18653/v1/2022.emnlp-main.750
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,880
inproceedings
albalak-etal-2022-feta
{FETA}: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.751/
Albalak, Alon and Tuan, Yi-Lin and Jandaghi, Pegah and Pryor, Connor and Yoffe, Luke and Ramachandran, Deepak and Getoor, Lise and Pujara, Jay and Wang, William Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10936--10953
Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models. Dialogue understanding encompasses many diverse tasks, yet task transfer has not been thoroughly studied in conversational AI. This work explores conversational task transfer by introducing FETA: a benchmark for FEw-sample TAsk transfer in open-domain dialogue. FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer; task transfer without domain adaptation. We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs and create a baseline for future work. We run experiments in the single- and multi-source settings and report valuable findings, e.g., most performance trends are model-specific, and span extraction and multiple-choice tasks benefit the most from task transfer. In addition to task transfer, FETA can be a valuable resource for future research into the efficiency and generalizability of pre-training datasets and model architectures, as well as for learning settings such as continual and multitask learning.
null
null
10.18653/v1/2022.emnlp-main.751
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,881
inproceedings
romero-razniewski-2022-children
Do Children Texts Hold The Key To Commonsense Knowledge?
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.752/
Romero, Julien and Razniewski, Simon
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10954--10959
Compiling comprehensive repositories of commonsense knowledge is a long-standing problem in AI. Many concerns revolve around the issue of reporting bias, i.e., that frequency in text sources is not a good proxy for relevance or truth. This paper explores whether children's texts hold the key to commonsense knowledge compilation, based on the hypothesis that such content makes fewer assumptions on the reader's knowledge, and therefore spells out commonsense more explicitly. An analysis with several corpora shows that children's texts indeed contain much more, and more typical commonsense assertions. Moreover, experiments show that this advantage can be leveraged in popular language-model-based commonsense knowledge extraction settings, where task-unspecific fine-tuning on small amounts of children texts (childBERT) already yields significant improvements. This provides a refreshing perspective different from the common trend of deriving progress from ever larger models and corpora.
null
null
10.18653/v1/2022.emnlp-main.752
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,882
inproceedings
deutsch-etal-2022-limitations
On the Limitations of Reference-Free Evaluations of Generated Text
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.753/
Deutsch, Daniel and Dror, Rotem and Roth, Dan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10960--10977
There is significant interest in developing evaluation metrics which accurately estimate the quality of generated text without the aid of a human-written reference text, which can be time consuming and expensive to collect or entirely unavailable in online applications. However, in this work, we demonstrate that these reference-free metrics are inherently biased and limited in their ability to evaluate generated text, and we argue that they should not be used to measure progress on tasks like machine translation or summarization. We show how reference-free metrics are equivalent to using one generation model to evaluate another, which has several limitations: (1) the metrics can be optimized at test time to find the approximate best-possible output, (2) they are inherently biased toward models which are more similar to their own, and (3) they can be biased against higher-quality outputs, including those written by humans. Therefore, we recommend that reference-free metrics should be used as diagnostic tools for analyzing and understanding model behavior instead of measures of how well models perform a task, in which the goal is to achieve as high of a score as possible.
null
null
10.18653/v1/2022.emnlp-main.753
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,883
inproceedings
eikema-aziz-2022-sampling
Sampling-Based Approximations to Minimum {B}ayes Risk Decoding for Neural Machine Translation
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.754/
Eikema, Bryan and Aziz, Wilker
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10978--10993
In NMT we search for the mode of the model distribution to form predictions. The mode and other high-probability translations found by beam search have been shown to often be inadequate in a number of ways. This prevents improving translation quality through better search, as these idiosyncratic translations end up selected by the decoding algorithm, a problem known as the beam search curse. Recently, an approximation to minimum Bayes risk (MBR) decoding has been proposed as an alternative decision rule that would likely not suffer from the same problems. We analyse this approximation and establish that it has no equivalent to the beam search curse. We then design approximations that decouple the cost of exploration from the cost of robust estimation of expected utility. This allows for much larger hypothesis spaces, which we show to be beneficial. We also show that mode-seeking strategies can aid in constructing compact sets of promising hypotheses and that MBR is effective in identifying good translations in them. We conduct experiments on three language pairs varying in amounts of resources available: English into and from German, Romanian, and Nepali.
null
null
10.18653/v1/2022.emnlp-main.754
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,884
inproceedings
aggarwal-etal-2022-indicxnli
{I}ndic{XNLI}: Evaluating Multilingual Inference for {I}ndian Languages
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.755/
Aggarwal, Divyanshu and Gupta, Vivek and Kunchukuttan, Anoop
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
10994--11006
While Indic NLP has made rapid advances recently in terms of the availability of corpora and pre-trained models, benchmark datasets on standard NLU tasks are limited. To this end, we introduce INDICXNLI, an NLI dataset for 11 Indic languages. It has been created by high-quality machine translation of the original English XNLI dataset, and our analysis attests to the quality of INDICXNLI. By fine-tuning different pre-trained LMs on INDICXNLI, we analyze various cross-lingual transfer techniques with respect to the choice of language models, languages, multilinguality, mixed-language input, etc. These experiments provide us with useful insights into the behaviour of pre-trained models for a diverse set of languages.
null
null
10.18653/v1/2022.emnlp-main.755
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,885
inproceedings
varshney-baral-2022-model
Model Cascading: Towards Jointly Improving Efficiency and Accuracy of {NLP} Systems
Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue
dec
2022
Abu Dhabi, United Arab Emirates
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-main.756/
Varshney, Neeraj and Baral, Chitta
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
11007--11021
Do all instances need inference through the big models for a correct prediction? Perhaps not; some instances are easy and can be answered correctly even by small-capacity models. This provides opportunities for improving the computational efficiency of systems. In this work, we present an exploratory study of 'model cascading', a simple technique that utilizes a collection of models of varying capacities to accurately yet efficiently output predictions. Through comprehensive experiments in multiple task settings that differ in the number of models available for cascading (the K value), we show that cascading improves both computational efficiency and prediction accuracy. For instance, in the K=3 setting, cascading saves up to 88.93% of the computation cost and consistently achieves superior prediction accuracy, with an improvement of up to 2.18%. We also study the impact of introducing additional models in the cascade and show that it further increases the efficiency improvements. Finally, we hope that our work will facilitate the development of efficient NLP systems, making their widespread adoption in real-world applications possible.
null
null
10.18653/v1/2022.emnlp-main.756
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
27,886