entry_type
stringclasses 4
values | citation_key
stringlengths 10
110
| title
stringlengths 6
276
⌀ | editor
stringclasses 723
values | month
stringclasses 69
values | year
stringdate 1963-01-01 00:00:00
2022-01-01 00:00:00
| address
stringclasses 202
values | publisher
stringclasses 41
values | url
stringlengths 34
62
| author
stringlengths 6
2.07k
⌀ | booktitle
stringclasses 861
values | pages
stringlengths 1
12
⌀ | abstract
stringlengths 302
2.4k
| journal
stringclasses 5
values | volume
stringclasses 24
values | doi
stringlengths 20
39
⌀ | n
stringclasses 3
values | wer
stringclasses 1
value | uas
null | language
stringclasses 3
values | isbn
stringclasses 34
values | recall
null | number
stringclasses 8
values | a
null | b
null | c
null | k
null | f1
stringclasses 4
values | r
stringclasses 2
values | mci
stringclasses 1
value | p
stringclasses 2
values | sd
stringclasses 1
value | female
stringclasses 0
values | m
stringclasses 0
values | food
stringclasses 1
value | f
stringclasses 1
value | note
stringclasses 20
values | __index_level_0__
int64 22k
106k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | ray-chowdhury-etal-2022-kpdrop | {KPDROP}: Improving Absent Keyphrase Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.357/ | Ray Chowdhury, Jishnu and Park, Seo Yeon and Kundu, Tuhin and Caragea, Cornelia | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4853--4870 | Keyphrase generation is the task of generating phrases (keyphrases) that summarize the main topics of a given document. Keyphrases can be either present or absent from the given document. While the extraction of present keyphrases has received much attention in the past, only recently a stronger focus has been placed on the generation of absent keyphrases. However, generating absent keyphrases is challenging; even the best methods show only a modest degree of success. In this paper, we propose a model-agnostic approach called keyphrase dropout (or KPDrop) to improve absent keyphrase generation. In this approach, we randomly drop present keyphrases from the document and turn them into artificial absent keyphrases during training. We test our approach extensively and show that it consistently improves the absent performance of strong baselines in both supervised and resource-constrained semi-supervised settings. | null | null | 10.18653/v1/2022.findings-emnlp.357 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,869 |
inproceedings | bostrom-etal-2022-natural | Natural Language Deduction through Search over Statement Compositions | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.358/ | Bostrom, Kaj and Sprague, Zayne and Chaudhuri, Swarat and Durrett, Greg | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4871--4883 | In settings from fact-checking to question answering, we frequently want to know whether a collection of evidence (premises) entails a hypothesis. Existing methods primarily focus on the end-to-end discriminative version of this task, but less work has treated the generative version in which a model searches over the space of statements entailed by the premises to constructively derive the hypothesis. We propose a system for doing this kind of deductive reasoning in natural language by decomposing the task into separate steps coordinated by a search procedure, producing a tree of intermediate conclusions that faithfully reflects the system`s reasoning process. Our experiments on the EntailmentBank dataset (Dalvi et al., 2021) demonstrate that the proposed system can successfully prove true statements while rejecting false ones. Moreover, it produces natural language explanations with a 17{\%} absolute higher step validity than those produced by an end-to-end T5 model. | null | null | 10.18653/v1/2022.findings-emnlp.358 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,870 |
inproceedings | xu-etal-2022-endex | {E}n{D}ex: Evaluation of Dialogue Engagingness at Scale | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.359/ | Xu, Guangxuan and Liu, Ruibo and Harel-Canada, Fabrice and Chandra, Nischal Reddy and Peng, Nanyun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4884--4893 | We propose EnDex, the first human-reaction based model to evaluate dialogue engagingness. EnDex is trained on 80k Reddit-based Engagement Dataset (RED) curated using a novel distant-supervision framework. Engagingness is a key measure that captures high-level quality of AI dialogue systems and closely reflects actual user experience. However, data shortage, plus the abstract and extensive definition of engagingness makes it challenging to develop an automatic metric. Our work departs from mainstream approaches that use synthetic negative examples to train binary classifiers, and instead, proposes a solution using distant-supervision from human-reaction feedback. To support the soundness of our EnDex metric, we offer a theoretical foundation for engagement, an extensive ablation study, and empirical evidence of high correlation on five engagingness related datasets. We will release code, off-the-shelf EnDex model, and a large-scale dataset upon paper publication to facilitate future research. | null | null | 10.18653/v1/2022.findings-emnlp.359 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,871 |
inproceedings | mekala-etal-2022-lops | {LOPS}: Learning Order Inspired Pseudo-Label Selection for Weakly Supervised Text Classification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.360/ | Mekala, Dheeraj and Dong, Chengyu and Shang, Jingbo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4894--4908 | Weakly supervised text classification methods typically train a deep neural classifier based on pseudo-labels. The quality of pseudo-labels is crucial to final performance but they are inevitably noisy due to their heuristic nature, so selecting the correct ones has a huge potential for performance boost. One straightforward solution is to select samples based on the softmax probability scores in the neural classifier corresponding to their pseudo-labels. However, we show through our experiments that such solutions are ineffective and unstable due to the erroneously high-confidence predictions from poorly calibrated models. Recent studies on the memorization effects of deep neural models suggest that these models first memorize training samples with clean labels and then those with noisy labels. Inspired by this observation, we propose a novel pseudo-label selection method LOPS that takes learning order of samples into consideration. We hypothesize that the learning order reflects the probability of wrong annotation in terms of ranking, and therefore, propose to select the samples that are learnt earlier. LOPS can be viewed as a strong performance-boost plug-in to most existing weakly-supervised text classification methods, as confirmed in extensive experiments on four real-world datasets. | null | null | 10.18653/v1/2022.findings-emnlp.360 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,872 |
inproceedings | na-etal-2022-train | Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.361/ | Na, Clara and Mehta, Sanket Vaibhav and Strubell, Emma | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4909--4936 | Model compression by way of parameter pruning, quantization, or distillation has recently gained popularity as an approach for reducing the computational requirements of modern deep neural network models for NLP. Inspired by prior works suggesting a connection between simpler, more generalizable models and those that lie within wider loss basins, we hypothesize that optimizing for flat minima should lead to simpler parameterizations and thus more compressible models. We propose to combine sharpness-aware minimization (SAM) with various task-specific model compression methods, including iterative magnitude pruning (IMP), structured pruning with a distillation objective, and post-training dynamic quantization. Empirically, we show that optimizing for flatter minima consistently leads to greater compressibility of parameters compared to vanilla Adam when fine-tuning BERT models, with little to no loss in accuracy on the GLUE text classification and SQuAD question answering benchmarks. Moreover, SAM finds superior winning tickets during IMP that 1) are amenable to vanilla Adam optimization, and 2) transfer more effectively across tasks. | null | null | 10.18653/v1/2022.findings-emnlp.361 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,873 |
inproceedings | zhang-etal-2022-structural-contrastive | Structural Contrastive Representation Learning for Zero-shot Multi-label Text Classification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.362/ | Zhang, Tianyi and Xu, Zhaozhuo and Medini, Tharun and Shrivastava, Anshumali | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4937--4947 | Zero-shot multi-label text classification (ZMTC) is a fundamental task in natural language processing with applications in the cold start problem of recommendation systems. Ideally, one would learn an expressive representation of both input text and label features so that ZMTC is transformed into a nearest neighbor search problem. However, the existing representation learning approaches for ZMTC struggle with accuracy as well as poor training efficiency. Firstly, the input text is structural, consisting of both short title sentences and long content paragraphs. It is challenging to model the correlation between short label descriptions and long structural input documents. Secondly, the enormous label space in ZMTC forces the existing approaches to perform multi-stage learning with label engineering. As a result, the training overhead is significant. In this paper, we address both problems by introducing an end-to-end structural contrastive representation learning approach. We propose a randomized text segmentation (RTS) technique to generate high-quality contrastive pairs. This RTS technique allows us to model title-content correlation. Additionally, we simplify the multi-stage ZMTC learning strategy by avoiding label engineering. Extensive experiments demonstrate that our approach leads to up to 2.33{\%} improvement in precision@1 and 5.94x speedup in training time on publicly available datasets. Our code is available publicly. | null | null | 10.18653/v1/2022.findings-emnlp.362 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,874 |
inproceedings | lu-etal-2022-improving | Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.363/ | Lu, Peng and Kobyzev, Ivan and Rezagholizadeh, Mehdi and Rashid, Ahmad and Ghodsi, Ali and Langlais, Phillippe | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4948--4954 | Knowledge Distillation (KD) is a commonly used technique for improving the generalization of compact Pre-trained Language Models (PLMs) on downstream tasks. However, such methods impose the additional burden of training a separate teacher model for every new dataset.Alternatively, one may directly work on the improvement of the optimization procedure of the compact model towards better generalization. Recent works observe that the flatness of the local minimum correlates well with better generalization.In this work, we adapt Stochastic Weight Averaging (SWA), a method encouraging convergence to a flatter minimum, to fine-tuning PLMs. We conduct extensive experiments on various NLP tasks (text classification, question answering, and generation) and different model architectures and demonstrate that our adaptation improves the generalization without extra computation cost. Moreover, we observe that this simple optimization technique is able to outperform the state-of-the-art KD methods for compact models. | null | null | 10.18653/v1/2022.findings-emnlp.363 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,875 |
inproceedings | towle-zhou-2022-learn | Learn What Is Possible, Then Choose What Is Best: Disentangling One-To-Many Relations in Language Through Text-based Games | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.364/ | Towle, Benjamin and Zhou, Ke | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4955--4965 | Language models pre-trained on large self-supervised corpora, followed by task-specific fine-tuning has become the dominant paradigm in NLP. These pre-training datasets often have a one-to-many structure{---}e.g. in dialogue there are many valid responses for a given context. However, only some of these responses will be desirable in our downstream task. This raises the question of how we should train the model such that it can emulate the desirable behaviours, but not the undesirable ones. Current approaches train in a one-to-one setup{---}only a single target response is given for a single dialogue context{---}leading to models only learning to predict the average response, while ignoring the full range of possible responses. Using text-based games as a testbed, our approach, PASA, uses discrete latent variables to capture the range of different behaviours represented in our larger pre-training dataset. We then use knowledge distillation to distil the posterior probability distribution into a student model. This probability distribution is far richer than learning from only the hard targets of the dataset, and thus allows the student model to benefit from the richer range of actions the teacher model has learned. Results show up to 49{\%} empirical improvement over the previous state-of-the-art model on the Jericho Walkthroughs dataset. | null | null | 10.18653/v1/2022.findings-emnlp.364 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,876 |
inproceedings | gupta-etal-2022-structurally | Structurally Diverse Sampling for Sample-Efficient Training and Comprehensive Evaluation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.365/ | Gupta, Shivanshu and Singh, Sameer and Gardner, Matt | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4966--4979 | A growing body of research has demonstrated the inability of NLP models to generalize compositionally and has tried to alleviate it through specialized architectures, training schemes, and data augmentation, among other approaches. In this work, we study a different approach: training on instances with diverse structures. We propose a model-agnostic algorithm for subsampling such sets of instances from a labeled instance pool with structured outputs. Evaluating on both compositional template splits and traditional IID splits of 5 semantic parsing datasets of varying complexity, we show that structurally diverse training using our algorithm leads to comparable or better generalization than prior algorithms in 9 out of 10 dataset-split type pairs. In general, we find structural diversity to consistently improve sample efficiency compared to random train sets. Moreover, we show that structurally diverse sampling yields comprehensive test sets that are a lot more challenging than IID test sets. Finally, we provide two explanations for improved generalization from diverse train sets: 1) improved coverage of output substructures, and 2) a reduction in spurious correlations between these substructures. | null | null | 10.18653/v1/2022.findings-emnlp.365 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,877 |
inproceedings | zhong-etal-2022-unsupervised | Unsupervised Multi-Granularity Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.366/ | Zhong, Ming and Liu, Yang and Ge, Suyu and Mao, Yuning and Jiao, Yizhu and Zhang, Xingxing and Xu, Yichong and Zhu, Chenguang and Zeng, Michael and Han, Jiawei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4980--4995 | Text summarization is a user-preference based task, i.e., for one document, users often have different priorities for the summary. As a key aspect of customization in summarization, granularity is used to measure the semantic coverage between the summary and source document. However, developing systems that can generate summaries with customizable semantic coverage is still an under-explored topic. In this paper, we propose the first unsupervised multi-granularity summarization framework, GranuSum. We take events as the basic semantic units of the source documents and propose to rank these events by their salience. We also develop a model to summarize input documents with given events as anchors and hints. By inputting different numbers of events, GranuSum is capable of producing multi-granular summaries in an unsupervised manner. Meanwhile, we annotate a new benchmark GranuDUC that contains multiple summaries at different granularities for each document cluster. Experimental results confirm the substantial superiority of GranuSum on multi-granularity summarization over strong baselines. Furthermore, by exploiting the event information, GranuSum also exhibits state-of-the-art performance under the conventional unsupervised abstractive setting. | null | null | 10.18653/v1/2022.findings-emnlp.366 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,878 |
inproceedings | lee-berg-kirkpatrick-2022-helo | {H}e{L}o: Learning-Free Lookahead Decoding for Conversation Infilling | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.367/ | Lee, Ivan and Berg-Kirkpatrick, Taylor | Findings of the Association for Computational Linguistics: EMNLP 2022 | 4996--5008 | We propose Heuristic Guided Lookahead Decoding (HeLo), a novel decoding strategy for conversation infilling. Conversation infilling aims to generate a seamless bridge of utterances connecting a given pair of source and target utterances. HeLo does not require fine-tuning or extra models {--} only the generating model itself. Instead, HeLo leverages a greedy lookahead phase before committing to any token. The HeLo framework is simple and can augment conventional decoding strategies paired with any autoregressive language model. Smooth transitions between utterances are encouraged with an annealing schedule. Our experiments show HeLo outperforms several baselines when evaluated with both automatic and human evaluation metrics, which, we argue, are appropriate for the task. | null | null | 10.18653/v1/2022.findings-emnlp.367 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,879 |
inproceedings | hayet-etal-2022-invernet | Invernet: An Inversion Attack Framework to Infer Fine-Tuning Datasets through Word Embeddings | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.368/ | Hayet, Ishrak and Yao, Zijun and Luo, Bo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5009--5018 | Word embedding aims to learn the dense representation of words and has become a regular input preparation in many NLP tasks. Due to the data and computation intensive nature of learning embeddings from scratch, a more affordable way is to borrow the pretrained embedding available in public and fine-tune the embedding through a domain specific downstream dataset. A privacy concern can arise if a malicious owner of the pretrained embedding gets access to the fine-tuned embedding and tries to infer the critical information from the downstream datasets. In this study, we propose a novel embedding inversion framework called Invernet that materializes the privacy concern by inferring the context distribution in the downstream dataset, which can lead to key information breach. With extensive experimental studies on two real-world news datasets: Antonio Gulli`s News and New York Times, we validate the feasibility of proposed privacy attack and demonstrate the effectiveness of Invernet on inferring downstream datasets based on multiple word embedding methods. | null | null | 10.18653/v1/2022.findings-emnlp.368 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,880 |
inproceedings | bruno-roth-2022-lawngnli | {L}awng{NLI}: A Long-Premise Benchmark for In-Domain Generalization from Short to Long Contexts and for Implication-Based Retrieval | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.369/ | Bruno, William and Roth, Dan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5019--5043 | Natural language inference has trended toward studying contexts beyond the sentence level. An important application area is law: past cases often do not foretell how they apply to new situations and implications must be inferred. This paper introduces LawngNLI, constructed from U.S. legal opinions with automatic labels with high human-validated accuracy. Premises are long and multigranular. Experiments show two use cases. First, LawngNLI can benchmark for in-domain generalization from short to long contexts. It has remained unclear if large-scale long-premise NLI datasets actually need to be constructed: near-top performance on long premises could be achievable by fine-tuning using short premises. Without multigranularity, benchmarks cannot distinguish lack of fine-tuning on long premises versus domain shift between short and long datasets. In contrast, our long and short premises share the same examples and domain. Models fine-tuned using several past NLI datasets and/or our short premises fall short of top performance on our long premises. So for at least certain domains (such as ours), large-scale long-premise datasets are needed. Second, LawngNLI can benchmark for implication-based retrieval. Queries are entailed or contradicted by target documents, allowing users to move between arguments and evidence. Leading retrieval models perform reasonably zero shot on a LawngNLI-derived retrieval task. We compare different systems for re-ranking, including lexical overlap and cross-encoders fine-tuned using a modified LawngNLI or past NLI datasets. LawngNLI can train and test systems for implication-based case retrieval and argumentation. | null | null | 10.18653/v1/2022.findings-emnlp.369 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,881 |
inproceedings | zhao-etal-2022-distillation | Distillation-Resistant Watermarking for Model Protection in {NLP} | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.370/ | Zhao, Xuandong and Li, Lei and Wang, Yu-Xiang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5044--5055 | How can we protect the intellectual property of trained NLP models? Modern NLP models are prone to stealing by querying and distilling from their publicly exposed APIs. However, existing protection methods such as watermarking only work for images but are not applicable to text. We propose Distillation-Resistant Watermarking (DRW), a novel technique to protect NLP models from being stolen via distillation. DRW protects a model by injecting watermarks into the victim`s prediction probability corresponding to a secret key and is able to detect such a key by probing a suspect model. We prove that a protected model still retains the original accuracy within a certain bound. We evaluate DRW on a diverse set of NLP tasks including text classification, part-of-speech tagging, and named entity recognition. Experiments show that DRW protects the original model and detects stealing suspects at 100{\%} mean average precision for all four tasks while the prior method fails on two. | null | null | 10.18653/v1/2022.findings-emnlp.370 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,882 |
inproceedings | howard-etal-2022-neurocounterfactuals | {N}euro{C}ounterfactuals: Beyond Minimal-Edit Counterfactuals for Richer Data Augmentation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.371/ | Howard, Phillip and Singer, Gadi and Lal, Vasudev and Choi, Yejin and Swayamdipta, Swabha | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5056--5072 | While counterfactual data augmentation offers a promising step towards robust generalization in natural language processing, producing a set of counterfactuals that offer valuable inductive bias for models remains a challenge. Most existing approaches for producing counterfactuals, manual or automated, rely on small perturbations via minimal edits, resulting in simplistic changes. We introduce NeuroCounterfactuals, designed as loose counterfactuals, allowing for larger edits which result in naturalistic generations containing linguistic diversity, while still bearing similarity to the original document. Our novel generative approach bridges the benefits of constrained decoding, with those of language model adaptation for sentiment steering. Training data augmentation with our generations results in both in-domain and out-of-domain improvements for sentiment classification, outperforming even manually curated counterfactuals, under select settings. We further present detailed analyses to show the advantages of NeuroCounterfactuals over approaches involving simple, minimal edits. | null | null | 10.18653/v1/2022.findings-emnlp.371 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,883 |
inproceedings | panda-etal-2022-dont | Don`t Just Clean It, Proxy Clean It: Mitigating Bias by Proxy in Pre-Trained Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.372/ | Panda, Swetasudha and Kobren, Ari and Wick, Michael and Shen, Qinlan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5073--5085 | Transformer-based pre-trained models are known to encode societal biases not only in their contextual representations, but also in downstream predictions when fine-tuned on task-specific data.We present D-Bias, an approach that selectively eliminates stereotypical associations (e.g, co-occurrence statistics) at fine-tuning, such that the model doesn`t learn to excessively rely on those signals.D-Bias attenuates biases from both identity words and frequently co-occurring proxies, which we select using pointwise mutual information.We apply D-Bias to a) occupation classification, and b) toxicity classification and find that our approach substantially reduces downstream biases (e.g. by {\ensuremath{>}} 60{\%} in toxicity classification, for identities that are most frequently flagged as toxic on online platforms).In addition, we show that D-Bias dramatically improves upon scrubbing, i.e., removing only the identity words in question.We also demonstrate that D-Bias easily extends to multiple identities, and achieves competitive performance with two recently proposed debiasing approaches: R-LACE and INLP. | null | null | 10.18653/v1/2022.findings-emnlp.372 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,884 |
inproceedings | valentini-etal-2022-undesirable | The Undesirable Dependence on Frequency of Gender Bias Metrics Based on Word Embeddings | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.373/ | Valentini, Francisco and Rosati, Germ{\'a}n and Fernandez Slezak, Diego and Altszyler, Edgar | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5086--5092 | Numerous works use word embedding-based metrics to quantify societal biases and stereotypes in texts. Recent studies have found that word embeddings can capture semantic similarity but may be affected by word frequency. In this work we study the effect of frequency when measuring female vs. male gender bias with word embedding-based bias quantification methods. We find that Skip-gram with negative sampling and GloVe tend to detect male bias in high frequency words, while GloVe tends to return female bias in low frequency words. We show these behaviors still exist when words are randomly shuffled. This proves that the frequency-based effect observed in unshuffled corpora stems from properties of the metric rather than from word associations. The effect is spurious and problematic since bias metrics should depend exclusively on word co-occurrences and not individual word frequencies. Finally, we compare these results with the ones obtained with an alternative metric based on Pointwise Mutual Information. We find that this metric does not show a clear dependence on frequency, even though it is slightly skewed towards male bias across all frequencies. | null | null | 10.18653/v1/2022.findings-emnlp.373 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,885 |
inproceedings | bastan-etal-2022-bionli | {B}io{NLI}: Generating a Biomedical {NLI} Dataset Using Lexico-semantic Constraints for Adversarial Examples | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.374/ | Bastan, Mohaddeseh and Surdeanu, Mihai and Balasubramanian, Niranjan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5093--5104 | Natural language inference (NLI) is critical in many domains requiring complex decision-making, such as the biomedical domain. We introduce a novel semi-supervised procedure that bootstraps biomedical NLI datasets from positive entailment examples present in abstracts of biomedical publications. We focus on challenging texts where the hypothesis includes mechanistic information such as biochemical interactions between two entities. A key contribution of this work is automating the creation of negative examples that are informative without being simplistic. We generate a range of negative examples using nine strategies that manipulate the structure of the underlying mechanisms both with rules, e.g., flip the roles of the entities in the interaction, and, more importantly, by imposing the perturbed conditions as logical constraints in a neuro-logical decoding system (CITATION).We use this procedure to create a novel dataset for NLI in the biomedical domain, called . The accuracy of neural classifiers on this dataset is in the mid 70s F1, which indicates that this NLI task remains to be solved. Critically, we observe that the performance on the different classes of negative examples varies widely, from 97{\%} F1 on the simple negative examples that change the role of the entities in the hypothesis, to barely better than chance on the negative examples generated using neuro-logic decoding. | null | null | 10.18653/v1/2022.findings-emnlp.374 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,886 |
inproceedings | chu-etal-2022-self | Self-supervised Cross-modal Pretraining for Speech Emotion Recognition and Sentiment Analysis | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.375/ | Chu, Iek-Heng and Chen, Ziyi and Yu, Xinlu and Han, Mei and Xiao, Jing and Chang, Peng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5105--5114 | Multimodal speech emotion recognition (SER) and sentiment analysis (SA) are important techniques for human-computer interaction. Most existing multimodal approaches utilize either shallow cross-modal fusion of pretrained features, or deep cross-modal fusion with raw features. Recently, attempts have been made to fuse pretrained feature representations in a deep fusion manner during fine-tuning stage. However those approaches have not led to improved results, partially due to their relatively simple fusion mechanisms and lack of proper cross-modal pretraining. In this work, leveraging single-modal pretrained models (RoBERTa and HuBERT), we propose a novel deeply-fused audio-text bi-modal transformer with carefully designed cross-modal fusion mechanism and a stage-wise cross-modal pretraining scheme to fully facilitate the cross-modal learning. Our experiment results show that the proposed method achieves state-of-the-art results on the public IEMOCAP emotion and CMU-MOSEI sentiment datasets, exceeding the previous benchmarks by a large margin. | null | null | 10.18653/v1/2022.findings-emnlp.375 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,887 |
inproceedings | li-etal-2022-multimodal | Multimodal Conversation Modelling for Topic Derailment Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.376/ | Li, Zhenhao and Rei, Marek and Specia, Lucia | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5115--5127 | Conversations on social media tend to go off-topic and turn into different and sometimes toxic exchanges. Previous work focuses on analysing textual dialogues that have derailed into toxic content, but the range of derailment types is much broader, including spam or bot content, tangential comments, etc. In addition, existing work disregards conversations that involve visual information (i.e. images or videos), which are prevalent on most platforms. In this paper, we take a broader view of conversation derailment and propose a new challenge: detecting derailment based on the {\textquotedblleft}change of conversation topic{\textquotedblright}, where the topic is defined by an initial post containing both a text and an image. For that, we (i) create the first Multimodal Conversation Derailment (MCD) dataset, and (ii) introduce a new multimodal conversational architecture (MMConv) that utilises visual and conversational contexts to classify comments for derailment. Experiments show that MMConv substantially outperforms previous text-based approaches to detect conversation derailment, as well as general multimodal classifiers. MMConv is also more robust to textual noise, since it relies on richer contextual information. | null | null | 10.18653/v1/2022.findings-emnlp.376 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,888 |
inproceedings | tsvigun-etal-2022-active | Active Learning for Abstractive Text Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.377/ | Tsvigun, Akim and Lysenko, Ivan and Sedashov, Danila and Lazichny, Ivan and Damirov, Eldar and Karlov, Vladimir and Belousov, Artemy and Sanochkin, Leonid and Panov, Maxim and Panchenko, Alexander and Burtsev, Mikhail and Shelmanov, Artem | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5128--5152 | Construction of human-curated annotated datasets for abstractive text summarization (ATS) is very time-consuming and expensive because creating each instance requires a human annotator to read a long document and compose a shorter summary that would preserve the key information relayed by the original document. Active Learning (AL) is a technique developed to reduce the amount of annotation required to achieve a certain level of machine learning model performance. In information extraction and text classification, AL can reduce the amount of labor up to multiple times. Despite its potential for aiding expensive annotation, as far as we know, there were no effective AL query strategies for ATS. This stems from the fact that many AL strategies rely on uncertainty estimation, while as we show in our work, uncertain instances are usually noisy, and selecting them can degrade the model performance compared to passive annotation. We address this problem by proposing the first effective query strategy for AL in ATS based on diversity principles. We show that given a certain annotation budget, using our strategy in AL annotation helps to improve the model performance in terms of ROUGE and consistency scores. Additionally, we analyze the effect of self-learning and show that it can additionally increase the performance of the model. | null | null | 10.18653/v1/2022.findings-emnlp.377 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,889 |
inproceedings | raunak-menezes-2022-finding | Finding Memo: Extractive Memorization in Constrained Sequence Generation Tasks | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.378/ | Raunak, Vikas and Menezes, Arul | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5153--5162 | Memorization presents a challenge for several constrained Natural Language Generation (NLG) tasks such as Neural Machine Translation (NMT), wherein the proclivity of neural models to memorize noisy and atypical samples reacts adversely with the noisy (web crawled) datasets. However, previous studies of memorization in constrained NLG tasks have only focused on counterfactual memorization, linking it to the problem of hallucinations. In this work, we propose a new, inexpensive algorithm for extractive memorization (exact training data generation under insufficient context) in constrained sequence generation tasks and use it to study extractive memorization and its effects in NMT. We demonstrate that extractive memorization poses a serious threat to NMT reliability by qualitatively and quantitatively characterizing the memorized samples as well as the model behavior in their vicinity. Based on empirical observations, we develop a simple algorithm which elicits non-memorized translations of memorized samples from the same model, for a large fraction of such samples. Finally, we show that the proposed algorithm could also be leveraged to mitigate memorization in the model through finetuning. We have released the code to reproduce our results at https://github.com/vyraun/Finding-Memo. | null | null | 10.18653/v1/2022.findings-emnlp.378 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,890 |
inproceedings | raunak-etal-2022-salted | {SALTED}: A Framework for {SA}lient Long-tail Translation Error Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.379/ | Raunak, Vikas and Post, Matt and Menezes, Arul | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5163--5179 | Traditional machine translation (MT) metrics provide an average measure of translation quality that is insensitive to the long tail of behavioral problems. Examples include translation of numbers, physical units, dropped content and hallucinations. These errors, which occur rarely and unpredictably in Neural Machine Translation (NMT), greatly undermine the reliability of state-of-the-art MT systems. Consequently, it is important to have visibility into these problems during model development.Towards this end, we introduce SALTED, a specifications-based framework for behavioral testing of NMT models. At the core of our approach is the use of high-precision detectors that flag errors (or alternatively, verify output correctness) between a source sentence and a system output. These detectors provide fine-grained measurements of long-tail errors, providing a trustworthy view of problems that were previously invisible. We demonstrate that such detectors could be used not just to identify salient long-tail errors in MT systems, but also for higher-recall filtering of the training data, fixing targeted errors with model fine-tuning in NMT and generating novel data for metamorphic testing to elicit further bugs in models. | null | null | 10.18653/v1/2022.findings-emnlp.379 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,891 |
inproceedings | laban-etal-2022-discord | Discord Questions: A Computational Approach To Diversity Analysis in News Coverage | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.380/ | Laban, Philippe and Wu, Chien-Sheng and Murakhovs{'}ka, Lidiya and Chen, Xiang and Xiong, Caiming | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5180--5194 | There are many potential benefits to news readers accessing diverse sources. Modern news aggregators do the hard work of organizing the news, offering readers a plethora of source options, but choosing which source to read remains challenging.We propose a new framework to assist readers in identifying source differences and gaining an understanding of news coverage diversity.The framework is based on the generation of Discord Questions: questions with a diverse answer pool, explicitly illustrating source differences.To assemble a prototype of the framework, we focus on two components: (1) discord question generation, the task of generating questions answered differently by sources, for which we propose an automatic scoring method, and create a model that improves performance from current question generation (QG) methods by 5{\%}, (2) answer consolidation, the task of grouping answers to a question that are semantically similar, for which we collect data and repurpose a method that achieves 81{\%} balanced accuracy on our realistic test set.We illustrate the framework`s feasibility through a prototype interface. Even though model performance at discord QG still lags human performance by more than 15{\%}, generated questions are judged to be more interesting than factoid questions and can reveal differences in the level of detail, sentiment, and reasoning of sources in news coverage. Code is available at https://github.com/Salesforce/discord{\_}questions. | null | null | 10.18653/v1/2022.findings-emnlp.380 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,892 |
inproceedings | barlacchi-etal-2022-focusqa | {F}ocus{QA}: Open-Domain Question Answering with a Context in Focus | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.381/ | Barlacchi, Gianni and Lauriola, Ivano and Moschitti, Alessandro and Del Tredici, Marco and Shen, Xiaoyu and Vu, Thuy and Byrne, Bill and de Gispert, Adri{\`a} | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5195--5208 | We introduce question answering with a cotext in focus, a task that simulates a free interaction with a QA system. The user reads on a screen some information about a topic, and they can follow-up with questions that can be either related or not to the topic; and the answer can be found in the document containing the screen content or from other pages. We call such information context. To study the task, we construct FocusQA, a dataset for answer sentence selection (AS2) with 12,165011unique question/context pairs, and a total of 109,940 answers. To build the dataset, we developed a novel methodology that takes existing questions and pairs them with relevant contexts. To show the benefits of this approach, we present a comparative analysis with a set of questions written by humans after reading the context, showing that our approach greatly helps in eliciting more realistic question/context pairs. Finally, we show that the task poses several challenges for incorporating contextual information. In this respect, we introduce strong baselines for answer sentence selection that outperform the precision of state-of-the-art models for AS2 up to 21.3{\%} absolute points. | null | null | 10.18653/v1/2022.findings-emnlp.381 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,893 |
inproceedings | park-etal-2022-challenges | Challenges and Opportunities in Information Manipulation Detection: An Examination of Wartime {R}ussian Media | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.382/ | Park, Chan Young and Mendelsohn, Julia and Field, Anjalie and Tsvetkov, Yulia | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5209--5235 | NLP research on public opinion manipulation campaigns has primarily focused on detecting overt strategies such as fake news and disinformation. However, information manipulation in the ongoing Russia-Ukraine war exemplifies how governments and media also employ more nuanced strategies. We release a new dataset, VoynaSlov, containing 38M+ posts from Russian media outlets on Twitter and VKontakte, as well as public activity and responses, immediately preceding and during the 2022 Russia-Ukraine war. We apply standard and recently-developed NLP models on VoynaSlov to examine agenda setting, framing, and priming, several strategies underlying information manipulation, and reveal variation across media outlet control, social media platform, and time. Our examination of these media effects and extensive discussion of current approaches' limitations encourage further development of NLP models for understanding information manipulation in emerging crises, as well as other real-world and interdisciplinary tasks. | null | null | 10.18653/v1/2022.findings-emnlp.382 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,894 |
inproceedings | zha-etal-2022-disentangling | Disentangling Task Relations for Few-shot Text Classification via Self-Supervised Hierarchical Task Clustering | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.383/ | Zha, Juan and Li, Zheng and Wei, Ying and Zhang, Yu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5236--5247 | Few-Shot Text Classification (FSTC) imitates humans to learn a new text classifier efficiently with only few examples, by leveraging prior knowledge from historical tasks. However, most prior works assume that all the tasks are sampled from a single data source, which cannot adapt to real-world scenarios where tasks are heterogeneous and lie in different distributions. As such, existing methods may suffer from their globally knowledge-shared mechanisms to handle the task heterogeneity. On the other hand, inherent task relationships are not explicitly captured, making task knowledge unorganized and hard to transfer to new tasks. Thus, we explore a new FSTC setting where tasks can come from a diverse range of data sources. To address the task heterogeneity, we propose a self-supervised hierarchical task clustering (SS-HTC) method. SS-HTC not only customizes the cluster-specific knowledge by dynamically organizing heterogeneous tasks into different clusters in hierarchical levels but also disentangles the underlying relations between tasks to improve the interpretability. Empirically, extensive experiments on five public FSTC benchmark datasets demonstrate the effectiveness of SS-HTC. | null | null | 10.18653/v1/2022.findings-emnlp.383 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,895 |
inproceedings | shi-etal-2022-xricl | {XRICL}: Cross-lingual Retrieval-Augmented In-Context Learning for Cross-lingual Text-to-{SQL} Semantic Parsing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.384/ | Shi, Peng and Zhang, Rui and Bai, He and Lin, Jimmy | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5248--5259 | In-context learning using large language models has recently shown surprising results for semantic parsing tasks such as Text-to-SQL translation.Prompting GPT-3 or Codex using several examples of question-SQL pairs can produce excellent results, comparable to state-of-the-art finetuning-based models.However, existing work primarily focuses on English datasets, and it is unknown whether large language models can serve as competitive semantic parsers for other languages.To bridge this gap, our work focuses on cross-lingual Text-to-SQL semantic parsing for translating non-English utterances into SQL queries based on an English schema.We consider a zero-shot transfer learning setting with the assumption that we do not have any labeled examples in the target language (but have annotated examples in English).This work introduces the XRICL framework, which learns to retrieve relevant English exemplars for a given query to construct prompts.We also include global translation exemplars for a target language to facilitate the translation process for large language models.To systematically evaluate our model, we construct two new benchmark datasets, XSpider and XKaggle-dbqa, which include questions in Chinese, Vietnamese, Farsi, and Hindi.Our experiments show that XRICL effectively leverages large pre-trained language models to outperform existing baselines.Data and code are publicly available at https://github.com/Impavidity/XRICL. | null | null | 10.18653/v1/2022.findings-emnlp.384 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,896 |
inproceedings | jafari-etal-2022-continuation | Continuation {KD}: Improved Knowledge Distillation through the Lens of Continuation Optimization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.385/ | Jafari, Aref and Kobyzev, Ivan and Rezagholizadeh, Mehdi and Poupart, Pascal and Ghodsi, Ali | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5260--5269 | Knowledge Distillation (KD) has been extensively used for natural language understanding (NLU) tasks to improve a small model`s (a student) generalization by transferring the knowledge from a larger model (a teacher). Although KD methods achieve state-of-the-art performance in numerous settings, they suffer from several problems limiting their performance. It is shown in the literature that the capacity gap between the teacher and the student networks can make KD ineffective. Additionally, existing KD techniques do not mitigate the noise in the teacher`s output: modeling the noisy behaviour of the teacher can distract the student from learning more useful features. We propose a new KD method that addresses these problems and facilitates the training compared to previous techniques. Inspired by continuation optimization, we design a training procedure that optimizes the highly non-convex KD objective by starting with the smoothed version of this objective and making it more complex as the training proceeds. Our method (Continuation-KD) achieves state-of-the-art performance across various compact architectures on NLU (GLUE benchmark) and computer vision tasks (CIFAR-10 and CIFAR-100). | null | null | 10.18653/v1/2022.findings-emnlp.385 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,897 |
inproceedings | dawalatabad-etal-2022-detecting | Detecting Dementia from Long Neuropsychological Interviews | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.386/ | Dawalatabad, Nauman and Gong, Yuan and Khurana, Sameer and Au, Rhoda and Glass, James | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5270--5283 | Neuropsychological exams are commonly used to diagnose various kinds of cognitive impairment. They typically involve a trained examiner who conducts a series of cognitive tests with a subject. In recent years, there has been growing interest in developing machine learning methods to extract speech and language biomarkers from exam recordings to provide automated input for cognitive assessment. Inspired by recent findings suggesting that the examiner`s language can influence cognitive impairment classifications, in this paper, we study the influence of the examiner on automatic dementia identification decisions in real-world neuropsychological exams. To mitigate the influence of the examiner, we propose a systematic three-stage pipeline for detecting dementia from exam recordings. In the first stage, we perform audio-based speaker diarization (i.e., estimating who spoke when?) by incorporating speaker discriminative features. In the second stage, we employ text-based language models to identify the role of the speaker (i.e., examiner or subject). Finally, in the third stage, we employ text- and audio-based models to detect cognitive impairment from hypothesized subject segments. Our studies suggest that incorporating audio-based diarization followed by text-based role identification helps mitigate the influences from the examiner`s segments. Further, we found that the text and audio modalities complement each other, and the performance improves when we use both modalities. We also perform several carefully designed experimental studies to assess the performance of each stage. | null | null | 10.18653/v1/2022.findings-emnlp.386 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,898 |
inproceedings | abu-farha-etal-2022-sarcasm | Sarcasm Detection is Way Too Easy! An Empirical Comparison of Human and Machine Sarcasm Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.387/ | Abu Farha, Ibrahim and Wilson, Steven and Oprea, Silviu and Magdy, Walid | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5284--5295 | Recently, author-annotated sarcasm datasets, which focus on intended, rather than perceived sarcasm, have been introduced. Although datasets collected using first-party annotation have important benefits, there is no comparison of human and machine performance on these new datasets. In this paper, we collect new annotations to provide human-level benchmarks for these first-party annotated sarcasm tasks in both English and Arabic, and compare the performance of human annotators to that of state-of-the-art sarcasm detection systems. Our analysis confirms that sarcasm detection is extremely challenging, with individual humans performing close to or slightly worse than the best trained models. With majority voting, however, humans are able to achieve the best results on all tasks. We also perform error analysis, finding that some of the most challenging examples are those that require additional context. We also highlight common features and patterns used to express sarcasm in English and Arabic such as idioms and proverbs. We suggest that to better capture sarcasm, future sarcasm detection datasets and models should focus on representing conversational and cultural context while leveraging world knowledge and common sense. | null | null | 10.18653/v1/2022.findings-emnlp.387 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,899 |
inproceedings | shi-etal-2022-cross | Cross-lingual Text-to-{SQL} Semantic Parsing with Representation Mixup | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.388/ | Shi, Peng and Song, Linfeng and Jin, Lifeng and Mi, Haitao and Bai, He and Lin, Jimmy and Yu, Dong | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5296--5306 | We focus on the cross-lingual Text-to-SQL semantic parsing task,where the parsers are expected to generate SQL for non-English utterances based on English database schemas.Intuitively, English translation as side information is an effective way to bridge the language gap,but noise introduced by the translation system may affect parser effectiveness.In this work, we propose a Representation Mixup Framework (Rex) for effectively exploiting translations in the cross-lingual Text-to-SQL task.Particularly, it uses a general encoding layer, a transition layer, and a target-centric layer to properly guide the information flow of the English translation.Experimental results on CSpider and VSpider show that our framework can benefit from cross-lingual training and improve the effectiveness of semantic parsers, achieving state-of-the-art performance. | null | null | 10.18653/v1/2022.findings-emnlp.388 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,900 |
inproceedings | armstrong-etal-2022-jampatoisnli | {J}am{P}atois{NLI}: A Jamaican Patois Natural Language Inference Dataset | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.389/ | Armstrong, Ruth-Ann and Hewitt, John and Manning, Christopher | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5307--5320 | JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois.Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from a major world language and a distinctive grammar reflecting the languages of the original speakers and the process of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer from large monolingual or multilingual pretrained models. While our work, along with previous work, shows that transfer from these models to low-resource languages that are unrelated to languages in their training set is not very effective, we would expect stronger results from transfer to creoles. Indeed, our experiments show considerably better results from few-shot learning of JamPatoisNLI than for such unrelated languages, and help us begin to understand how the unique relationship between creoles and their high-resource base languages affect cross-lingual transfer. JamPatoisNLI, which consists of naturally-occurring premises and expert-written hypotheses, is a step towards steering research into a traditionally underserved language and a useful benchmark for understanding cross-lingual NLP. | null | null | 10.18653/v1/2022.findings-emnlp.389 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,901 |
inproceedings | hoyle-etal-2022-neural | Are Neural Topic Models Broken? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.390/ | Hoyle, Alexander Miserlis and Sarkar, Rupak and Goel, Pranav and Resnik, Philip | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5321--5344 | Recently, the relationship between automated and human evaluation of topic models has been called into question. Method developers have staked the efficacy of new topic model variants on automated measures, and their failure to approximate human preferences places these models on uncertain ground. Moreover, existing evaluation paradigms are often divorced from real-world use.Motivated by content analysis as a dominant real-world use case for topic modeling, we analyze two related aspects of topic models that affect their effectiveness and trustworthiness in practice for that purpose: the stability of their estimates and the extent to which the model`s discovered categories align with human-determined categories in the data. We find that neural topic models fare worse in both respects compared to an established classical method. We take a step toward addressing both issues in tandem by demonstrating that a straightforward ensembling method can reliably outperform the members of the ensemble. | null | null | 10.18653/v1/2022.findings-emnlp.390 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,902 |
inproceedings | cho-etal-2022-know | Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.391/ | Cho, Hyundong and Sankar, Chinnadhurai and Lin, Christopher and Sadagopan, Kaushik Ram and Shayandeh, Shahin and Celikyilmaz, Asli and May, Jonathan and Beirami, Ahmad | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5345--5359 | Recent works that revealed the vulnerability of dialogue state tracking (DST) models to distributional shifts have made holistic comparisons on robustness and qualitative analyses increasingly important for understanding their relative performance. We present our findings from standardized and comprehensive DST diagnoses, which have previously been sparse and uncoordinated, using our toolkit, CheckDST, a collection of robustness tests and failure mode analytics. We discover that different classes of DST models have clear strengths and weaknesses, where generation models are more promising for handling language variety while span-based classification models are more robust to unseen entities. Prompted by this discovery, we also compare checkpoints from the same model and find that the standard practice of selecting checkpoints using validation loss/accuracy is prone to overfitting and each model class has distinct patterns of failure. Lastly, we demonstrate how our diagnoses motivate a pre-finetuning procedure with non-dialogue data that offers comprehensive improvements to generation models by alleviating the impact of distributional shifts through transfer learning. | null | null | 10.18653/v1/2022.findings-emnlp.391 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,903 |
inproceedings | ma-etal-2022-open-domain | Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.392/ | Ma, Kaixin and Cheng, Hao and Liu, Xiaodong and Nyberg, Eric and Gao, Jianfeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5360--5374 | We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.The key novelty of our method is the introduction of the intermediary modules into the current retriever-reader pipeline.Unlike previous methods that solely rely on the retriever for gathering all evidence in isolation,our intermediary performs a chain of reasoning over the retrieved set.Specifically, our method links the retrieved evidence with its related global context into graphs and organizes them into a candidate list of evidence chains.Built upon pretrained language models, our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, against tables and passages from Wikipedia.In particular, our model substantially outperforms the previous state-of-the-art on OTT-QA with an exact match score of 47.3 (45{\%} relative gain). | null | null | 10.18653/v1/2022.findings-emnlp.392 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,904 |
inproceedings | clouatre-etal-2022-detecting | Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.393/ | Clouatre, Louis and Parthasarathi, Prasanna and Zouaq, Amal and Chandar, Sarath | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5375--5396 | Providing better language tools for low-resource and endangered languages is imperative for equitable growth.Recent progress with massively multilingual pretrained models has proven surprisingly effective at performing zero-shot transfer to a wide variety of languages.However, this transfer is not universal, with many languages not currently understood by multilingual approaches.It is estimated that only 72 languages possess a {\textquotedblleft}small set of labeled datasets{\textquotedblright} on which we could test a model`s performance, the vast majority of languages not having the resources available to simply evaluate performances on.In this work, we attempt to clarify which languages do and do not currently benefit from such transfer.To that end, we develop a general approach that requires only unlabelled text to detect which languages are not well understood by a cross-lingual model.Our approach is derived from the hypothesis that if a model`s understanding is insensitive to perturbations to text in a language, it is likely to have a limited understanding of that language.We construct a cross-lingual sentence similarity task to evaluate our approach empirically on 350, primarily low-resource, languages. | null | null | 10.18653/v1/2022.findings-emnlp.393 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,905 |
inproceedings | ofer-shahaf-2022-cards | Cards Against {AI}: Predicting Humor in a Fill-in-the-blank Party Game | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.394/ | Ofer, Dan and Shahaf, Dafna | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5397--5403 | Humor is an inherently social phenomenon, with humorous utterances shaped by what is socially and culturally accepted. Understanding humor is an important NLP challenge, with many applications to human-computer interactions. In this work we explore humor in the context of Cards Against Humanity {--} a party game where players complete fill-in-the-blank statements using cards that can be offensive or politically incorrect.We introduce a novel dataset of 300,000 online games of Cards Against Humanity, including 785K unique jokes, analyze it and provide insights. We trained machine learning models to predict the winning joke per game, achieving performance twice as good (20{\%}) as random, even without any user information.On the more difficult task of judging novel cards, we see the models' ability to generalize is moderate. Interestingly, we find that our models are primarily focused on punchline card, with the context having little impact.Analyzing feature importance, we observe that short, crude, juvenile punchlines tend to win. | null | null | 10.18653/v1/2022.findings-emnlp.394 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,906 |
inproceedings | jiao-etal-2022-open | Open-Vocabulary Argument Role Prediction For Event Extraction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.395/ | Jiao, Yizhu and Li, Sha and Xie, Yiqing and Zhong, Ming and Ji, Heng and Han, Jiawei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5404--5418 | The argument role in event extraction refers to the relation between an event and an argument participating in it. Despite the great progress in event extraction, existing studies still depend on roles pre-defined by domain experts. These studies expose obvious weakness when extending to emerging event types or new domains without available roles. Therefore, more attention and effort needs to be devoted to automatically customizing argument roles. In this paper, we define this essential but under-explored task: \textbf{open-vocabulary argument role prediction}. The goal of this task is to infer a set of argument roles for a given event type. We propose a novel unsupervised framework, RolePred for this task. Specifically, we formulate the role prediction problem as an in-filling task and construct prompts for a pre-trained language model to generate candidate roles. By extracting and analyzing the candidate arguments, the event-specific roles are further merged and selected. To standardize the research of this task, we collect a new human-annotated event extraction dataset including 143 customized argument roles with rich semantics. On this dataset, RolePred outperforms the existing methods by a large margin. | null | null | 10.18653/v1/2022.findings-emnlp.395 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,907 |
inproceedings | arora-etal-2022-token | Token-level Sequence Labeling for Spoken Language Understanding using Compositional End-to-End Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.396/ | Arora, Siddhant and Dalmia, Siddharth and Yan, Brian and Metze, Florian and Black, Alan W and Watanabe, Shinji | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5419--5429 | End-to-end spoken language understanding (SLU) systems are gaining popularity over cascaded approaches due to their simplicity and ability to avoid error propagation. However, these systems model sequence labeling as a sequence prediction task causing a divergence from its well-established token-level tagging formulation. We build compositional end-to-end SLU systems that explicitly separate the added complexity of recognizing spoken mentions in SLU from the NLU task of sequence labeling. By relying on intermediate decoders trained for ASR, our end-to-end systems transform the input modality from speech to token-level representations that can be used in the traditional sequence labeling framework. This composition of ASR and NLU formulations in our end-to-end SLU system offers direct compatibility with pre-trained ASR and NLU systems, allows performance monitoring of individual components and enables the use of globally normalized losses like CRF, making them attractive in practical scenarios. Our models outperform both cascaded and direct end-to-end models on a labeling task of named entity recognition across SLU benchmarks. | null | null | 10.18653/v1/2022.findings-emnlp.396 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,908 |
inproceedings | toshniwal-etal-2022-baked | Baked-in State Probing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.397/ | Toshniwal, Shubham and Wiseman, Sam and Livescu, Karen and Gimpel, Kevin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5430--5435 | Neural language models have been analyzed for their linguistic and extra-linguistic knowledge via probing. Of particular interest has been the following question: how much can a language model trained only on form learn about meaning? Recent work has demonstrated via probing classifiers that in the setting of simple procedural text, where by {\textquotedblleft}meaning{\textquotedblright} we mean the underlying world state, language models have a non-trivial performance on world state tracking. However, our proposed evaluation based on model predictions shows differing results, suggesting that these models are either not capturing the world state or not using it. How do these results change if the model has access to the world state? We explore this alternate setting with access to the underlying world state only during training and investigate ways of {\textquotedblleft}baking in{\textquotedblright} the state knowledge along with the primary task of language modeling. Our proposed approaches allow for state probing during inference simply via text prompts, avoiding any probing classifier machinery. In terms of performance, we show that baking in the state knowledge during training leads to significant improvements in state tracking performance and text generation quality, | null | null | 10.18653/v1/2022.findings-emnlp.397 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,909 |
inproceedings | lu-etal-2022-clinicalt5 | {C}linical{T}5: A Generative Language Model for Clinical Text | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.398/ | Lu, Qiuhao and Dou, Dejing and Nguyen, Thien | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5436--5443 | In the past few years, large pre-trained language models (PLMs) have been widely adopted in different areas and have made fundamental improvements over a variety of downstream tasks in natural language processing (NLP). Meanwhile, domain-specific variants of PLMs are being proposed to address the needs of domains that demonstrate a specific pattern of writing and vocabulary, e.g., BioBERT for the biomedical domain and ClinicalBERT for the clinical domain. Recently, generative language models like BART and T5 are gaining popularity with their competitive performance on text generation as well as on tasks cast as generative problems. However, in the clinical domain, such domain-specific generative variants are still underexplored. To address this need, our work introduces a T5-based text-to-text transformer model pre-trained on clinical text, i.e., ClinicalT5. We evaluate the proposed model both intrinsically and extrinsically over a diverse set of tasks across multiple datasets, and show that ClinicalT5 dramatically outperforms T5 in the domain-specific tasks and compares favorably with its close baselines. | null | null | 10.18653/v1/2022.findings-emnlp.398 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,910 |
inproceedings | you-etal-2022-find | Find Someone Who: Visual Commonsense Understanding in Human-Centric Grounding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.399/ | You, Haoxuan and Sun, Rui and Wang, Zhecan and Chang, Kai-Wei and Chang, Shih-Fu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5444--5454 | From a visual scene containing multiple people, human is able to distinguish each individual given the context descriptions about what happened before, their mental/physical states or intentions, etc. Above ability heavily relies on human-centric commonsense knowledge and reasoning. For example, if asked to identify the {\textquotedblleft}person who needs healing{\textquotedblright} in an image, we need to first know that they usually have injuries or suffering expressions, then find the corresponding visual clues before finally grounding the person. We present a new commonsense task, Human-centric Commonsense Grounding, that tests the models' ability to ground individuals given the context descriptions about what happened before, and their mental/physical states or intentions. We further create a benchmark, HumanCog, a dataset with 130k grounded commonsensical descriptions annotated on 67k images, covering diverse types of commonsense and visual scenes. We set up a context-object-aware method as a strong baseline that outperforms previous pre-trained and non-pretrained models. Further analysis demonstrates that rich visual commonsense and powerful integration of multi-modal commonsense are essential, which sheds light on future works. Data and code will be available at https://github.com/Hxyou/HumanCog. | null | null | 10.18653/v1/2022.findings-emnlp.399 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,911 |
inproceedings | rajaby-faghihi-etal-2022-crisisltlsum | {C}risis{LTLS}um: A Benchmark for Local Crisis Event Timeline Extraction and Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.400/ | Rajaby Faghihi, Hossein and Alhafni, Bashar and Zhang, Ke and Ran, Shihao and Tetreault, Joel and Jaimes, Alejandro | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5455--5477 | Social media has increasingly played a key role in emergency response: first responders can use public posts to better react to ongoing crisis events and deploy the necessary resources where they are most needed. Timeline extraction and abstractive summarization are critical technical tasks to leverage large numbers of social media posts about events. Unfortunately, there are few datasets for benchmarking technical approaches for those tasks. This paper presents , the largest dataset of local crisis event timelines available to date. contains 1,000 crisis event timelines across four domains: wildfires, local fires, traffic, and storms. We built using a semi-automated cluster-then-refine approach to collect data from the public Twitter stream. Our initial experiments indicate a significant gap between the performance of strong baselines compared to the human performance on both tasks.Our dataset, code, and models are publicly available (https://github.com/CrisisLTLSum/CrisisTimelines). | null | null | 10.18653/v1/2022.findings-emnlp.400 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,912 |
inproceedings | tu-etal-2022-prompt | Prompt-Tuning Can Be Much Better Than Fine-Tuning on Cross-lingual Understanding With Multilingual Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.401/ | Tu, Lifu and Xiong, Caiming and Zhou, Yingbo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5478--5485 | Pre-trained multilingual language models show significant performance gains for zero-shot cross-lingual model transfer on a wide range of natural language understanding (NLU) tasks. Previously, for zero-shot cross-lingual evaluation, pre-trained models are only fine-tuned on English data and tested on a variety of target languages. In this paper, we do cross-lingualevaluation on various NLU tasks (sentence classification, sequence labeling, question answering) using prompt-tuning and compare it with fine-tuning. The results show that prompt tuning achieves much better cross-lingual transfer than fine-tuning across datasets, with only 0.1{\%} to 0.3{\%} tuned parameters. Additionally, we demonstrate through the analysis that prompt tuning can have better cross-lingual transfer-ability of representations on downstream tasks with better aligned decision boundaries. | null | null | 10.18653/v1/2022.findings-emnlp.401 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,913 |
inproceedings | higuchi-etal-2022-bert | {BERT} Meets {CTC}: New Formulation of End-to-End Speech Recognition with Pre-trained Masked Language Model | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.402/ | Higuchi, Yosuke and Yan, Brian and Arora, Siddhant and Ogawa, Tetsuji and Kobayashi, Tetsunori and Watanabe, Shinji | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5486--5503 | This paper presents BERT-CTC, a novel formulation of end-to-end speech recognition that adapts BERT for connectionist temporal classification (CTC). Our formulation relaxes the conditional independence assumptions used in conventional CTC and incorporates linguistic knowledge through the explicit output dependency obtained by BERT contextual embedding. BERT-CTC attends to the full contexts of the input and hypothesized output sequences via the self-attention mechanism. This mechanism encourages a model to learn inner/inter-dependencies between the audio and token representations while maintaining CTC`s training efficiency. During inference, BERT-CTC combines a mask-predict algorithm with CTC decoding, which iteratively refines an output sequence. The experimental results reveal that BERT-CTC improves over conventional approaches across variations in speaking styles and languages. Finally, we show that the semantic representations in BERT-CTC are beneficial towards downstream spoken language understanding tasks. | null | null | 10.18653/v1/2022.findings-emnlp.402 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,914 |
inproceedings | tang-etal-2022-etrica | {E}tri{CA}: Event-Triggered Context-Aware Story Generation Augmented by Cross Attention | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.403/ | Tang, Chen and Lin, Chenghua and Huang, Henglin and Guerin, Frank and Zhang, Zhihao | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5504--5518 | One of the key challenges of automatic story generation is how to generate a long narrative that can maintain fluency, relevance, and coherence. Despite recent progress, current story generation systems still face the challenge of how to effectively capture contextual and event features, which has a profound impact on a model`s generation performance. To address these challenges, we present EtriCA, a novel neural generation model, which improves the relevance and coherence of the generated stories through residually mapping context features to event sequences with a cross-attention mechanism. Such a feature capturing mechanism allows our model to better exploit the logical relatedness between events when generating stories. Extensive experiments based on both automatic and human evaluations show that our model significantly outperforms state-of-the-art baselines, demonstrating the effectiveness of our model in leveraging context and event features. | null | null | 10.18653/v1/2022.findings-emnlp.403 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,915 |
inproceedings | huang-etal-2022-ladis | {LADIS}: Language Disentanglement for 3{D} Shape Editing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.404/ | Huang, Ian and Achlioptas, Panos and Zhang, Tianyi and Tulyakov, Sergei and Sung, Minhyuk and Guibas, Leonidas | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5519--5532 | Natural language interaction is a promising direction for democratizing 3D shape design. However, existing methods for text-driven 3D shape editing face challenges in producing decoupled, local edits to 3D shapes. We address this problem by learning disentangled latent representations that ground language in 3D geometry. To this end, we propose a complementary tool set including a novel network architecture, a disentanglement loss, and a new editing procedure. Additionally, to measure edit locality, we define a new metric that we call part-wise edit precision. We show that our method outperforms existing SOTA methods by 20{\%} in terms of edit locality, and up to 6.6{\%} in terms of language reference resolution accuracy. Human evaluations additionally show that compared to the existing SOTA, our method produces shape edits that are more local, more semantically accurate, and more visually obvious. Our work suggests that by solely disentangling language representations, downstream 3D shape editing can become more local to relevant parts, even if the model was never given explicit part-based supervision. | null | null | 10.18653/v1/2022.findings-emnlp.404 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,916 |
inproceedings | di-liello-etal-2022-effective | Effective Pretraining Objectives for Transformer-based Autoencoders | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.405/ | Di Liello, Luca and Gabburo, Matteo and Moschitti, Alessandro | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5533--5547 | In this paper, we study trade-offs between efficiency, cost and accuracy when pre-training Transformer encoders with different pre-training objectives. For this purpose, we analyze features of common objectives and combine them to create new effective pre-training approaches. Specifically, we designed light token generators based on a straightforward statistical approach, which can replace ELECTRA computationally heavy generators, thus highly reducing cost. Our experiments also show that (i) there are more efficient alternatives to BERT`s MLM, and (ii) it is possible to efficiently pre-train Transformer-based models using lighter generators without a significant drop in performance. | null | null | 10.18653/v1/2022.findings-emnlp.405 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,917 |
inproceedings | qian-yan-2022-language | Language Model Detoxification in Dialogue with Contextualized Stance Control | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.406/ | Qian, Jing and Yan, Xifeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5548--5558 | To reduce the toxic degeneration in a pretrained Language Model (LM), previous work on Language Model detoxification has focused on reducing the toxicity of the generation itself (self-toxicity) without consideration of the context. As a result, a type of implicit offensive language where the generations support the offensive language in the context is ignored. Different from the LM controlling tasks in previous work, where the desired attributes are fixed for generation, the desired stance of the generation depends on the offensiveness of the context. Therefore, we propose a novel control method to do context-dependent detoxification with the stance taken into consideration. We introduce meta prefixes to learn the contextualized stance control strategy and to generate the stance control prefix according to the input context. The generated stance prefix is then combined with the toxicity control prefix to guide the response generation. Experimental results show that our proposed method can effectively learn the context-dependent stance control strategies while keeping a low self-toxicity of the underlying LM. | null | null | 10.18653/v1/2022.findings-emnlp.406 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,918 |
inproceedings | lai-etal-2022-multilingual-subevent | Multilingual {S}ub{E}vent Relation Extraction: A Novel Dataset and Structure Induction Method | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.407/ | Lai, Viet and Man, Hieu and Ngo, Linh and Dernoncourt, Franck and Nguyen, Thien | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5559--5570 | Subevent Relation Extraction (SRE) is a task in Information Extraction that aims to recognize spatial and temporal containment relations between event mentions in text. Recent methods have utilized pre-trained language models to represent input texts for SRE. However, a key issue in existing SRE methods is the employment of sequential order of words in texts to feed into representation learning methods, thus unable to explicitly focus on important context words and their interactions to enhance representations. In this work, we introduce a new method for SRE that learns to induce effective graph structures for input texts to boost representation learning. Our method features a word alignment framework with dependency paths and optimal transport to identify important context words to form effective graph structures for SRE. In addition, to enable SRE research on non-English languages, we present a new multilingual SRE dataset for five typologically different languages. Extensive experiments reveal the state-of-the-art performance for our method on different datasets and languages. | null | null | 10.18653/v1/2022.findings-emnlp.407 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,919 |
inproceedings | ravishankar-etal-2022-two | A Two-Stage Approach towards Generalization in Knowledge Base Question Answering | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.408/ | Ravishankar, Srinivas and Thai, Dung and Abdelaziz, Ibrahim and Mihindukulasooriya, Nandana and Naseem, Tahira and Kapanipathi, Pavan and Rossiello, Gaetano and Fokoue, Achille | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5571--5580 | Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a different knowledge base requires non-trivial changes. However, many popular knowledge bases share similarities in their underlying schemas that can be leveraged to facilitate generalization across knowledge bases. To achieve this generalization, we introduce a KBQA framework based on a 2-stage architecture that explicitly separates semantic parsing from the knowledge base interaction, facilitating transfer learning across datasets and knowledge graphs. We show that pretraining on datasets with a different underlying knowledge base can nevertheless provide significant performance gains and reduce sample complexity. Our approach achieves comparable or state-of-the-art performance for LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies-KG). | null | null | 10.18653/v1/2022.findings-emnlp.408 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,920 |
inproceedings | lei-huang-2022-shot | Few-Shot (Dis)Agreement Identification in Online Discussions with Regularized and Augmented Meta-Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.409/ | Lei, Yuanyuan and Huang, Ruihong | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5581--5593 | Online discussions are abundant with opinions towards a common topic, and identifying (dis)agreement between a pair of comments enables many opinion mining applications. Realizing the increasing needs to analyze opinions for emergent new topics that however tend to lack annotations, we present the first meta-learning approach for few-shot (dis)agreement identification that can be quickly applied to analyze opinions for new topics with few labeled instances. Furthermore, we enhance the meta-learner`s domain generalization ability from two perspectives. The first is domain-invariant regularization, where we design a lexicon-based regularization loss to enable the meta-learner to learn domain-invariant cues. The second is domain-aware augmentation, where we propose domain-aware task augmentation for meta-training to learn domain-specific expressions. In addition to using an existing dataset, we also evaluate our approach on two very recent new topics, mask mandate and COVID vaccine, using our newly annotated datasets containing 1.5k and 1.4k SubReddits comment pairs respectively. Extensive experiments on three domains/topics demonstrate the effectiveness of our meta-learning approach. | null | null | 10.18653/v1/2022.findings-emnlp.409 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,921 |
inproceedings | richburg-carpuat-2022-data | Data Cartography for Low-Resource Neural Machine Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.410/ | Richburg, Aquia and Carpuat, Marine | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5594--5607 | While collecting or generating more parallel data is necessary to improve machine translation (MT) in low-resource settings, we lack an understanding of how the limited amounts of existing data are actually used to help guide the collection of further resources. In this paper, we apply data cartography techniques (Swayamdipta et al., 2020) to characterize the contribution of training samples in two low-resource MT tasks (Swahili-English and Turkish-English) throughout the training of standard neural MT models. Our empirical study shows that, unlike in prior work for classification tasks, most samples contribute to model training in low-resource MT, albeit not uniformly throughout the training process. Furthermore, uni-dimensional characterizations of samples {--} e.g., based on dual cross-entropy or word frequency {--} do not suffice to characterize to what degree they are hard or easy to learn. Taken together, our results suggest that data augmentation strategies for low-resource MT would benefit from model-in-the-loop strategies to maximize improvements. | null | null | 10.18653/v1/2022.findings-emnlp.410 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,922 |
inproceedings | liu-etal-2022-augmenting-multi | Augmenting Multi-Turn Text-to-{SQL} Datasets with Self-Play | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.411/ | Liu, Qi and Ye, Zihuiwen and Yu, Tao and Song, Linfeng and Blunsom, Phil | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5608--5620 | The task of context-dependent text-to-SQL aims to convert multi-turn user utterances to formal SQL queries. This is a challenging task due to both the scarcity of training data from which to learn complex contextual dependencies and to generalize to unseen databases. In this paper we explore augmenting the training datasets using self-play, which leverages contextual information to synthesize new interactions to adapt the model to new databases. We first design a SQL-to-text model conditioned on a sampled goal query, which represents a user`s intent, that then converses with a text-to-SQL semantic parser to generate new interactions. We then filter the synthesized interactions and retrain the models with the augmented data. We find that self-play improves the accuracy of a strong baseline on SParC and CoSQL, two widely used cross-domain text-to-SQL datasets. Our analysis shows that self-play simulates various conversational thematic relations, enhances cross-domain generalization and improves beam-search. | null | null | 10.18653/v1/2022.findings-emnlp.411 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,923 |
inproceedings | wingate-etal-2022-prompt | Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.412/ | Wingate, David and Shoeybi, Mohammad and Sorensen, Taylor | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5621--5634 | We explore the idea of compressing the prompts used to condition language models, and show that compressed prompts can retain a substantive amount of information about the original prompt. For severely compressed prompts, while fine-grained information is lost, abstract information and general sentiments can be retained with surprisingly few parameters, which can be useful in the context of decode-time algorithms for controllability and toxicity reduction. We find that some complex prompts can be effectively compressed into a single token to guide generation. We also show that compressed prompts are largely compositional, and can be constructed such that they can be used to control independent aspects of generated text. | null | null | 10.18653/v1/2022.findings-emnlp.412 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,924 |
inproceedings | gabriel-etal-2022-naturaladversaries | {N}atural{A}dversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.413/ | Gabriel, Saadia and Palangi, Hamid and Choi, Yejin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5635--5645 | While a substantial body of prior work has explored adversarial example generation for natural language understanding tasks, these examples are often unrealistic and diverge from the real-world data distributions. In this work, we introduce a two-stage adversarial example generation framework (NaturalAdversaries), for designing adversaries that are effective at fooling a given classifier and demonstrate natural-looking failure cases that could plausibly occur during in-the-wild deployment of the models. At the first stage a token attribution method is used to summarize a given classifier`s behavior as a function of the key tokens in the input. In the second stage a generative model is conditioned on the key tokens from the first stage. NaturalAdversaries is adaptable to both black-box and white-box adversarial attacks based on the level of access to the model parameters. Our results indicate these adversaries generalize across domains, and offer insights for future research on improving robustness of neural text classification models. | null | null | 10.18653/v1/2022.findings-emnlp.413 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,925 |
inproceedings | lin-etal-2022-multi-path | Multi-Path Transformer is Better: A Case Study on Neural Machine Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.414/ | Lin, Ye and Zhou, Shuhan and Li, Yanyang and Ma, Anxiang and Xiao, Tong and Zhu, Jingbo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5646--5656 | For years the model performance in machine learning obeyed a power-law relationship with the model size. For the consideration of parameter efficiency, recent studies focus on increasing model depth rather than width to achieve better performance. In this paper, we study how model width affects the Transformer model through a parameter-efficient multi-path structure. To better fuse features extracted from different paths, we add three additional operations to each sublayer: a normalization at the end of each path, a cheap operation to produce more features, and a learnable weighted mechanism to fuse all features flexibly. Extensive experiments on 12 WMT machine translation tasks show that, with the same number of parameters, the shallower multi-path model can achieve similar or even better performance than the deeper model. It reveals that we should pay more attention to the multi-path structure, and there should be a balance between the model depth and width to train a better large-scale Transformer. | null | null | 10.18653/v1/2022.findings-emnlp.414 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,926 |
inproceedings | lu-etal-2022-unsupervised | Unsupervised Learning of Hierarchical Conversation Structure | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.415/ | Lu, Bo-Ru and Hu, Yushi and Cheng, Hao and Smith, Noah A. and Ostendorf, Mari | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5657--5670 | Human conversations can evolve in many different ways, creating challenges for automatic understanding and summarization. Goal-oriented conversations often have meaningful sub-dialogue structure, but it can be highly domain-dependent. This work introduces an unsupervised approach to learning hierarchical conversation structure, including turn and sub-dialogue segment labels, corresponding roughly to dialogue acts and sub-tasks, respectively. The decoded structure is shown to be useful in enhancing neural models of language for three conversation-level understanding tasks. Further, the learned finite-state sub-dialogue network is made interpretable through automatic summarization. | null | null | 10.18653/v1/2022.findings-emnlp.415 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,927 |
inproceedings | zhang-etal-2022-task | Task Compass: Scaling Multi-task Pre-training with Task Prefix | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.416/ | Zhang, Zhuosheng and Wang, Shuohang and Xu, Yichong and Fang, Yuwei and Yu, Wenhao and Liu, Yang and Zhao, Hai and Zhu, Chenguang and Zeng, Michael | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5671--5685 | Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models. Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks. To tackle the challenge, we propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks. We conduct extensive experiments on 40 datasets, which show that our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships. The task relationships reflected by the prefixes align transfer learning performance between tasks. They also suggest directions for data augmentation with complementary tasks, which help our model achieve human-parity results on commonsense reasoning leaderboards. Code is available at https://github.com/cooelf/CompassMTL. | null | null | 10.18653/v1/2022.findings-emnlp.416 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,928 |
inproceedings | zhou-etal-2022-sharpness | Sharpness-Aware Minimization with Dynamic Reweighting | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.417/ | Zhou, Wenxuan and Liu, Fangyu and Zhang, Huan and Chen, Muhao | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5686--5699 | Deep neural networks are often overparameterized and may not easily achieve model generalization. Adversarial training has shown effectiveness in improving generalization by regularizing the change of loss on top of adversarially chosen perturbations. The recently proposed sharpness-aware minimization (SAM) algorithm conducts adversarial weight perturbation, encouraging the model to converge to a flat minima. SAM finds a common adversarial weight perturbation per-batch. Although per-instance adversarial weight perturbations are stronger adversaries and can potentially lead to better generalization performance, their computational cost is very high and thus it is impossible to use per-instance perturbations efficiently in SAM. In this paper, we tackle this efficiency bottleneck and propose sharpness-aware minimization with dynamic reweighting (delta-SAM). Our theoretical analysis motivates that it is possible to approach the stronger, per-instance adversarial weight perturbations using reweighted per-batch weight perturbations. delta-SAM dynamically reweights perturbation within each batch according to the theoretically principled weighting factors, serving as a good approximation to per-instance perturbation. Experiments on various natural language understanding tasks demonstrate the effectiveness of delta-SAM. | null | null | 10.18653/v1/2022.findings-emnlp.417 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,929 |
inproceedings | soni-etal-2022-predicting | Predicting Long-Term Citations from Short-Term Linguistic Influence | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.418/ | Soni, Sandeep and Bamman, David and Eisenstein, Jacob | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5700--5716 | A standard measure of the influence of a research paper is the number of times it is cited. However, papers may be cited for many reasons, and citation count is not informative about the extent to which a paper affected the content of subsequent publications. We therefore propose a novel method to quantify linguistic influence in timestamped document collections. There are two main steps: first, identify lexical and semantic changes using contextual embeddings and word frequencies; second, aggregate information about these changes into per-document influence parameters by estimating a high-dimensional Hawkes process with a low-rank parameter matrix. The resulting measures of linguistic influence are predictive of \textit{future} citations. Specifically, the estimate of linguistic influence from the two years after a paper`s publication is correlated with and predictive of its citation count in the following three years. This is demonstrated using an online evaluation with incremental temporal training/test splits, in comparison with a strong baseline that includes predictors for initial citation counts, topics, and lexical features. | null | null | 10.18653/v1/2022.findings-emnlp.418 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,930 |
inproceedings | kim-etal-2022-joint | Joint Audio/Text Training for Transformer Rescorer of Streaming Speech Recognition | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.419/ | Kim, Suyoun and Li, Ke and Kabela, Lucas and Huang, Ron and Zhu, Jiedan and Kalinli, Ozlem and Le, Duc | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5717--5722 | Recently, there has been an increasing interest in two-pass streaming end-to-end speech recognition (ASR) that incorporates a 2nd-pass rescoring model on top of the conventional 1st-pass streaming ASR model to improve recognition accuracy while keeping latency low. One of the latest 2nd-pass rescoring model, Transformer Rescorer, takes the n-best initial outputs and audio embeddings from the 1st-pass model, and then choose the best output by re-scoring the n-best initial outputs. However, training this Transformer Rescorer requires expensive paired audio-text training data because the model uses audio embeddings as input. In this work, we present our Joint Audio/Text training method for Transformer Rescorer, to leverage unpaired text-only data which is relatively cheaper than paired audio-text data. We evaluate Transformer Rescorer with our Joint Audio/Text training on Librispeech dataset as well as our large-scale in-house dataset and show that our training method can improve word error rate (WER) significantly compared to standard Transformer Rescorer without requiring any extra model parameters or latency. | null | null | 10.18653/v1/2022.findings-emnlp.419 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,931 |
inproceedings | srinivasan-choi-2022-tydip | {T}y{D}i{P}: A Dataset for Politeness Classification in Nine Typologically Diverse Languages | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.420/ | Srinivasan, Anirudh and Choi, Eunsol | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5723--5738 | We study politeness phenomena in nine typologically diverse languages. Politeness is an important facet of communication and is sometimes argued to be cultural-specific, yet existing computational linguistic study is limited to English. We create TyDiP, a dataset containing three-way politeness annotations for 500 examples in each language, totaling 4.5K examples. We evaluate how well multilingual models can identify politeness levels {--} they show a fairly robust zero-shot transfer ability, yet fall short of estimated human accuracy significantly. We further study mapping the English politeness strategy lexicon into nine languages via automatic translation and lexicon induction, analyzing whether each strategy`s impact stays consistent across languages. Lastly, we empirically study the complicated relationship between formality and politeness through transfer experiments. We hope our dataset will support various research questions and applications, from evaluating multilingual models to constructing polite multilingual agents. | null | null | 10.18653/v1/2022.findings-emnlp.420 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,932 |
inproceedings | ma-etal-2022-probing | Probing Cross-modal Semantics Alignment Capability from the Textual Perspective | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.421/ | Ma, Zheng and Zong, Shi and Pan, Mianzhi and Zhang, Jianbing and Huang, Shujian and Dai, Xinyu and Chen, Jiajun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5739--5749 | In recent years, vision and language pre-training (VLP) models have advanced the state-of-the-art results in a variety of cross-modal downstream tasks. Aligning cross-modal semantics is claimed to be one of the essential capabilities of VLP models. However, it still remains unclear about the inner working mechanism of alignment in VLP models. In this paper, we propose a new probing method that is based on image captioning to first empirically study the cross-modal semantics alignment of VLP models. Our probing method is built upon the fact that given an image-caption pair, the VLP models will give a score, indicating how well two modalities are aligned; maximizing such scores will generate sentences that VLP models believe are of good alignment. Analyzing these sentences thus will reveal in what way different modalities are aligned and how well these alignments are in VLP models. We apply our probing method to five popular VLP models, including UNITER, ROSITA, ViLBERT, CLIP, and LXMERT, and provide a comprehensive analysis of the generated captions guided by these models. Our results show that VLP models (1) focus more on just aligning objects with visual words, while neglecting global semantics; (2) prefer fixed sentence patterns, thus ignoring more important textual information including fluency and grammar; and (3) deem the captions with more visual words are better aligned with images. These findings indicate that VLP models still have weaknesses in cross-modal semantics alignment and we hope this work will draw researchers' attention to such problems when designing a new VLP model. | null | null | 10.18653/v1/2022.findings-emnlp.421 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,933 |
inproceedings | xie-etal-2022-hidden | Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.422/ | Xie, Shuo and Qiu, Jiahao and Pasad, Ankita and Du, Li and Qu, Qing and Mei, Hongyuan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5750--5768 | While transferring a pretrained language model, common approaches conventionally attach their task-specific classifiers to the top layer and adapt all the pretrained layers. We investigate whether one could make a task-specific selection on which subset of the layers to adapt and where to place the classifier. The goal is to reduce the computation cost of transfer learning methods (e.g. fine-tuning or adapter-tuning) without sacrificing its performance.We propose to select layers based on the variability of their hidden states given a task-specific corpus. We say a layer is already {\textquotedblleft}well-specialized{\textquotedblright} in a task if the within-class variability of its hidden states is low relative to the between-class variability. Our variability metric is cheap to compute and doesn`t need any training or hyperparameter tuning. It is robust to data imbalance and data scarcity. Extensive experiments on the GLUE benchmark demonstrate that selecting layers based on our metric can yield significantly stronger performance than using the same number of top layers and often match the performance of fine-tuning or adapter-tuning the entire language model. | null | null | 10.18653/v1/2022.findings-emnlp.422 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,934 |
inproceedings | andreas-2022-language | Language Models as Agent Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.423/ | Andreas, Jacob | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5769--5779 | Language models (LMs) are trained on collections of documents, written by individual human agents to achieve specific goals in the outside world. During training, LMs have access only to text of these documents, with no direct evidence of the internal states of the agents that produced them{---}a fact often used to argue that LMs are incapable of modeling goal-directed aspects of human language production and comprehension. Can LMs trained on text learn anything at all about the relationship between language and use? I argue that LMs are models of communicative intentions in a specific, narrow sense. When performing next word prediction given a textual context, an LM can infer and represent properties of an agent likely to have produced that context. These representations can in turn influence subsequent LM generation in the same way that agents' communicative intentions influence their language. I survey findings from the recent literature showing that{---}even in today`s non-robust and error-prone models{---}LMs infer and use representations of fine-grained communicative intentions and high-level beliefs and goals. Despite the limited nature of their training data, they can thus serve as building blocks for systems that communicate and act intentionally. | null | null | 10.18653/v1/2022.findings-emnlp.423 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,935 |
inproceedings | tian-song-2022-combinatory | Combinatory Grammar Tells Underlying Relevance among Entities | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.424/ | Tian, Yuanhe and Song, Yan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5780--5786 | Relation extraction (RE) is an important task in natural language processing which aims to annotate the relation between two given entities, which requires a deep understanding of the running text. To import model performance, existing approaches leverage syntactic information to facilitate the relation extraction process, where they mainly focus on dependencies among words while paying limited attention to other types of syntactic structure. Considering that combinatory categorial grammar (CCG) is a lexicalized grammatical formalism that carries the syntactic and semantic knowledge for text understanding, we propose an alternative solution for RE that takes advantage of CCG to detect the relation between entities. In doing so, we perform a multi-task learning process to learn from RE and auto-annotated CCG supertags, where an attention mechanism is performed over all input words to distinguish the important ones for RE with the attention weights guided by the supertag decoding process. We evaluate our model on two widely used English benchmark datasets (i.e., ACE2005EN and SemEval 2010 Task 8 datasets) for RE, where the effectiveness of our approach is demonstrated by the experimental results with our approach achieving state-of-the-art performance on both datasets. | null | null | 10.18653/v1/2022.findings-emnlp.424 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,936 |
inproceedings | chen-etal-2022-leveraging-open | Leveraging Open Data and Task Augmentation to Automated Behavioral Coding of Psychotherapy Conversations in Low-Resource Scenarios | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.425/ | Chen, Zhuohao and Flemotomos, Nikolaos and Imel, Zac and Atkins, David and Narayanan, Shrikanth | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5787--5795 | In psychotherapy interactions, the quality of a session is assessed by codifying the communicative behaviors of participants during the conversation through manual observation and annotation. Developing computational approaches for automated behavioral coding can reduce the burden on human coders and facilitate the objective evaluation of the intervention. In the real world, however, implementing such algorithms is associated with data sparsity challenges since privacy concerns lead to limited available in-domain data. In this paper, we leverage a publicly available conversation-based dataset and transfer knowledge to the low-resource behavioral coding task by performing an intermediate language model training via meta-learning. We introduce a task augmentation method to produce a large number of {\textquotedblleft}analogy tasks{\textquotedblright} {---} tasks similar to the target one {---} and demonstrate that the proposed framework predicts target behaviors more accurately than all the other baseline models. | null | null | 10.18653/v1/2022.findings-emnlp.425 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,937 |
inproceedings | wang-etal-2022-learning-detect | Learning to Detect Noisy Labels Using Model-Based Features | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.426/ | Wang, Zhihao and Lin, Zongyu and Wen, Junjie and Chen, Xianxin and Liu, Peiqi and Zheng, Guidong and Chen, Yujun and Yang, Zhilin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5796--5808 | Label noise is ubiquitous in various machine learning scenarios such as self-labeling with model predictions and erroneous data annotation. Many existing approaches are based on heuristics such as sample losses, which might not be flexible enough to achieve optimal solutions. Meta learning based methods address this issue by learning a data selection function, but can be hard to optimize. In light of these pros and cons, we propose SENT (Selection-Enhanced Noisy label Training) that does not rely on meta learning while having the flexibility of being data-driven. SENT transfers the noise distribution to a clean set and trains a model to distinguish noisy labels from clean ones using model-based features. Empirically, on a wide range of tasks including text classification and speech recognition, SENT improves performance over strong baselines under the settings of self-training and label corruption. | null | null | 10.18653/v1/2022.findings-emnlp.426 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,938 |
inproceedings | garg-etal-2022-keyphrase | Keyphrase Generation Beyond the Boundaries of Title and Abstract | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.427/ | Garg, Krishna and Ray Chowdhury, Jishnu and Caragea, Cornelia | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5809--5821 | Keyphrase generation aims at generating important phrases (keyphrases) that best describe a given document. In scholarly domains, current approaches have largely used only the title and abstract of the articles to generate keyphrases. In this paper, we comprehensively explore whether the integration of additional information from the full text of a given article or from semantically similar articles can be helpful for a neural keyphrase generation model or not. We discover that adding sentences from the full text, particularly in the form of the extractive summary of the article can significantly improve the generation of both types of keyphrases that are either present or absent from the text. Experimental results with three widely used models for keyphrase generation along with one of the latest transformer models suitable for longer documents, Longformer Encoder-Decoder (LED) validate the observation. We also present a new large-scale scholarly dataset FullTextKP for keyphrase generation. Unlike prior large-scale datasets, FullTextKP includes the full text of the articles along with the title and abstract. We release the source code at https://github.com/kgarg8/FullTextKP. | null | null | 10.18653/v1/2022.findings-emnlp.427 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,939 |
inproceedings | yoshida-oseki-2022-composition | Composition, Attention, or Both? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.428/ | Yoshida, Ryo and Oseki, Yohei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5822--5834 | In this paper, we propose a novel architecture called Composition Attention Grammars (CAGs) that recursively compose subtrees into a single vector representation with a composition function, and selectively attend to previous structural information with a self-attention mechanism. We investigate whether these components{---}the composition function and the self-attention mechanism{---}can both induce human-like syntactic generalization. Specifically, we train language models (LMs) with and without these two components with the model sizes carefully controlled, and evaluate their syntactic generalization performance against six test circuits on the SyntaxGym benchmark. The results demonstrated that the composition function and the self-attention mechanism both play an important role to make LMs more human-like, and closer inspection of linguistic phenomenon implied that the composition function allowed syntactic features, but not semantic features, to percolate into subtree representations. | null | null | 10.18653/v1/2022.findings-emnlp.428 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,940 |
inproceedings | chiang-etal-2022-cdgp | {CDGP}: Automatic Cloze Distractor Generation based on Pre-trained Language Model | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.429/ | Chiang, Shang-Hsuan and Wang, Ssu-Cheng and Fan, Yao-Chung | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5835--5840 | Manually designing cloze test consumes enormous time and efforts. The major challenge lies in wrong option (distractor) selection. Having carefully-design distractors improves the effectiveness of learner ability assessment. As a result, the idea of automatically generating cloze distractor is motivated. In this paper, we investigate cloze distractor generation by exploring the employment of pre-trained language models (PLMs) as an alternative for candidate distractor generation. Experiments show that the PLM-enhanced model brings a substantial performance improvement. Our best performing model advances the state-of-the-art result from 14.94 to 34.17 (NDCG@10 score). Our code and dataset is available at https://github.com/AndyChiangSH/CDGP. | null | null | 10.18653/v1/2022.findings-emnlp.429 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,941 |
inproceedings | luo-etal-2022-g3 | G3: Geolocation via Guidebook Grounding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.430/ | Luo, Grace and Biamby, Giscard and Darrell, Trevor and Fried, Daniel and Rohrbach, Anna | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5841--5853 | We demonstrate how language can improve geolocation: the task of predicting the location where an image was taken. Here we study explicit knowledge from human-written guidebooks that describe the salient and class-discriminative visual features humans use for geolocation. We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game. Our approach predicts a country for each image by attending over the clues automatically extracted from the guidebook. Supervising attention with country-level pseudo labels achieves the best performance. Our approach substantially outperforms a state-of-the-art image-only geolocation method, with an improvement of over 5{\%} in Top-1 accuracy. Our dataset and code can be found at https://github.com/g-luo/geolocation{\_}via{\_}guidebook{\_}grounding. | null | null | 10.18653/v1/2022.findings-emnlp.430 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,942 |
inproceedings | he-etal-2022-controlling | Controlling Bias Exposure for Fair Interpretable Predictions | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.431/ | He, Zexue and Wang, Yu and McAuley, Julian and Majumder, Bodhisattwa Prasad | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5854--5866 | Recent work on reducing bias in NLP models usually focuses on protecting or isolating information related to a sensitive attribute (like gender or race). However, when sensitive information is semantically entangled with the task information of the input, e.g., gender information is predictive for a profession, a fair trade-off between task performance and bias mitigation is difficult to achieve. Existing approaches perform this trade-off by eliminating bias information from the latent space, lacking control over how much bias is necessarily required to be removed. We argue that a favorable debiasing method should use sensitive information {\textquoteleft}fairly', rather than blindly eliminating it (Caliskan et al., 2017; Sun et al., 2019; Bogen et al., 2020). In this work, we provide a novel debiasing algorithm by adjustingthe predictive model`s belief to (1) ignore the sensitive information if it is not useful for the task; (2) use sensitive information minimally as necessary for the prediction (while also incurring a penalty). Experimental results on two text classification tasks (influenced by gender) and an open-ended generation task (influenced by race) indicate that our model achieves a desirable trade-off between debiasing and task performance along with producing debiased rationales as evidence. | null | null | 10.18653/v1/2022.findings-emnlp.431 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,943 |
inproceedings | sun-etal-2022-investigating | Investigating the Benefits of Free-Form Rationales | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.432/ | Sun, Jiao and Swayamdipta, Swabha and May, Jonathan and Ma, Xuezhe | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5867--5882 | Free-form rationales aim to aid model interpretability by supplying the background knowledge that can help understand model decisions. Crowdsourced rationales are provided for commonsense QA instances in popular datasets such as CoS-E and ECQA, but their utility remains under-investigated. We present human studies which show that ECQA rationales indeed provide additional background information to understand a decision, while over 88{\%} of CoS-E rationales do not. Inspired by this finding, we ask: can the additional context provided by free-form rationales benefit models, similar to human users? We investigate the utility of rationales as an additional source of supervision, by varying the quantity and quality of rationales during training. After controlling for instances where rationales leak the correct answer while not providing additional background knowledge, we find that incorporating only 5{\%} of rationales during training can boost model performance by 47.22{\%} for CoS-E and 57.14{\%} for ECQA during inference. Moreover, we also show that rationale quality matters: compared to crowdsourced rationales, T5-generated rationales provide not only weaker supervision to models, but are also not helpful for humans in aiding model interpretability. | null | null | 10.18653/v1/2022.findings-emnlp.432 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,944 |
inproceedings | fang-zhang-2022-data | Data-Efficient Concept Extraction from Pre-trained Language Models for Commonsense Explanation Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.433/ | Fang, Yanbo and Zhang, Yongfeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5883--5893 | Predicting the key explanation concept is essential for generating commonsense explanations. This paper introduces a method to predict the concept from pre-trained language models for commonsense explanation generation. Our experiment found that adopting a language model as the concept extractor and fine-tuning it with 20{\%} training data can improve the quality and accuracy of the generated explanations over multiple evaluation metrics. Compared with conventional methods that search concepts over knowledge graphs, our method does not require the preparation and training models to search through knowledge graphs. To better understand the results from pre-trained language models, we also designed a metric to evaluate the retrieved concepts. Through analysis and experiments, we show the correlation between this metric and the performance of the generators, and we also show the importance of attaching concepts for generating high-quality sentences. | null | null | 10.18653/v1/2022.findings-emnlp.433 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,945 |
inproceedings | ngo-etal-2022-unsupervised | Unsupervised Domain Adaptation for Joint Information Extraction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.434/ | Ngo, Nghia and Min, Bonan and Nguyen, Thien | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5894--5905 | Joint Information Extraction (JIE) aims to jointly solve multiple tasks in the Information Extraction pipeline (e.g., entity mention, event trigger, relation, and event argument extraction). Due to their ability to leverage task dependencies and avoid error propagation, JIE models have presented state-of-the-art performance for different IE tasks. However, an issue with current JIE methods is that they only focus on standard supervised learning setting where training and test data comes from the same domain. Cross-domain/domain adaptation learning with training and test data in different domains have not been explored for JIE, thus hindering the application of this technology to different domains in practice. To address this issue, our work introduces the first study to evaluate performance of JIE models in unsupervised domain adaptation setting. In addition, we present a novel method to induce domain-invariant representations for the tasks in JIE, called Domain Adaptation for Joint Information Extraction (DA4JIE). In DA4JIE, we propose an Instance-relational Domain Adaptation mechanism that seeks to align representations of task instances in JIE across domains through a generalized version of domain-adversarial learning approach. We further devise a Context-invariant Structure Learning technique to filter domain-specialized contextual information from induced representations to boost performance of JIE models in new domains. Extensive experiments and analyses demonstrate that DA4JIE can significantly improve out-of-domain performance for current state-of-the-art JIE systems for all IE tasks. | null | null | 10.18653/v1/2022.findings-emnlp.434 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,946 |
inproceedings | wang-etal-2022-foiling | Foiling Training-Time Attacks on Neural Machine Translation Systems | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.435/ | Wang, Jun and He, Xuanli and Rubinstein, Benjamin and Cohn, Trevor | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5906--5913 | Neural machine translation (NMT) systems are vulnerable to backdoor attacks, whereby an attacker injects poisoned samples into training such that a trained model produces malicious translations. Nevertheless, there is little research on defending against such backdoor attacks in NMT. In this paper, we first show that backdoor attacks that have been successful in text classification are also effective against machine translation tasks. We then present a novel defence method that exploits a key property of most backdoor attacks: namely the asymmetry between the source and target language sentences, which is used to facilitate malicious text insertions, substitutions and suchlike. Our technique uses word alignment coupled with language model scoring to detect outlier tokens, and thus can find and filter out training instances which may contain backdoors. Experimental results demonstrate that our technique can significantly reduce the success of various attacks by up to 89.0{\%}, while not affecting predictive accuracy. | null | null | 10.18653/v1/2022.findings-emnlp.435 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,947 |
inproceedings | sampat-etal-2022-learning | Learning Action-Effect Dynamics for Hypothetical Vision-Language Reasoning Task | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.436/ | Sampat, Shailaja Keyur and Banerjee, Pratyay and Yang, Yezhou and Baral, Chitta | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5914--5924 | {\textquoteleft}Actions' play a vital role in how humans interact with the world. Thus, autonomous agents that would assist us in everyday tasks also require the capability to perform {\textquoteleft}Reasoning about Actions {\&} Change' (RAC). This has been an important research direction in Artificial Intelligence (AI) in general, but the study of RAC with visual and linguistic inputs is relatively recent. The CLEVR{\_}HYP (Sampat et. al., 2021) is one such testbed for hypothetical vision-language reasoning with actions as the key focus. In this work, we propose a novel learning strategy that can improve reasoning about the effects of actions. We implement an encoder-decoder architecture to learn the representation of actions as vectors. We combine the aforementioned encoder-decoder architecture with existing modality parsers and a scene graph question answering model to evaluate our proposed system on the CLEVR{\_}HYP dataset. We conduct thorough experiments to demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability. | null | null | 10.18653/v1/2022.findings-emnlp.436 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,948 |
inproceedings | wang-etal-2022-named | Named Entity and Relation Extraction with Multi-Modal Retrieval | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.437/ | Wang, Xinyu and Cai, Jiong and Jiang, Yong and Xie, Pengjun and Tu, Kewei and Lu, Wei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5925--5936 | Multi-modal named entity recognition (NER) and relation extraction (RE) aim to leverage relevant image information to improve the performance of NER and RE. Most existing efforts largely focused on directly extracting potentially useful information from images (such as pixel-level features, identified objects, and associated captions).However, such extraction processes may not be knowledge aware, resulting in information that may not be highly relevant.In this paper, we propose a novel Multi-modal Retrieval based framework (MoRe).MoRe contains a text retrieval module and an image-based retrieval module, which retrieve related knowledge of the input text and image in the knowledge corpus respectively.Next, the retrieval results are sent to the textual and visual models respectively for predictions.Finally, a Mixture of Experts (MoE) module combines the predictions from the two models to make the final decision.Our experiments show that both our textual model and visual model can achieve state-of-the-art performance on four multi-modal NER datasets and one multi-modal RE dataset.With MoE, the model performance can be further improved and our analysis demonstrates the benefits of integrating both textual and visual cues for such tasks. | null | null | 10.18653/v1/2022.findings-emnlp.437 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,949 |
inproceedings | dong-etal-2022-calibrating | Calibrating Factual Knowledge in Pretrained Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.438/ | Dong, Qingxiu and Dai, Damai and Song, Yifan and Xu, Jingjing and Sui, Zhifang and Li, Lei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5937--5947 | Previous literature has proved that Pretrained Language Models (PLMs) can store factual knowledge. However, we find that facts stored in the PLMs are not always correct. It motivates us to explore a fundamental question: How do we calibrate factual knowledge in PLMs without re-training from scratch? In this work, we propose a simple and lightweight method CaliNet to achieve this goal. To be specific, we first detect whether PLMs can learn the right facts via a contrastive score between right and fake facts. If not, we then use a lightweight method to add and adapt new parameters to specific factual texts. Experiments on the knowledge probing task show the calibration effectiveness and efficiency. In addition, through closed-book question answering, we find that the calibrated PLM possesses knowledge generalization ability after finetuning.Beyond the calibration performance, we further investigate and visualize the knowledge calibration mechanism. | null | null | 10.18653/v1/2022.findings-emnlp.438 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,950 |
inproceedings | chen-etal-2022-mcpg | {MCPG}: A Flexible Multi-Level Controllable Framework for Unsupervised Paraphrase Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.439/ | Chen, Yi and Jiang, Haiyun and Liu, Lemao and Wang, Rui and Shi, Shuming and Xu, Ruifeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5948--5958 | We present MCPG: a simple and effectiveapproach for controllable unsupervised paraphrase generation, which is also flexible toadapt to specific domains without extra training. MCPG is controllable in different levels: local lexicons, global semantics, and universal styles. The unsupervised paradigm ofMCPG combines factual keywords and diversified semantic embeddings as local lexical andglobal semantic constraints. The semantic embeddings are diversified by standard dropout,which is exploited for the first time to increaseinference diversity by us. Moreover, MCPGis qualified with good domain adaptability byadding a transfer vector as a universal style constraint, which is refined from the exemplars retrieved from the corpus of the target domain in atraining-free way. Extensive experiments showthat MCPG outperforms state-of-the-art unsupervised baselines by a margin. Meanwhile,our domain-adapted MCPG also achieves competitive performance with strong supervisedbaselines even without training. | null | null | 10.18653/v1/2022.findings-emnlp.439 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,951 |
inproceedings | yao-etal-2022-wordties | {W}ord{T}ies: Measuring Word Associations in Language Models via Constrained Sampling | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.440/ | Yao, Peiran and Renwick, Tobias and Barbosa, Denilson | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5959--5970 | Word associations are widely used in psychology to provide insights on how humans perceive and understand concepts. Comparing word associations in language models (LMs) to those generated by human subjects can serve as a proxy to uncover embedded lexical and commonsense knowledge in language models. While much helpful work has been done applying direct metrics, such as cosine similarity, to help understand latent spaces, these metrics are symmetric, while human word associativity is asymmetric. We propose WordTies, an algorithm based on constrained sampling from LMs, which allows an asymmetric measurement of associated words, given a cue word as the input. Comparing to existing methods, word associations found by this method share more overlap with associations provided by humans, and observe the asymmetric property of human associations. To examine possible reasons behind associations, we analyze the knowledge and reasoning behind the word pairings as they are linked to lexical and commonsense knowledge graphs.When the knowledge about the nature of the word pairings is combined with a probability that the LM has learned that information, we have a new way to examine what information is captured in LMs. | null | null | 10.18653/v1/2022.findings-emnlp.440 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,952 |
inproceedings | awadalla-etal-2022-exploring | Exploring The Landscape of Distributional Robustness for Question Answering Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.441/ | Awadalla, Anas and Wortsman, Mitchell and Ilharco, Gabriel and Min, Sewon and Magnusson, Ian and Hajishirzi, Hannaneh and Schmidt, Ludwig | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5971--5987 | We conduct a large empirical evaluation to investigate the landscape of distributional robustness in question answering. Our investigation spans over 350 models and 16 question answering datasets, including a diverse set of architectures, model sizes, and adaptation methods (e.g., fine-tuning, adapter tuning, in-context learning, etc.). We find that, in many cases, model variations do not affect robustness and in-distribution performance alone determines out-of-distribution performance.Moreover, our findings indicate thati) zero-shot and in-context learning methods are more robust to distribution shifts than fully fine-tuned models;ii) few-shot prompt fine-tuned models exhibit better robustness than few-shot fine-tuned span prediction models;iii) parameter-efficient and robustness enhancing training methods provide no significant robustness improvements.In addition, we publicly release all evaluations to encourage researchers to further analyze robustness trends for question answering models. | null | null | 10.18653/v1/2022.findings-emnlp.441 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,953 |
inproceedings | zhao-etal-2022-collaborative | Collaborative Reasoning on Multi-Modal Semantic Graphs for Video-Grounded Dialogue Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.442/ | Zhao, Xueliang and Wang, Yuxuan and Tao, Chongyang and Wang, Chenshuo and Zhao, Dongyan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5988--5998 | We study video-grounded dialogue generation, where a response is generated based on the dialogue context and the associated video. The primary challenges of this task lie in (1) the difficulty of integrating video data into pre-trained language models (PLMs) which presents obstacles to exploiting the power of large-scale pre-training; and (2) the necessity of taking into account the complementarity of various modalities throughout the reasoning process. Although having made remarkable progress in video-grounded dialogue generation, existing methods still fall short when it comes to integrating with PLMs in a way that allows information from different modalities to complement each other. To alleviate these issues, we first propose extracting pertinent information from videos and turning it into reasoning paths that are acceptable to PLMs. Additionally, we propose a multi-agent reinforcement learning method to collaboratively perform reasoning on different modalities (i.e., video and dialogue context). Empirical experiment results on two public datasets indicate that the proposed model can significantly outperform state-of-the-art models by large margins on both automatic and human evaluations. | null | null | 10.18653/v1/2022.findings-emnlp.442 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,954 |
inproceedings | mittal-etal-2022-partitioned | Partitioned Gradient Matching-based Data Subset Selection for Compute-Efficient Robust {ASR} Training | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.443/ | Mittal, Ashish and Sivasubramanian, Durga and Iyer, Rishabh and Jyothi, Preethi and Ramakrishnan, Ganesh | Findings of the Association for Computational Linguistics: EMNLP 2022 | 5999--6010 | Training state-of-the-art ASR systems such as RNN-T often has a high associated financial and environmental cost. Training with a subset of training data could mitigate this problem if the subset selected could achieve on-par performance with training with the entire dataset. Although there are many data subset selection(DSS) algorithms, direct application to the RNN-T is difficult, especially the DSS algorithms that are adaptive and use learning dynamics such as gradients, as RNN-T tend to have gradients with a significantly larger memory footprint. In this paper, we propose Partitioned Gradient Matching (PGM) a novel distributable DSS algorithm, suitable for massive datasets like those used to train RNN-T. Through extensive experiments on Librispeech 100H and Librispeech 960H, we show that PGM achieves between 3x to 6x speedup with only a very small accuracy degradation (under 1{\%} absolute WER difference). In addition, we demonstrate similar results for PGM even in settings where the training data is corrupted with noise. | null | null | 10.18653/v1/2022.findings-emnlp.443 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,955 |
inproceedings | zhu-etal-2022-adaptive | Adaptive Graph Convolutional Network for Knowledge Graph Entity Alignment | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.444/ | Zhu, Renbo and Luo, Xukun and Ma, Meng and Wang, Ping | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6011--6021 | Entity alignment (EA) aims to identify equivalent entities from different Knowledge Graphs (KGs), which is a fundamental task for integrating KGs. Throughout its development, Graph Convolutional Network (GCN) has become one of the mainstream methods for EA. These GCN-based methods learn the representations of entities from two KGs by message passing mechanism and then make alignments via measuring the similarity between entity embeddings. The key idea that GCN works in EA is that entities with similar neighbor structures are highly likely to be aligned. However, the noisy neighbors of entities transfer invalid information, drown out equivalent information, lead to inaccurate entity embeddings, and finally reduce the performance of EA. Based on the Sinkhorn algorithm, we design a reliability measure for potential equivalent entities and propose Adaptive Graph Convolutional Network to deal with neighbor noises in GCN. During the training, the network dynamically updates the adaptive weights of relation triples to weaken the propagation of noises. While calculating entity similarity, it comprehensively considers the self-similarity and neighborhood similarity of the entity pair to alleviate the influence of noises. Furthermore, we design a straightforward but efficient strategy to construct pseudo alignments for unsupervised EA. Extensive experiments on benchmark datasets demonstrate that our framework outperforms the state-of-the-art methods in both supervised and unsupervised settings. | null | null | 10.18653/v1/2022.findings-emnlp.444 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,956 |
inproceedings | aggarwal-etal-2022-towards | Towards Robust {NLG} Bias Evaluation with Syntactically-diverse Prompts | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.445/ | Aggarwal, Arshiya and Sun, Jiao and Peng, Nanyun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6022--6032 | We present a robust methodology for evaluating biases in natural language generation(NLG) systems. Previous works use fixed hand-crafted prefix templates with mentions of various demographic groups to prompt models to generate continuations for bias analysis. These fixed prefix templates could themselves be specific in terms of styles or linguistic structures, which may lead to unreliable fairness conclusions that are not representative of the general trends from tone varying prompts. To study this problem, we paraphrase the prompts with different syntactic structures and use these to evaluate demographic bias in NLG systems. Our results suggest similar overall bias trends but some syntactic structures lead to contradictory conclusions compared to past works. We show that our methodology is more robust and that some syntactic structures prompt more toxic content while others could prompt less biased generation. This suggests the importance of not relying on a fixed syntactic structure and using tone-invariant prompts. Introducing syntactically-diverse prompts can achieve more robust NLG (bias) evaluation. | null | null | 10.18653/v1/2022.findings-emnlp.445 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,957 |
inproceedings | yang-etal-2022-pcmsp | {P}c{MSP}: A Dataset for Scientific Action Graphs Extraction from Polycrystalline Materials Synthesis Procedure Text | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.446/ | Yang, Xianjun and Zhuo, Ya and Zuo, Julia and Zhang, Xinlu and Wilson, Stephen and Petzold, Linda | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6033--6046 | Scientific action graphs extraction from materials synthesis procedures is important for reproducible research, machine automation, and material prediction. But the lack of annotated data has hindered progress in this field. We demonstrate an effort to annotate Polycrystalline Materials Synthesis Procedures PcMSP from 305 open access scientific articles for the construction of synthesis action graphs. This is a new dataset for material science information extraction that simultaneously contains the synthesis sentences extracted from the experimental paragraphs, as well as the entity mentions and intra-sentence relations. A two-step human annotation and inter-annotator agreement study guarantee the high quality of the PcMSP corpus. We introduce four natural language processing tasks: sentence classification, named entity recognition, relation classification, and joint extraction of entities and relations. Comprehensive experiments validate the effectiveness of several state-of-the-art models for these challenges while leaving large space for improvement. We also perform the error analysis and point out some unique challenges that require further investigation. We will release our annotation scheme, the corpus, and codes to the research community to alleviate the scarcity of labeled data in this domain. | null | null | 10.18653/v1/2022.findings-emnlp.446 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,958 |
inproceedings | kwak-etal-2022-validity | Validity Assessment of Legal Will Statements as Natural Language Inference | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.447/ | Kwak, Alice and Israelsen, Jacob and Morrison, Clayton and Bambauer, Derek and Surdeanu, Mihai | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6047--6056 | This work introduces a natural language inference (NLI) dataset that focuses on the validity of statements in legal wills. This dataset is unique because: (a) each entailment decision requires three inputs: the statement from the will, the law, and the conditions that hold at the time of the testator`s death; and (b) the included texts are longer than the ones in current NLI datasets. We trained eight neural NLI models in this dataset. All the models achieve more than 80{\%} macro F1 and accuracy, which indicates that neural approaches can handle this task reasonably well. However, group accuracy, a stricter evaluation measure that is calculated with a group of positive and negative examples generated from the same statement as a unit, is in mid 80s at best, which suggests that the models' understanding of the task remains superficial. Further ablative analyses and explanation experiments indicate that all three text segments are used for prediction, but some decisions rely on semantically irrelevant tokens. This indicates that overfitting on these longer texts likely happens, and that additional research is required for this task to be solved. | null | null | 10.18653/v1/2022.findings-emnlp.447 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,959 |
inproceedings | chen-etal-2022-adaprompt | {A}da{P}rompt: Adaptive Model Training for Prompt-based {NLP} | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.448/ | Chen, Yulong and Liu, Yang and Dong, Li and Wang, Shuohang and Zhu, Chenguang and Zeng, Michael and Zhang, Yue | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6057--6068 | Prompt-based learning, with its capability to tackle zero-shot and few-shot NLP tasks, has gained much attention in the community.The main idea is to bridge the gap between NLP downstream tasks and language modeling (LM), by mapping these tasks into natural language prompts, which are then filled by pre-trained language models (PLMs).However, for prompt learning, there are still two salient gaps between NLP tasks and pretraining.First, prompt information is not necessarily sufficiently present during LM pre-training. Second, task-specific data are not necessarily well represented during pre-training. We address these two issues by proposing AdaPrompt, adaptively retrieving external data for continual pretraining of PLMs by making use of both task and prompt characteristics. In addition, we make use of knowledge in Natural Language Inference models for deriving adaptive verbalizers.Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings. In addition, in zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35{\%} relative error reduction. | null | null | 10.18653/v1/2022.findings-emnlp.448 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,960 |
inproceedings | liu-etal-2022-code | Code Generation From Flowcharts with Texts: A Benchmark Dataset and An Approach | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.449/ | Liu, Zejie and Hu, Xiaoyu and Zhou, Deyu and Li, Lin and Zhang, Xu and Xiang, Yanzheng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6069--6077 | Currently, researchers focus on generating codes from the requirement documents. However, current approaches still perform poorly on some requirements needing complex problem-solving skills. In reality, to tackle such complex requirements, instead of directly translating requirement documents into codes, software engineers write codes via unified modeling language diagrams, such as flowcharts, an intermediate tool to analyze and visualize the system. Therefore, we propose a new source code generation task, that is, to generate source code from flowcharts with texts. We manually construct a benchmark dataset containing 320 flowcharts with their corresponding source codes. Obviously, it is not straightforward to employ the current approaches for the new source code generation task since (1) the flowchart is a graph that contains various structures, including loop, selection, and others which is different from texts; (2) the connections between nodes in the flowchart are abundant and diverse which need to be carefully handled. To solve the above problems, we propose a two-stage code generation model. In the first stage, a structure recognition algorithm is employed to transform the flowchart into pseudo-code containing the structural conventions of a typical programming language such as while, if. In the second stage, a code generation model is employed to convert the pseudo-code into code. Experimental results show that the proposed approach can achieve some improvement over the baselines. | null | null | 10.18653/v1/2022.findings-emnlp.449 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,961 |
inproceedings | zhou-etal-2022-focus | Focus! Relevant and Sufficient Context Selection for News Image Captioning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.450/ | Zhou, Mingyang and Luo, Grace and Rohrbach, Anna and Yu, Zhou | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6078--6088 | News Image Captioning requires describing an image by leveraging additional context derived from a news article. Previous works only coarsely leverage the article to extract the necessary context, which makes it challenging for models to identify relevant events and named entities. In our paper, we first demonstrate that by combining more fine-grained context that captures the key named entities (obtained via an oracle) and the global context that summarizes the news, we can dramatically improve the model`s ability to generate accurate news captions. This begs the question, how to automatically extract such key entities from an image? We propose to use pre-trained vision and language retrieval model CLIP to localize the visually grounded entities in the news article, and then capture the non-visual entities via a open relation extraction model. Our experiments demonstrate that by simply selecting better context from the article, we can significantly improve the performance of existing models and achieve the new state-of-the-art performance on multiple benchmarks. | null | null | 10.18653/v1/2022.findings-emnlp.450 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,962 |
inproceedings | peper-wang-2022-generative | Generative Aspect-Based Sentiment Analysis with Contrastive Learning and Expressive Structure | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.451/ | Peper, Joseph and Wang, Lu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6089--6095 | Generative models have demonstrated impressive results on Aspect-based Sentiment Analysis (ABSA) tasks, particularly for the emerging task of extracting Aspect-Category-Opinion-Sentiment (ACOS) quadruples. However, these models struggle with implicit sentiment expressions, which are commonly observed in opinionated content such as online reviews. In this work, we introduce GEN-SCL-NAT, which consists of two techniques for improved structured generation for ACOS quadruple extraction. First, we propose GEN-SCL, a supervised contrastive learning objective that aids quadruple prediction by encouraging the model to produce input representations that are discriminable across key input attributes, such as sentiment polarity and the existence of implicit opinions and aspects. Second, we introduce GEN-NAT, a new structured generation format that better adapts pre-trained autoregressive encoder-decoder models to extract quadruples in a generative fashion. Experimental results show that GEN-SCL-NAT achieves top performance across three ACOS datasets, averaging 1.48{\%} F1 improvement, with a maximum 1.73{\%} increase on the LAPTOP-L1 dataset. Additionally, we see significant gains on implicit aspect and opinion splits that have been shown as challenging for existing ACOS approaches. | null | null | 10.18653/v1/2022.findings-emnlp.451 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,963 |
inproceedings | yang-tu-2022-semantic | Semantic Dependency Parsing with Edge {GNN}s | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.452/ | Yang, Songlin and Tu, Kewei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6096--6102 | Second-order neural parsers have obtained high accuracy in semantic dependency parsing. Inspired by the factor graph representation of second-order parsing, we propose edge graph neural networks (E-GNNs). In an E-GNN, each node corresponds to a dependency edge, and the neighbors are defined in terms of sibling, co-parent, and grandparent relationships. We conduct experiments on SemEval 2015 Task 18 English datasets, showing the superior performance of E-GNNs. | null | null | 10.18653/v1/2022.findings-emnlp.452 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,964 |
inproceedings | yang-etal-2022-explore | Explore Unsupervised Structures in Pretrained Models for Relation Extraction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.453/ | Yang, Xi and Ji, Tao and Wu, Yuanbin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6103--6117 | Syntactic trees have been widely applied in relation extraction (RE). However, since parsing qualities are not stable on different text domains and a pre-defined grammar may not well fit the target relation schema, the introduction of syntactic structures sometimes fails to improve RE performances consistently. In this work, we study RE models with various unsupervised structures mined from pre-trained language models (e.g., BERT). We show that, similar to syntactic trees, unsupervised structures are quite informative for RE task: they are able to obtain competitive (even the best) performance scores on benchmark RE datasets (ACE05, WebNLG, SciERC). We also conduct detailed analyses on their abilities of adapting new RE domains and influence of noise links in those structures. The results suggest that unsupervised structures are reasonable alternatives of commonly used syntactic structures in relation extraction models. | null | null | 10.18653/v1/2022.findings-emnlp.453 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,965 |
inproceedings | mozes-etal-2022-identifying | Identifying Human Strategies for Generating Word-Level Adversarial Examples | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.454/ | Mozes, Maximilian and Kleinberg, Bennett and Griffin, Lewis | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6118--6126 | Adversarial examples in NLP are receiving increasing research attention. One line of investigation is the generation of word-level adversarial examples against fine-tuned Transformer models that preserve naturalness and grammaticality. Previous work found that human- and machine-generated adversarial examples are comparable in their naturalness and grammatical correctness. Most notably, humans were able to generate adversarial examples much more effortlessly than automated attacks. In this paper, we provide a detailed analysis of exactly how humans create these adversarial examples. By exploring the behavioural patterns of human workers during the generation process, we identify statistically significant tendencies based on which words humans prefer to select for adversarial replacement (e.g., word frequencies, word saliencies, sentiment) as well as where and when words are replaced in an input sequence. With our findings, we seek to inspire efforts that harness human strategies for more robust NLP models. | null | null | 10.18653/v1/2022.findings-emnlp.454 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,966 |
inproceedings | gautam-etal-2022-refinement | Refinement Matters: Textual Description Needs to be Refined for Zero-shot Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.455/ | Gautam, Chandan and Parameswaran, Sethupathy and Verma, Vinay and Sundaram, Suresh and Ramasamy, Savitha | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6127--6140 | Zero-Shot Learning (ZSL) has shown great promise at the intersection of vision and language, and generative methods for ZSL are predominant owing to their efficiency. Moreover, textual description or attribute plays a critical role in transferring knowledge from the seen to unseen classes in ZSL. Such generative approaches for ZSL are very costly to train and require the class description of the unseen classes during training. In this work, we propose a non-generative gating-based attribute refinement network for ZSL, which achieves similar accuracies to generative methods of ZSL, at a much lower computational cost. The refined attributes are mapped into the visual domain through an attribute embedder, and the whole network is guided by the circle loss and the well-known softmax cross-entropy loss to obtain a robust class embedding. We refer to our approach as Circle loss guided gating-based Attribute-Refinement Network (CARNet). We perform extensive experiments on the five benchmark datasets over the various challenging scenarios viz., Generalized ZSL (GZSL), Continual GZSL (CGZSL), and conventional ZSL. We observe that the CARNet significantly outperforms recent non-generative ZSL methods and most generative ZSL methods in all three settings by a significant margin. Our extensive ablation study disentangles the performance of various components and justifies their importance. The source code is available at https://github.com/Sethup123/CARNet. | null | null | 10.18653/v1/2022.findings-emnlp.455 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,967 |
inproceedings | chen-etal-2022-sat | {SAT}: Improving Semi-Supervised Text Classification with Simple Instance-Adaptive Self-Training | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.456/ | Chen, Hui and Han, Wei and Poria, Soujanya | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6141--6146 | Self-training methods have been explored in recent years and have exhibited great performance in improving semi-supervised learning. This work presents a simple instance-adaptive self-training method (SAT) for semi-supervised text classification. SAT first generates two augmented views for each unlabeled data, and then trains a meta learner to automatically identify the relative strength of augmentations based on the similarity between the original view and the augmented views. The weakly-augmented view is fed to the model to produce a pseudo-label and the strongly-augmented view is used to train the model to predict the same pseudo-label. We conducted extensive experiments and analyses on three text classification datasets and found that with varying sizes of labeled training data, SAT consistently shows competitive performance compared to existing semi-supervised learning methods. | null | null | 10.18653/v1/2022.findings-emnlp.456 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,968 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.