entry_type (stringclasses, 4 values) | citation_key (stringlengths 10-110) | title (stringlengths 6-276, ⌀) | editor (stringclasses, 723 values) | month (stringclasses, 69 values) | year (stringdate, 1963-01-01 to 2022-01-01) | address (stringclasses, 202 values) | publisher (stringclasses, 41 values) | url (stringlengths 34-62) | author (stringlengths 6-2.07k, ⌀) | booktitle (stringclasses, 861 values) | pages (stringlengths 1-12, ⌀) | abstract (stringlengths 302-2.4k) | journal (stringclasses, 5 values) | volume (stringclasses, 24 values) | doi (stringlengths 20-39, ⌀) | n (stringclasses, 3 values) | wer (stringclasses, 1 value) | uas (null) | language (stringclasses, 3 values) | isbn (stringclasses, 34 values) | recall (null) | number (stringclasses, 8 values) | a (null) | b (null) | c (null) | k (null) | f1 (stringclasses, 4 values) | r (stringclasses, 2 values) | mci (stringclasses, 1 value) | p (stringclasses, 2 values) | sd (stringclasses, 1 value) | female (stringclasses, 0 values) | m (stringclasses, 0 values) | food (stringclasses, 1 value) | f (stringclasses, 1 value) | note (stringclasses, 20 values) | __index_level_0__ (int64, 22k-106k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | liscio-etal-2022-cross | Cross-Domain Classification of Moral Values | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.findings-naacl.209/ | Liscio, Enrico and Dondera, Alin E. and Gead{\u{a}}u, Andrei and Jonker, Catholijn M. and Murukannaiah, Pradeep K. | Findings of the Association for Computational Linguistics: NAACL 2022 | 2727--2745 | Moral values influence how we interpret and act upon the information we receive. Identifying human moral values is essential for artificially intelligent agents to co-exist with humans. Recent progress in natural language processing allows the identification of moral values in textual discourse. However, domain-specific moral rhetoric poses challenges for transferring knowledge from one domain to another. We provide the first extensive investigation on the effects of cross-domain classification of moral values from text. We compare a state-of-the-art deep learning model (BERT) in seven domains and four cross-domain settings. We show that a value classifier can generalize and transfer knowledge to novel domains, but it can introduce catastrophic forgetting. We also highlight the typical classification errors in cross-domain value classification and compare the model predictions to the annotators agreement. Our results provide insights to computer and social scientists that seek to identify moral rhetoric specific to a domain of discourse. | null | null | 10.18653/v1/2022.findings-naacl.209 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,466 |
inproceedings | feng-etal-2022-efficient | Efficient Entity Embedding Construction from Type Knowledge for {BERT} | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.1/ | Feng, Yukun and Fayazi, Amir and Rastogi, Abhinav and Okumura, Manabu | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 1--10 | Recent work has shown advantages of incorporating knowledge graphs (KGs) into BERT for various NLP tasks. One common way is to feed entity embeddings as an additional input during pre-training. There are two limitations to such a method. First, to train the entity embeddings to include rich information of factual knowledge, it typically requires access to the entire KG. This is challenging for KGs with daily changes (e.g., Wikidata). Second, it requires a large scale pre-training corpus with entity annotations and high computational cost during pre-training. In this work, we efficiently construct entity embeddings only from the type knowledge, that does not require access to the entire KG. Although the entity embeddings contain only local information, they perform very well when combined with context. Furthermore, we show that our entity embeddings, constructed from BERT`s input embeddings, can be directly incorporated into the fine-tuning phase without requiring any specialized pre-training. In addition, these entity embeddings can also be constructed on the fly without requiring a large memory footprint to store them. Finally, we propose task-specific models that incorporate our entity embeddings for entity linking, entity typing, and relation classification. Experiments show that our models have comparable or superior performance to existing models while being more resource efficient. | null | null | 10.18653/v1/2022.findings-aacl.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,468 |
inproceedings | lou-etal-2022-spa | Spa: On the Sparsity of Virtual Adversarial Training for Dependency Parsing | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.2/ | Lou, Chao and Han, Wenjuan and Tu, Kewei | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 11--21 | Virtual adversarial training (VAT) is a powerful approach to improving robustness and performance, leveraging both labeled and unlabeled data to compensate for the scarcity of labeled data. It is adopted on lots of vision and language classification tasks. However, for tasks with structured output (e.g., dependency parsing), the application of VAT is nontrivial due to the intrinsic proprieties of structures: (1) the non-sparse problem and (2) exponential complexity. Against this background, we propose the Sparse Parse Adjustment (spa) algorithm and successfully applied VAT to the dependency parsing task. spa refers to the learning algorithm which combines the graph-based dependency parsing model with VAT in an exact computational manner and enhances the dependency parser with controllable and adjustable sparsity. Empirical studies show that the TreeCRF parser optimized using outperforms other methods without sparsity regularization. | null | null | 10.18653/v1/2022.findings-aacl.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,469 |
inproceedings | dabre-sukhoo-2022-kreolmorisienmt | {K}reol{M}orisien{MT}: A Dataset for Mauritian Creole Machine Translation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.3/ | Dabre, Raj and Sukhoo, Aneerav | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 22--29 | In this paper, we describe KreolMorisienMT, a dataset for benchmarking machine translation quality of Mauritian Creole. Mauritian Creole (Kreol Morisien) is a French-based creole and a lingua franca of the Republic of Mauritius. KreolMorisienMT consists of a parallel corpus between English and Kreol Morisien, French and Kreol Morisien and a monolingual corpus for Kreol Morisien. We first give an overview of Kreol Morisien and then describe the steps taken to create the corpora. Thereafter, we benchmark Kreol Morisien {\ensuremath{\leftrightarrow}} English and Kreol Morisien {\ensuremath{\leftrightarrow}} French models leveraging pre-trained models and multilingual transfer learning. Human evaluation reveals our systems' high translation quality. | null | null | 10.18653/v1/2022.findings-aacl.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,470 |
inproceedings | sicilia-alikhani-2022-leather | {LEATHER}: A Framework for Learning to Generate Human-like Text in Dialogue | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.4/ | Sicilia, Anthony and Alikhani, Malihe | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 30--53 | Algorithms for text-generation in dialogue can be misguided. For example, in task-oriented settings, reinforcement learning that optimizes only task-success can lead to abysmal lexical diversity. We hypothesize this is due to poor theoretical understanding of the objectives in text-generation and their relation to the learning process (i.e., model training). To this end, we propose a new theoretical framework for learning to generate text in dialogue. Compared to existing theories of learning, our framework allows for analysis of the multi-faceted goals inherent to text-generation. We use our framework to develop theoretical guarantees for learners that adapt to unseen data. As an example, we apply our theory to study data-shift within a cooperative learning algorithm proposed for the GuessWhat?! visual dialogue game. From this insight, we propose a new algorithm, and empirically, we demonstrate our proposal improves both task-success and human-likeness of the generated text. Finally, we show statistics from our theory are empirically predictive of multiple qualities of the generated dialogue, suggesting our theory is useful for model-selection when human evaluations are not available. | null | null | 10.18653/v1/2022.findings-aacl.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,471 |
inproceedings | gaci-etal-2022-conceptual | Conceptual Similarity for Subjective Tags | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.5/ | Gaci, Yacine and Benatallah, Boualem and Casati, Fabio and Benabdeslem, Khalid | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 54--66 | Tagging in the context of online resources is a fundamental addition to search systems. Tags assist with the indexing, management, and retrieval of online products and services to answer complex user queries. Traditional methods of matching user queries with tags either rely on cosine similarity, or employ semantic similarity models that fail to recognize conceptual connections between tags, e.g. ambiance and music. In this work, we focus on subjective tags which characterize subjective aspects of a product or service. We propose conceptual similarity to leverage conceptual awareness when assessing similarity between tags. We also provide a simple cost-effective pipeline to automatically generate data in order to train the conceptual similarity model. We show that our pipeline generates high-quality datasets, and evaluate the similarity model both systematically and on a downstream application. Experiments show that conceptual similarity outperforms existing work when using subjective tags. | null | null | 10.18653/v1/2022.findings-aacl.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,472 |
inproceedings | sahu-2022-taskmix | {T}ask{M}ix: Data Augmentation for Meta-Learning of Spoken Intent Understanding | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.6/ | Sahu, Surya Kant | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 67--72 | Meta-Learning has emerged as a research direction to better transfer knowledge from related tasks to unseen but related tasks. However, Meta-Learning requires many training tasks to learn representations that transfer well to unseen tasks; otherwise, it leads to overfitting, and the performance degenerates to worse than Multi-task Learning. We show that a state-of-the-art data augmentation method worsens this problem of overfitting when the task diversity is low. We propose a simple method, TaskMix, which synthesizes new tasks by linearly interpolating existing tasks. We compare TaskMix against many baselines on an in-house multilingual intent classification dataset of N-Best ASR hypotheses derived from real-life human-machine telephony utterances and two datasets derived from MTOP. We show that TaskMix outperforms baselines, alleviates overfitting when task diversity is low, and does not degrade performance even when it is high. | null | null | 10.18653/v1/2022.findings-aacl.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,473 |
inproceedings | chen-van-deemter-2022-understanding | Understanding the Use of Quantifiers in {M}andarin | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.7/ | Chen, Guanyi and van Deemter, Kees | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 73--80 | We introduce a corpus of short texts in Mandarin, in which quantified expressions figure prominently. We illustrate the significance of the corpus by examining the hypothesis (known as Huang`s {\textquotedblleft}coolness{\textquotedblright} hypothesis) that speakers of East Asian Languages tend to speak more briefly but less informatively than, for example, speakers of West-European languages. The corpus results from an elicitation experiment in which participants were asked to describe abstract visual scenes. We compare the resulting corpus, called MQTUNA, with an English corpus that was collected using the same experimental paradigm. The comparison reveals that some, though not all, aspects of quantifier use support the above-mentioned hypothesis. Implications of these findings for the generation of quantified noun phrases are discussed. | null | null | 10.18653/v1/2022.findings-aacl.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,474 |
inproceedings | shen-etal-2022-representational | Does Representational Fairness Imply Empirical Fairness? | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.8/ | Shen, Aili and Han, Xudong and Cohn, Trevor and Baldwin, Timothy and Frermann, Lea | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 81--95 | NLP technologies can cause unintended harms if learned representations encode sensitive attributes of the author, or predictions systematically vary in quality across groups. Popular debiasing approaches, like adversarial training, remove sensitive information from representations in order to reduce disparate performance, however the relation between representational fairness and empirical (performance) fairness has not been systematically studied. This paper fills this gap, and proposes a novel debiasing method building on contrastive learning to encourage a latent space that separates instances based on target label, while mixing instances that share protected attributes. Our results show the effectiveness of our new method and, more importantly, show across a set of diverse debiasing methods that \textit{representational fairness does not imply empirical fairness}. This work highlights the importance of aligning and understanding the relation of the optimization objective and final fairness target. | null | null | 10.18653/v1/2022.findings-aacl.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,475 |
inproceedings | jiang-etal-2022-sehy | {SEHY}: A Simple yet Effective Hybrid Model for Summarization of Long Scientific Documents | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.9/ | Jiang, Zhihua and Yang, Junzhan and Rao, Dongning | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 96--106 | Long document (e.g., scientific papers) summarization is obtaining more and more attention in recent years. Extractive approaches attempt to choose salient sentences via understanding the whole document, but long documents cover numerous subjects with varying details and will not ease content understanding. Instead, abstractive approaches elaborate to generate related tokens while suffering from truncating the source document due to their input sizes. To this end, we propose a Simple yet Effective HYbrid approach, which we call SEHY, that exploits the discourse information of a document to select salient sections instead sentences for summary generation. On the one hand, SEHY avoids the full-text understanding; on the other hand, it retains salient information given the length limit. In particular, we design two simple strategies for training the extractor: extracting sections incrementally and based on salience-analysis. Then, we use strong abstractive models to generate the final summary. We evaluate our approach on a large-scale scientific paper dataset: arXiv. Further, we discuss how the disciplinary class (e.g., computer science, math or physics) of a scientific paper affects the performance of SEHY as its writing style indicates, which is unexplored yet in existing works. Experimental results show the effectiveness of our approach and interesting findings on arXiv and its subsets generated in this paper. | null | null | 10.18653/v1/2022.findings-aacl.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,476 |
inproceedings | bao-etal-2022-plato | {PLATO}-{XL}: Exploring the Large-scale Pre-training of Dialogue Generation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.10/ | Bao, Siqi and He, Huang and Wang, Fan and Wu, Hua and Wang, Haifeng and Wu, Wenquan and Wu, Zhihua and Guo, Zhen and Lu, Hua and Huang, Xinxian and Tian, Xin and Xu, Xinchao and Lin, Yingzhan and Niu, Zheng-Yu | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 107--118 | To explore the limit of dialogue generation pre-training, we present the models of PLATO-XL with up to 11 billion parameters, trained on both Chinese and English social media conversations. To train such large models, we adopt the architecture of unified transformer with high computation and parameter efficiency. In addition, we carry out multi-party aware pre-training to better distinguish the characteristic information in social media conversations. With such designs, PLATO-XL successfully achieves superior performances as compared to other approaches in both Chinese and English chitchat. We further explore the capacity of PLATO-XL on other conversational tasks, such as knowledge grounded dialogue and task-oriented conversation. The experimental results indicate that PLATO-XL obtains state-of-the-art results across multiple conversational tasks, verifying its potential as a foundation model of conversational AI. | null | null | 10.18653/v1/2022.findings-aacl.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,477 |
inproceedings | trye-etal-2022-hybrid | A Hybrid Architecture for Labelling Bilingual {M}{\={a}}ori-{E}nglish Tweets | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.11/ | Trye, David and Yogarajan, Vithya and K{\"onig, Jemma and Keegan, Te Taka and Bainbridge, David and Apperley, Mark | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 119--130 | Most large-scale language detection tools perform poorly at identifying M{\={a}}ori text. Moreover, rule-based and machine learning-based techniques devised specifically for the M{\={a}}ori-English language pair struggle with interlingual homographs. We develop a hybrid architecture that couples M{\={a}}ori-language orthography with machine learning models in order to annotate mixed M{\={a}}ori-English text. This architecture is used to label a new bilingual Twitter corpus at both the token (word) and tweet (sentence) levels. We use the collected tweets to show that the hybrid approach outperforms existing systems with respect to language detection of interlingual homographs and overall accuracy. We also evaluate its performance on out-of-domain data. Two interactive visualisations are provided for exploring the Twitter corpus and comparing errors across the new and existing techniques. The architecture code and visualisations are available online, and the corpus is available on request. | null | null | 10.18653/v1/2022.findings-aacl.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,478 |
inproceedings | obamuyide-johnston-2022-meta | Meta-Learning Adaptive Knowledge Distillation for Efficient Biomedical Natural Language Processing | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.12/ | Obamuyide, Abiola and Johnston, Blair | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 131--137 | There has been an increase in the number of large and high-performing models made available for various biomedical natural language processing tasks. While these models have demonstrated impressive performance on various biomedical tasks, their training and run-time costs can be computationally prohibitive. This work investigates the use of knowledge distillation, a common model compression method, to reduce the size of large models for biomedical natural language processing. We further improve the performance of knowledge distillation methods for biomedical natural language by proposing a meta-learning approach which adaptively learns parameters that enable the optimal rate of knowledge exchange between the teacher and student models from the distillation data during knowledge distillation. Experiments on two biomedical natural language processing tasks demonstrate that our proposed adaptive meta-learning approach to knowledge distillation delivers improved predictive performance over previous and recent state-of-the-art knowledge distillation methods. | null | null | 10.18653/v1/2022.findings-aacl.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,479 |
inproceedings | de-varda-marelli-2022-effects | The Effects of Surprisal across Languages: Results from Native and Non-native Reading | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.13/ | de Varda, Andrea and Marelli, Marco | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 138--144 | It is well known that the surprisal of an upcoming word, as estimated by language models, is a solid predictor of reading times (Smith and Levy, 2013). However, most of the studies that support this view are based on English and few other Germanic languages, leaving an open question as to the cross-lingual generalizability of such findings. Moreover, they tend to consider only the best-performing eye-tracking measure, which might conflate the effects of predictive and integrative processing. Furthermore, it is not clear whether prediction plays a role in non-native language processing in bilingual individuals (Gr{\"uter et al., 2014). We approach these problems at large scale, extracting surprisal estimates from mBERT, and assessing their psychometric predictive power on the MECO corpus, a cross-linguistic dataset of eye movement behavior in reading (Siegelman et al., 2022; Kuperman et al., 2020). We show that surprisal is a strong predictor of reading times across languages and fixation measurements, and that its effects in L2 are weaker with respect to L1. | null | null | 10.18653/v1/2022.findings-aacl.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,480 |
inproceedings | cho-etal-2022-assessing | Assessing How Users Display Self-Disclosure and Authenticity in Conversation with Human-Like Agents: A Case Study of Luda Lee | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.14/ | Cho, Won Ik and Kim, Soomin and Choi, Eujeong and Jeong, Younghoon | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 145--152 | There is an ongoing discussion on what makes humans more engaged when interacting with conversational agents. However, in the area of language processing, there has been a paucity of studies on how people react to agents and share interactions with others. We attack this issue by investigating the user dialogues with human-like agents posted online and aim to analyze the dialogue patterns. We construct a taxonomy to discern the users' self-disclosure in the dialogue and the communication authenticity displayed in the user posting. We annotate the in-the-wild data, examine the reliability of the proposed scheme, and discuss how the categorization can be utilized for future research and industrial development. | null | null | 10.18653/v1/2022.findings-aacl.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,481 |
inproceedings | bhushan-lee-2022-block | Block Diagram-to-Text: Understanding Block Diagram Images by Generating Natural Language Descriptors | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.15/ | Bhushan, Shreyanshu and Lee, Minho | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 153--168 | Block diagrams are very popular for representing a workflow or process of a model. Understanding block diagrams by generating summaries can be extremely useful in document summarization. It can also assist people in inferring key insights from block diagrams without requiring a lot of perceptual and cognitive effort. In this paper, we propose a novel task of converting block diagram images into text by presenting a framework called {\textquotedblleft}BloSum{\textquotedblright}. This framework extracts the contextual meaning from the images in the form of triplets that help the language model in summary generation. We also introduce a new dataset for complex computerized block diagrams, explain the dataset preparation process, and later analyze it. Additionally, to showcase the generalization of the model, we test our method with publicly available handwritten block diagram datasets. Our evaluation with different metrics demonstrates the effectiveness of our approach that outperforms other methods and techniques. | null | null | 10.18653/v1/2022.findings-aacl.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,482 |
inproceedings | ravuru-etal-2022-multi | Multi-Domain Dialogue State Tracking By Neural-Retrieval Augmentation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.16/ | Ravuru, Lohith and Ryu, Seonghan and Choi, Hyungtak and Yang, Haehun and Ko, Hyeonmok | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 169--175 | Dialogue State Tracking (DST) is a very complex task that requires precise understanding and information tracking of multi-domain conversations between users and dialogue systems. Many task-oriented dialogue systems use dialogue state tracking technology to infer users' goals from the history of the conversation. Existing approaches for DST are usually conditioned on previous dialogue states. However, the dependency on previous dialogues makes it very challenging to prevent error propagation to subsequent turns of a dialogue. In this paper, we propose Neural Retrieval Augmentation to alleviate this problem by creating a Neural Index based on dialogue context. Our NRA-DST framework efficiently retrieves dialogue context from the index built using a combination of unstructured dialogue state and structured user/system utterances. We explore a simple pipeline resulting in a retrieval-guided generation approach for training a DST model. Experiments on different retrieval methods for augmentation show that neural retrieval augmentation is the best performing retrieval method for DST. Our evaluations on the large-scale MultiWOZ dataset show that our model outperforms the baseline approaches. | null | null | 10.18653/v1/2022.findings-aacl.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,483 |
inproceedings | qi-etal-2022-takg | {T}a{KG}: A New Dataset for Paragraph-level Table-to-Text Generation Enhanced with Knowledge Graphs | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.17/ | Qi, Qianqian and Deng, Zhenyun and Zhu, Yonghua and Lee, Lia Jisoo and Witbrock, Michael and Liu, Jiamou | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 176--187 | We introduce TaKG, a new table-to-text generation dataset with the following highlights: (1) TaKG defines a long-text (paragraph-level) generation task as opposed to well-established short-text (sentence-level) generation datasets. (2) TaKG is the first large-scale dataset for this task, containing three application domains and {\textasciitilde}750,000 samples. (3) To address the divergence phenomenon, TaKG enhances table input using external knowledge graphs, extracted by a new Wikidata-based method. We then propose a new Transformer-based multimodal sequence-to-sequence architecture for TaKG that integrates two pretrained language models RoBERTa and GPT-2. Our model shows reliable performance on long-text generation across a variety of metrics, and outperforms existing models for short-text generation tasks. | null | null | 10.18653/v1/2022.findings-aacl.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,484 |
inproceedings | gao-etal-2022-revisiting | Revisiting Checkpoint Averaging for Neural Machine Translation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.18/ | Gao, Yingbo and Herold, Christian and Yang, Zijian and Ney, Hermann | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 188--196 | Checkpoint averaging is a simple and effective method to boost the performance of converged neural machine translation models. The calculation is cheap to perform and the fact that the translation improvement almost comes for free, makes it widely adopted in neural machine translation research. Despite the popularity, the method itself simply takes the mean of the model parameters from several checkpoints, the selection of which is mostly based on empirical recipes without many justifications. In this work, we revisit the concept of checkpoint averaging and consider several extensions. Specifically, we experiment with ideas such as using different checkpoint selection strategies, calculating weighted average instead of simple mean, making use of gradient information and fine-tuning the interpolation weights on development data. Our results confirm the necessity of applying checkpoint averaging for optimal performance, but also suggest that the landscape between the converged checkpoints is rather flat and not much further improvement compared to simple averaging is to be obtained. | null | null | 10.18653/v1/2022.findings-aacl.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,485 |
inproceedings | alacam-etal-2022-modeling | Modeling Referential Gaze in Task-oriented Settings of Varying Referential Complexity | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.19/ | Alacam, {\"Ozge and Ruppert, Eugen and Zarrie{\ss, Sina and Malhotra, Ganeshan and Biemann, Chris and Zarrie{\ss, Sina | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 197--210 | Referential gaze is a fundamental phenomenon for psycholinguistics and human-human communication. However, modeling referential gaze for real-world scenarios, e.g. for task-oriented communication, is lacking the well-deserved attention from the NLP community. In this paper, we address this challenging issue by proposing a novel multimodal NLP task; namely predicting when the gaze is referential. We further investigate how to model referential gaze and transfer gaze features to adapt to unseen situated settings that target different referential complexities than the training environment. We train (i) a sequential attention-based LSTM model and (ii) a multivariate transformer encoder architecture to predict whether the gaze is on a referent object. The models are evaluated on the three complexity datasets. The results indicate that the gaze features can be transferred not only among various similar tasks and scenes but also across various complexity levels. Taking the referential complexity of a scene into account is important for successful target prediction using gaze parameters especially when there is not much data for fine-tuning. | null | null | 10.18653/v1/2022.findings-aacl.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,486 |
inproceedings | han-etal-2022-automating | Automating Interlingual Homograph Recognition with Parallel Sentences | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.20/ | Han, Yi and Sasano, Ryohei and Takeda, Koichi | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 211--216 | Interlingual homographs are words that spell the same but possess different meanings across languages. Recognizing interlingual homographs from form-identical words generally needs linguistic knowledge and massive annotation work. In this paper, we propose an automatic interlingual homograph recognition method based on the cross-lingual word embedding similarity and co-occurrence of form-identical words in parallel sentences. We conduct experiments with various off-the-shelf language models coordinating with cross-lingual alignment operations and co-occurrence metrics on the Chinese-Japanese and English-Dutch language pairs. Experimental results demonstrate that our proposed method is able to make accurate and consistent predictions across languages. | null | null | 10.18653/v1/2022.findings-aacl.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,487 |
inproceedings | shekhar-etal-2022-coral | {C}o{RAL}: a Context-aware {C}roatian Abusive Language Dataset | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.21/ | Shekhar, Ravi and Karan, Vanja Mladen and Purver, Matthew | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 217--225 | In light of unprecedented increases in the popularity of the internet and social media, comment moderation has never been a more relevant task. Semi-automated comment moderation systems greatly aid human moderators by either automatically classifying the examples or allowing the moderators to prioritize which comments to consider first. However, the concept of inappropriate content is often subjective, and such content can be conveyed in many subtle and indirect ways. In this work, we propose CoRAL {--} a language and culturally aware Croatian Abusive dataset covering phenomena of implicitness and reliance on local and global context. We show experimentally that current models degrade when comments are not explicit and further degrade when language skill and context knowledge are required to interpret the comment. | null | null | 10.18653/v1/2022.findings-aacl.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,488 |
inproceedings | hirigoyen-etal-2022-copy | A Copy Mechanism for Handling Knowledge Base Elements in {SPARQL} Neural Machine Translation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.22/ | Hirigoyen, Rose and Zouaq, Amal and Reyd, Samuel | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 226--236 | Neural Machine Translation (NMT) models from English to SPARQL are a promising development for SPARQL query generation. However, current architectures are unable to integrate the knowledge base (KB) schema and handle questions on knowledge resources, classes, and properties unseen during training, rendering them unusable outside the scope of topics covered in the training set. Inspired by the performance gains in natural language processing tasks, we propose to integrate a copy mechanism for neural SPARQL query generation as a way to tackle this issue. We illustrate our proposal by adding a copy layer and a dynamic knowledge base vocabulary to two Seq2Seq architectures (CNNs and Transformers). This layer makes the models copy KB elements directly from the questions, instead of generating them. We evaluate our approach on state-of-the-art datasets, including datasets referencing unknown KB elements and measure the accuracy of the copy-augmented architectures. Our results show a considerable increase in performance on all datasets compared to non-copy architectures. | null | null | 10.18653/v1/2022.findings-aacl.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,489 |
inproceedings | buschbeck-etal-2022-multilingual | A Multilingual Multiway Evaluation Data Set for Structured Document Translation of {A}sian Languages | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.23/ | Buschbeck, Bianka and Dabre, Raj and Exel, Miriam and Huck, Matthias and Huy, Patrick and Rubino, Raphael and Tanaka, Hideki | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 237--245 | Translation of structured content is an important application of machine translation, but the scarcity of evaluation data sets, especially for Asian languages, limits progress. In this paper we present a novel multilingual multiway evaluation data set for the translation of structured documents of the Asian languages Japanese, Korean and Chinese. We describe the data set, its creation process and important characteristics, followed by establishing and evaluating baselines using the direct translation as well as detag-project approaches. Our data set is well suited for multilingual evaluation, and it contains richer annotation tag sets than existing data sets. Our results show that massively multilingual translation models like M2M-100 and mBART-50 perform surprisingly well despite not being explicitly trained to handle structured content. The data set described in this paper and used in our experiments is released publicly. | null | null | 10.18653/v1/2022.findings-aacl.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,490 |
inproceedings | dev-etal-2022-measures | On Measures of Biases and Harms in {NLP} | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.24/ | Dev, Sunipa and Sheng, Emily and Zhao, Jieyu and Amstutz, Aubrie and Sun, Jiao and Hou, Yu and Sanseverino, Mattie and Kim, Jiin and Nishi, Akihiro and Peng, Nanyun and Chang, Kai-Wei | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 246--267 | Recent studies show that Natural Language Processing (NLP) technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality. To create interventions and mitigate these biases and associated harms, it is vital to be able to detect and measure such biases. While existing works propose bias evaluation and mitigation methods for various tasks, there remains a need to cohesively understand the biases and the specific harms they measure, and how different measures compare with each other. To address this gap, this work presents a practical framework of harms and a series of questions that practitioners can answer to guide the development of bias measures. As a validation of our framework and documentation questions, we also present several case studies of how existing bias measures in NLP{---}both intrinsic measures of bias in representations and extrinsic measures of bias of downstream applications{---}can be aligned with different harms and how our proposed documentation questions facilitates more holistic understanding of what bias measures are measuring. | null | null | 10.18653/v1/2022.findings-aacl.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,491 |
inproceedings | jin-ataman-2022-logographic | Logographic Information Aids Learning Better Representations for Natural Language Inference | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.25/ | Jin, Zijian and Ataman, Duygu | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 268--273 | Statistical language models conventionally implement representation learning based on the contextual distribution of words or other formal units, whereas any information related to the logographic features of written text are often ignored, assuming they should be retrieved relying on the cooccurence statistics. On the other hand, as language models become larger and require more data to learn reliable representations, such assumptions may start to fall back, especially under conditions of data sparsity. Many languages, including Chinese and Vietnamese, use logographic writing systems where surface forms are represented as a visual organization of smaller graphemic units, which often contain many semantic cues. In this paper, we present a novel study which explores the benefits of providing language models with logographic information in learning better semantic representations. We test our hypothesis in the natural language inference (NLI) task by evaluating the benefit of computing multi-modal representations that combine contextual information with glyph information. Our evaluation results in six languages with different typology and writing systems suggest significant benefits of using multi-modal embeddings in languages with logograhic systems, especially for words with less occurence statistics. | null | null | 10.18653/v1/2022.findings-aacl.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,492 |
inproceedings | miyazaki-etal-2022-cross | Cross-domain Analysis on {J}apanese Legal Pretrained Language Models | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.26/ | Miyazaki, Keisuke and Yamada, Hiroaki and Tokunaga, Takenobu | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 274--281 | This paper investigates the pretrained language model (PLM) specialised in the Japanese legal domain. We create PLMs using different pretraining strategies and investigate their performance across multiple domains. Our findings are (i) the PLM built with general domain data can be improved by further pretraining with domain-specific data, (ii) domain-specific PLMs can learn domain-specific and general word meanings simultaneously and can distinguish them, (iii) domain-specific PLMs work better on its target domain; still, the PLMs retain the information learnt in the original PLM even after being further pretrained with domain-specific data, (iv) the PLMs sequentially pretrained with corpora of different domains show high performance for the later learnt domains. | null | null | 10.18653/v1/2022.findings-aacl.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,493 |
inproceedings | k-etal-2022-multilingual | Multilingual {C}heck{L}ist: Generation and Evaluation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.27/ | K, Karthikeyan and Bhatt, Shaily and Singh, Pankaj and Aditya, Somak and Dandapat, Sandipan and Sitaram, Sunayana and Choudhury, Monojit | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 282--295 | Multilingual evaluation benchmarks usually contain limited high-resource languages and do not test models for specific linguistic capabilities. CheckList is a template-based evaluation approach that tests models for specific capabilities. The CheckList template creation process requires native speakers, posing a challenge in scaling to hundreds of languages. In this work, we explore multiple approaches to generate Multilingual CheckLists. We device an algorithm {--}Template Extraction Algorithm (TEA) for automatically extracting target language CheckList templates from machine translated instances of a source language templates. We compare the TEA CheckLists with CheckLists created with different levels of human intervention. We further introduce metrics along the dimensions of cost, diversity, utility, and correctness to compare the CheckLists. We thoroughly analyze different approaches to creating CheckLists in Hindi. Furthermore, we experiment with 9 more different languages. We find that TEA followed by human verification is ideal for scaling Checklist-based evaluation to multiple languages while TEA gives a good estimates of model performance. We release the code of TEA and the CheckLists created at aka.ms/multilingualchecklist | null | null | 10.18653/v1/2022.findings-aacl.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,494 |
inproceedings | liu-etal-2022-part | Part Represents Whole: Improving the Evaluation of Machine Translation System Using Entropy Enhanced Metrics | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.28/ | Liu, Yilun and Tao, Shimin and Su, Chang and Zhang, Min and Zhao, Yanqing and Yang, Hao | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 296--307 | Machine translation (MT) metrics often experience poor correlations with human assessments. In terms of MT system evaluation, most metrics pay equal attentions to every sample in an evaluation set, while in human evaluation, difficult sentences often make candidate systems distinguishable via notable fluctuations in human scores, especially when systems are competitive. We find that samples with high entropy values, which though usually count less than 5{\%}, tend to play a key role in MT evaluation: when the evaluation set is shrunk to only the high-entropy portion, correlations with human assessments are actually improved. Thus, in this paper, we propose a fast and unsupervised approach to enhance MT metrics using entropy, expanding the dimension of evaluation by introducing sentence-level difficulty. A translation hypothesis with a significantly high entropy value is considered difficult and receives a large weight in aggregation of system-level scores. Experimental results on five sub-tracks in the WMT19 Metrics shared tasks show that our proposed method significantly enhanced the performance of commonly-used MT metrics in terms of system-level correlations with human assessments, even outperforming existing SOTA metrics. In particular, all enhanced metrics exhibit overall stability in correlations with human assessments in circumstances where only competitive MT systems are included, while the corresponding vanilla metrics fail to correlate with human assessments. | null | null | 10.18653/v1/2022.findings-aacl.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,495 |
inproceedings | wu-etal-2022-memformer | Memformer: A Memory-Augmented Transformer for Sequence Modeling | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.29/ | Wu, Qingyang and Lan, Zhenzhong and Qian, Kun and Gu, Jing and Geramifard, Alborz and Yu, Zhou | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 308--318 | Transformers have reached remarkable success in sequence modeling. However, these models have efficiency issues as they need to store all the history token-level representations as memory. We present Memformer, an efficient neural network for sequence modeling, that utilizes an external dynamic memory to encode and retrieve past information. Our model achieves linear time complexity and constant memory space complexity when processing long sequences. We also propose a new optimization scheme, memory replay back-propagation (MRBP), which promotes long-range back-propagation through time with a significantly reduced memory requirement. Experimental results show that Memformer has achieved comparable performance compared against the baselines by using 8.1x less memory space and 3.2x faster on inference. Analysis of the attention pattern shows that our external memory slots can encode and retain important information through timesteps. | null | null | 10.18653/v1/2022.findings-aacl.29 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,496 |
inproceedings | fang-etal-2022-open | Open-Domain Conversational Question Answering with Historical Answers | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.30/ | Fang, Hung-Chieh and Hung, Kuo-Han and Huang, Chen-Wei and Chen, Yun-Nung | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 319--326 | Open-domain conversational question answering can be viewed as two tasks: passage retrieval and conversational question answering, where the former relies on selecting candidate passages from a large corpus and the latter requires better understanding of a question with contexts to predict the answers. This paper proposes ConvADR-QA that leverages historical answers to boost retrieval performance and further achieves better answering performance. Our experiments on the benchmark dataset, OR-QuAC, demonstrate that our model outperforms existing baselines in both extractive and generative reader settings, well justifying the effectiveness of historical answers for open-domain conversational question answering. | null | null | 10.18653/v1/2022.findings-aacl.30 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,497 |
inproceedings | tomonari-etal-2022-robustness | Robustness Evaluation of Text Classification Models Using Mathematical Optimization and Its Application to Adversarial Training | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.31/ | Tomonari, Hikaru and Nishino, Masaaki and Yamamoto, Akihiro | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 327--333 | Neural networks are known to be vulnerable to adversarial examples due to slightly perturbed input data. In practical applications of neural network models, the robustness of the models against perturbations must be evaluated. However, no method can strictly evaluate their robustness in natural language domains. We therefore propose a method that evaluates the robustness of text classification models using an integer linear programming (ILP) solver by an optimization problem that identifies a minimum synonym swap that changes the classification result. Our method allows us to compare the robustness of various models in realistic time. It can also be used for obtaining adversarial examples. Because of the minimal impact on the altered sentences, adversarial examples with our method obtained high scores in human evaluations of grammatical correctness and semantic similarity for an IMDb dataset. In addition, we implemented adversarial training with the IMDb and SST2 datasets and found that our adversarial training method makes the model robust. | null | null | 10.18653/v1/2022.findings-aacl.31 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,498 |
inproceedings | li-etal-2022-herb | {HERB}: Measuring Hierarchical Regional Bias in Pre-trained Language Models | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.32/ | Li, Yizhi and Zhang, Ge and Yang, Bohao and Lin, Chenghua and Ragni, Anton and Wang, Shi and Fu, Jie | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 334--346 | Fairness has become a trending topic in natural language processing (NLP) and covers biases targeting certain social groups such as genders and religions. Yet regional bias, another long-standing global discrimination problem, remains unexplored still. Consequently, we intend to provide a study to analyse the regional bias learned by the pre-trained language models (LMs) that are broadly used in NLP tasks. While verifying the existence of regional bias in LMs, we find that the biases on regional groups can be largely affected by the corresponding geographical clustering. We accordingly propose a hierarchical regional bias evaluation method (HERB) utilising the information from the sub-region clusters to quantify the bias in the pre-trained LMs. Experiments show that our hierarchical metric can effectively evaluate the regional bias with regard to comprehensive topics and measure the potential regional bias that can be propagated to downstream tasks. Our codes are available at \url{https://github.com/Bernard-Yang/HERB}. | null | null | 10.18653/v1/2022.findings-aacl.32 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,499 |
inproceedings | montariol-etal-2022-multilingual | Multilingual Auxiliary Tasks Training: Bridging the Gap between Languages for Zero-Shot Transfer of Hate Speech Detection Models | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.33/ | Montariol, Syrielle and Riabi, Arij and Seddah, Djam{\'e} | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 347--363 | Zero-shot cross-lingual transfer learning has been shown to be highly challenging for tasks involving a lot of linguistic specificities or when a cultural gap is present between languages, such as in hate speech detection. In this paper, we highlight this limitation for hate speech detection in several domains and languages using strict experimental settings. Then, we propose to train on multilingual auxiliary tasks {--} sentiment analysis, named entity recognition, and tasks relying on syntactic information {--} to improve zero-shot transfer of hate speech detection models across languages. We show how hate speech detection models benefit from a cross-lingual knowledge proxy brought by auxiliary tasks fine-tuning and highlight these tasks' positive impact on bridging the hate speech linguistic and cultural gap between languages. | null | null | 10.18653/v1/2022.findings-aacl.33 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,500
inproceedings | oguz-etal-2022-chop | Chop and Change: Anaphora Resolution in Instructional Cooking Videos | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.34/ | Oguz, Cennet and Kruijff-Korbayova, Ivana and Vincent, Emmanuel and Denis, Pascal and van Genabith, Josef | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 364--374 | Linguistic ambiguities arising from changes in entities in action flows are a key challenge in instructional cooking videos. In particular, temporally evolving entities present rich and to date understudied challenges for anaphora resolution. For example {\textquotedblleft}oil{\textquotedblright} mixed with {\textquotedblleft}salt{\textquotedblright} is later referred to as a {\textquotedblleft}mixture{\textquotedblright}. In this paper we propose novel annotation guidelines to annotate recipes for the anaphora resolution task, reflecting change in entities. Moreover, we present experimental results for end-to-end multimodal anaphora resolution with the new annotation scheme and propose the use of temporal features for performance improvement. | null | null | 10.18653/v1/2022.findings-aacl.34 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,501 |
inproceedings | mondal-etal-2022-disabledonindiantwitter | {\textquotedblleft}{\#}{D}isabled{O}n{I}ndian{T}witter{\textquotedblright} : A Dataset towards Understanding the Expression of People with Disabilities on {I}ndian {T}witter | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.35/ | Mondal, Ishani and Kaur, Sukhnidh and Bali, Kalika and Vashistha, Aditya and Swaminathan, Manohar | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 375--386 | Twitter serves as a powerful tool for self-expression among the disabled people. To understand how disabled people in India use Twitter, we introduce a manually annotated corpus {\#}DisabledOnIndianTwitter comprising of 2,384 tweets posted by 27 female and 15 male users. These users practice diverse professions and engage in varied online discourses on disability in India. To examine patterns in their Twitter use, we propose a novel hierarchical annotation taxonomy to classify the tweets into various themes including discrimination, advocacy, and self-identification. Using these annotations, we benchmark the corpus leveraging state-of-the-art classifiers. Finally through a mixed-methods analysis on our annotated corpus, we reveal stark differences in self-expression between male and female disabled users on Indian Twitter. | null | null | 10.18653/v1/2022.findings-aacl.35 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,502 |
inproceedings | mukherjee-etal-2022-topic | Topic-aware Multimodal Summarization | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.36/ | Mukherjee, Sourajit and Jangra, Anubhav and Saha, Sriparna and Jatowt, Adam | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 387--398 | Multimodal Summarization (MS) has attracted research interest in the past few years due to the ease with which users perceive multimodal summaries. It is important for MS models to consider the topic a given target content belongs to. In the current paper, we propose a topic-aware MS system which performs two tasks simultaneously: differentiating the images into {\textquotedblleft}on-topic{\textquotedblright} and {\textquotedblleft}off-topic{\textquotedblright} categories and further utilizing the {\textquotedblleft}on-topic{\textquotedblright} images to generate multimodal summaries. The hypothesis is that, the proposed topic similarity classifier will help in generating better multimodal summary by focusing on important components of images and text which are specific to a particular topic. To develop the topic similarity classifier, we have augmented the existing popular MS data set, MSMO, with similar {\textquotedblleft}on-topic{\textquotedblright} and dissimilar {\textquotedblleft}off-topic{\textquotedblright} images for each sample. Our experimental results establish that the focus on {\textquotedblleft}on-topic{\textquotedblright} features helps in generating topic-aware multimodal summaries, which outperforms the state of the art approach by 1.7 {\%} in ROUGE-L metric. | null | null | 10.18653/v1/2022.findings-aacl.36 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,503 |
inproceedings | kar-etal-2022-arggen | {A}rg{G}en: Prompting Text Generation Models for Document-Level Event-Argument Aggregation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.37/ | Kar, Debanjana and Sarkar, Sudeshna and Goyal, Pawan | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 399--404 | Most of the existing discourse-level Information Extraction tasks have been modeled to be extractive in nature. However, we argue that extracting information from larger bodies of discourse-like documents requires more natural language understanding and reasoning capabilities. In our work, we propose the novel task of document-level event argument aggregation which generates consolidated event-arguments at a document-level with minimal loss of information. More specifically, we focus on generating precise document-level information frames in a multilingual setting using prompt-based methods. In this paper, we show the effectiveness of a prompt-based text generation approach to generate document-level argument spans in a low-resource and zero-shot setting. We also release the first of its kind multilingual event argument aggregation dataset that can be leveraged in other related multilingual text generation tasks as well: \url{https://github.com/DebanjanaKar/ArgGen}. | null | null | 10.18653/v1/2022.findings-aacl.37 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,504
inproceedings | kawasaki-etal-2022-hierarchical | Hierarchical Processing of Visual and Language Information in the Brain | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.38/ | Kawasaki, Haruka and Nishida, Satoshi and Kobayashi, Ichiro | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 405--410 | In recent years, many studies using deep learning have been conducted to elucidate the mechanism of information representation in the brain under stimuli evoked by various modalities. On the other hand, it has not yet been clarified how we humans link information of different modalities in the brain. In this study, to elucidate the relationship between visual and language information in the brain, we constructed encoding models that predict brain activity based on features extracted from the hidden layers of VGG16 for visual information and BERT for language information. We investigated the hierarchical characteristics of cortical localization and representational content of visual and semantic information in the cortex based on the brain activity predicted by the encoding model. The results showed that the cortical localization modeled by VGG16 is getting close to that of BERT as VGG16 moves to higher layers, while the representational contents differ significantly between the two modalities. | null | null | 10.18653/v1/2022.findings-aacl.38 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,505 |
inproceedings | palomino-etal-2022-differential | Differential Bias: On the Perceptibility of Stance Imbalance in Argumentation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.39/ | Palomino, Alonso and Al Khatib, Khalid and Potthast, Martin and Stein, Benno | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 411--421 | Most research on natural language processing treats bias as an absolute concept: Based on a (probably complex) algorithmic analysis, a sentence, an article, or a text is classified as biased or not. Given the fact that for humans the question of whether a text is biased can be difficult to answer or is answered in contradictory ways, we ask whether an {\textquotedblleft}absolute bias classification{\textquotedblright} is a promising goal at all. We see the problem not in the complexity of interpreting language phenomena but in the diversity of sociocultural backgrounds of the readers, which cannot be handled uniformly: To decide whether a text has crossed the proverbial line between non-biased and biased is subjective. By asking {\textquotedblleft}Is text X more [less, equally] biased than text Y?{\textquotedblright} we propose to analyze a simpler problem, which, by its construction, is rather independent of standpoints, views, or sociocultural aspects. In such a model, bias becomes a preference relation that induces a partial ordering from least biased to most biased texts without requiring a decision on where to draw the line. A prerequisite for this kind of bias model is the ability of humans to perceive relative bias differences in the first place. In our research, we selected a specific type of bias in argumentation, the stance bias, and designed a crowdsourcing study showing that differences in stance bias are perceptible when (light) support is provided through training or visual aid. | null | null | 10.18653/v1/2022.findings-aacl.39 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,506
inproceedings | landsman-etal-2022-beamr | {B}eam{R}: Beam Reweighing with Attribute Discriminators for Controllable Text Generation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.40/ | Landsman, David and Chen, Jerry Zikun and Zaidi, Hussain | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 422--437 | Recent advances in natural language processing have led to the availability of large pre-trained language models (LMs), with rich generative capabilities. Although these models are able to produce fluent and coherent text, it remains a challenge to control various attributes of the generation, including sentiment, formality, topic and many others. We propose a Beam Reweighing (BeamR) method, building on top of standard beam search, in order to control different attributes. BeamR combines any generative LM with any attribute discriminator, offering full flexibility of generation style and attribute, while the beam search backbone maintains fluency across different domains. Notably, BeamR allows practitioners to leverage pre-trained models without the need to train generative LMs together with discriminators. We evaluate BeamR in two diverse tasks: sentiment steering, and machine translation formality. Our results show that BeamR performs on par with or better than existing state-of-the-art approaches (including fine-tuned methods), and highlight the flexibility of BeamR in both causal and seq2seq language modeling tasks. | null | null | 10.18653/v1/2022.findings-aacl.40 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,507
inproceedings | xu-etal-2022-r | {R}{\&}{R}: Metric-guided Adversarial Sentence Generation | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.41/ | Xu, Lei and Cuesta-Infante, Alfredo and Berti-Equille, Laure and Veeramachaneni, Kalyan | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 438--452 | Adversarial examples are helpful for analyzing and improving the robustness of text classifiers. Generating high-quality adversarial examples is a challenging task as it requires generating fluent adversarial sentences that are semantically similar to the original sentences and preserve the original labels, while causing the classifier to misclassify them. Existing methods prioritize misclassification by maximizing each perturbation`s effectiveness at misleading a text classifier; thus, the generated adversarial examples fall short in terms of fluency and similarity. In this paper, we propose a rewrite and rollback (R{\&}R) framework for adversarial attack. It improves the quality of adversarial examples by optimizing a critique score which combines the fluency, similarity, and misclassification metrics. R{\&}R generates high-quality adversarial examples by allowing exploration of perturbations that do not have immediate impact on the misclassification metric but can improve fluency and similarity metrics. We evaluate our method on 5 representative datasets and 3 classifier architectures. Our method outperforms current state-of-the-art in attack success rate by +16.2{\%}, +12.8{\%}, and +14.0{\%} on the classifiers respectively. Code is available at \url{https://github.com/DAI-Lab/fibber} | null | null | 10.18653/v1/2022.findings-aacl.41 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,508 |
inproceedings | wang-etal-2022-simple-yet | A Simple yet Effective Learnable Positional Encoding Method for Improving Document Transformer Model | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.42/ | Wang, Guoxin and Lu, Yijuan and Cui, Lei and Lv, Tengchao and Florencio, Dinei and Zhang, Cha | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 453--463 | Positional encoding plays a key role in Transformer-based architecture, which is to indicate and embed token sequential order information. Understanding documents with unreliable reading order information is a real challenge for document Transformer models. This paper proposes a simple and effective positional encoding method, learnable sinusoidal positional encoding (LSPE), by building a learnable sinusoidal positional encoding feed-forward network. We apply LSPE to document Transformer models and pretrain them on document datasets. Then we finetune and evaluate the model performance on document understanding tasks in form, receipt, and invoice domains. Experimental results show our proposed method not only outperforms other baselines, but also demonstrates its robustness and stability on handling noisy data with incorrect order information. | null | null | 10.18653/v1/2022.findings-aacl.42 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,509 |
inproceedings | gupta-etal-2022-mmm | {MMM}: An Emotion and Novelty-aware Approach for Multilingual Multimodal Misinformation Detection | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.43/ | Gupta, Vipin and Kumari, Rina and Ashok, Nischal and Ghosal, Tirthankar and Ekbal, Asif | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 464--477 | The growth of multilingual web content in low-resource languages is becoming an emerging challenge to detect misinformation. One particular hindrance to research on this problem is the non-availability of resources and tools. Majority of the earlier works in misinformation detection are based on English content which confines the applicability of the research to a specific language only. Increasing presence of multimedia content on the web has promoted misinformation in which real multimedia content (images, videos) are used in different but related contexts with manipulated texts to mislead the readers. Detecting this category of misleading information is almost impossible without any prior knowledge. Studies say that emotion-invoking and highly novel content accelerates the dissemination of false information. To counter this problem, here in this paper, we first introduce a novel multilingual multimodal misinformation dataset that includes background knowledge (from authentic sources) of the misleading articles. Second, we propose an effective neural model leveraging novelty detection and emotion recognition to detect fabricated information. We perform extensive experiments to justify that our proposed model outperforms the state-of-the-art (SOTA) on the concerned task. | null | null | 10.18653/v1/2022.findings-aacl.43 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,510 |
inproceedings | -ekbal-2022-adversarial | Adversarial Sample Generation for Aspect based Sentiment Classification | He, Yulan and Ji, Heng and Li, Sujian and Liu, Yang and Chang, Chua-Hui | nov | 2022 | Online only | Association for Computational Linguistics | https://aclanthology.org/2022.findings-aacl.44/ | Mamta and Ekbal, Asif | Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022 | 478--492 | Deep learning models have been proven vulnerable towards small imperceptible perturbed input, known as adversarial samples, which are indiscernible by humans. Initial attacks in Natural Language Processing perturb characters or words in sentences using heuristics and synonyms-based strategies, resulting in grammatically incorrect or out-of-context sentences. Recent works attempt to generate contextual adversarial samples using a masked language model, capturing word relevance using leave-one-out (LOO). However, they lack the design to maintain the semantic coherency for aspect based sentiment analysis (ABSA) tasks. Moreover, they focused on resource-rich languages like English. We present an attack algorithm for the ABSA task by exploiting model explainability techniques to address these limitations. It does not require access to the training data, raw access to the model, or calibrating a new model. Our proposed method generates adversarial samples for a given aspect, maintaining more semantic coherency. In addition, it can be generalized to low-resource languages, which are at high risk due to resource scarcity. We show the effectiveness of the proposed attack using automatic and human evaluation. Our method outperforms the state-of-the-art methods in perturbation ratio, success rate, and semantic coherence. | null | null | 10.18653/v1/2022.findings-aacl.44 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,511
inproceedings | yang-etal-2022-logicsolver | {L}ogic{S}olver: Towards Interpretable Math Word Problem Solving with Logical Prompt-enhanced Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.1/ | Yang, Zhicheng and Qin, Jinghui and Chen, Jiaqi and Lin, Liang and Liang, Xiaodan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 1--13 | Recently, deep learning models have made great progress in MWP solving on answer accuracy. However, they are uninterpretable since they mainly rely on shallow heuristics to achieve high performance without understanding and reasoning the grounded math logic. To address this issue and make a step towards interpretable MWP solving, we first construct a high-quality MWP dataset named InterMWP which consists of 11,495 MWPs and annotates interpretable logical formulas based on algebraic knowledge as the grounded linguistic logic of each solution equation. Different from existing MWP datasets, our InterMWP benchmark asks for a solver to not only output the solution expressions but also predict the corresponding logical formulas. We further propose a novel approach with logical prompt and interpretation generation, called LogicSolver. For each MWP, our LogicSolver first retrieves some highly-correlated algebraic knowledge and then passes them to the backbone model as prompts to improve the semantic representations of MWPs. With these improved semantic representations, our LogicSolver generates corresponding solution expressions and interpretable knowledge formulas in accord with the generated solution expressions, simultaneously. Experimental results show that our LogicSolver has stronger logical formula-based interpretability than baselines while achieving higher answer accuracy with the help of logical prompts, simultaneously. The source code and dataset will be available at https://github.com/yangzhch6/InterMWP. | null | null | 10.18653/v1/2022.findings-emnlp.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,513 |
inproceedings | qu-etal-2022-commonsense | Commonsense Knowledge Salience Evaluation with a Benchmark Dataset in {E}-commerce | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.2/ | Qu, Yincen and Zhang, Ningyu and Chen, Hui and Dai, Zelin and Wang, Chengming and Wang, Xiaoyu and Chen, Qiang and Chen, Huajun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 14--27 | In e-commerce, the salience of commonsense knowledge (CSK) is beneficial for widespread applications such as product search and recommendation. For example, when users search for {\textquotedblleft}running{\textquotedblright} in e-commerce, they would like to find products highly related to running, such as {\textquotedblleft}running shoes{\textquotedblright} rather than {\textquotedblleft}shoes{\textquotedblright}. Nevertheless, many existing CSK collections rank statements solely by confidence scores, and there is no information about which ones are salient from a human perspective. In this work, we define the task of supervised salience evaluation, where given a CSK triple, the model is required to learn whether the triple is salient or not. In addition to formulating the new task, we also release a new Benchmark dataset of Salience Evaluation in E-commerce (BSEE) and hope to promote related research on commonsense knowledge salience evaluation. We conduct experiments in the dataset with several representative baseline models. The experimental results show that salience evaluation is a hard task where models perform poorly on our evaluation set. We further propose a simple but effective approach, PMI-tuning, which shows promise for solving this novel problem. Code is available in https://github.com/OpenBGBenchmark/OpenBG-CSK. | null | null | 10.18653/v1/2022.findings-emnlp.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,514 |
inproceedings | pryzant-etal-2022-automatic | Automatic Rule Induction for Efficient Semi-Supervised Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.3/ | Pryzant, Reid and Yang, Ziyi and Xu, Yichong and Zhu, Chenguang and Zeng, Michael | Findings of the Association for Computational Linguistics: EMNLP 2022 | 28--44 | Semi-supervised learning has shown promise in allowing NLP models to generalize from small amounts of labeled data. Meanwhile, pretrained transformer models act as black-box correlation engines that are difficult to explain and sometimes behave unreliably. In this paper, we propose tackling both of these challenges via Automatic Rule Induction (ARI), a simple and general-purpose framework for the automatic discovery and integration of symbolic rules into pretrained transformer models. First, we extract weak symbolic rules from low-capacity machine learning models trained on small amounts of labeled data. Next, we use an attention mechanism to integrate these rules into high-capacity pretrained transformer models. Last, the rule-augmented system becomes part of a self-training framework to boost supervision signal on unlabeled data. These steps can be layered beneath a variety of existing weak supervision and semi-supervised NLP algorithms in order to improve performance and interpretability. Experiments across nine sequence classification and relation extraction tasks suggest that ARI can improve state-of-the-art methods with no manual effort and minimal computational overhead. | null | null | 10.18653/v1/2022.findings-emnlp.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,515 |
inproceedings | song-etal-2022-improving-semantic | Improving Semantic Matching through Dependency-Enhanced Pre-trained Model with Adaptive Fusion | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.4/ | Song, Jian and Liang, Di and Li, Rumei and Li, Yuntao and Wang, Sirui and Peng, Minlong and Wu, Wei and Yu, Yongxin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 45--57 | Transformer-based pre-trained models like BERT have achieved great progress on Semantic Sentence Matching. Meanwhile, dependency prior knowledge has also shown general benefits in multiple NLP tasks. However, how to efficiently integrate dependency prior structure into pre-trained models to better model complex semantic matching relations is still unsettled. In this paper, we propose the Dependency-Enhanced Adaptive Fusion Attention (DAFA), which explicitly introduces dependency structure into pre-trained models and adaptively fuses it with semantic information. Specifically, (i) DAFA first proposes a structure-sensitive paradigm to construct a dependency matrix for calibrating attention weights. (ii) It adopts an adaptive fusion module to integrate the obtained dependency information and the original semantic signals. Moreover, DAFA reconstructs the attention calculation flow and provides better interpretability. By applying it on BERT, our method achieves state-of-the-art or competitive performance on 10 public datasets, demonstrating the benefits of adaptively fusing dependency structure in semantic matching task. | null | null | 10.18653/v1/2022.findings-emnlp.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,516 |
inproceedings | lee-thorp-ainslie-2022-sparse | Sparse Mixers: Combining {M}o{E} and Mixing to build a more efficient {BERT} | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.5/ | Lee-Thorp, James and Ainslie, Joshua | Findings of the Association for Computational Linguistics: EMNLP 2022 | 58--75 | We combine the capacity of sparsely gated Mixture-of-Experts (MoE) with the speed and stability of linear, mixing transformations to design the Sparse Mixer encoder model. Sparse Mixer slightly outperforms BERT on GLUE and SuperGLUE, but more importantly trains 65{\%} faster and runs inference 61{\%} faster. We also present a faster variant, prosaically named Fast Sparse Mixer, that marginally underperforms BERT on SuperGLUE, but trains and runs nearly twice as fast. We justify the design of these two models by carefully ablating through various mixing mechanisms, MoE configurations, and hyperparameters. Sparse Mixer overcomes many of the latency and stability concerns of MoE models and offers the prospect of serving sparse student models, without resorting to distilling them to dense variants. | null | null | 10.18653/v1/2022.findings-emnlp.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,517 |
inproceedings | zhang-li-2022-ke | {KE}-{GCL}: Knowledge Enhanced Graph Contrastive Learning for Commonsense Question Answering | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.6/ | Zhang, Lihui and Li, Ruifan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 76--87 | Commonsense question answering (CQA) aims to choose the correct answers for commonsense questions. Most existing works focus on extracting and reasoning over external knowledge graphs (KG). However, the noise in KG prevents these models from learning effective representations. In this paper, we propose a Knowledge Enhanced Graph Contrastive Learning model (KE-GCL) by incorporating the contextual descriptions of entities and adopting a graph contrastive learning scheme. Specifically, for QA pairs we represent the knowledge from KG and contextual descriptions. Then, the representations of contextual descriptions as context nodes are inserted into KG, forming the knowledge-enhanced graphs. Moreover, we design a contrastive learning method on graphs. For knowledge-enhanced graphs, we build their augmented views with an adaptive sampling strategy. After that, we reason over graphs to update their representations by scattering edges and aggregating nodes. To further improve GCL, hard graph negatives are chosen based on incorrect answers. Extensive experiments on two benchmark datasets demonstrate the effectiveness of our proposed KE-GCL, which outperforms previous methods consistently. | null | null | 10.18653/v1/2022.findings-emnlp.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,518
inproceedings | cherniavskii-etal-2022-acceptability | Acceptability Judgements via Examining the Topology of Attention Maps | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.7/ | Cherniavskii, Daniil and Tulchinskii, Eduard and Mikhailov, Vladislav and Proskurina, Irina and Kushnareva, Laida and Artemova, Ekaterina and Barannikov, Serguei and Piontkovskaya, Irina and Piontkovski, Dmitri and Burnaev, Evgeny | Findings of the Association for Computational Linguistics: EMNLP 2022 | 88--107 | The role of the attention mechanism in encoding linguistic knowledge has received special interest in NLP. However, the ability of the attention heads to judge the grammatical acceptability of a sentence has been underexplored. This paper approaches the paradigm of acceptability judgments with topological data analysis (TDA), showing that the geometric properties of the attention graph can be efficiently exploited for two standard practices in linguistics: binary judgments and linguistic minimal pairs. Topological features enhance the BERT-based acceptability classifier scores by 8{\%}-24{\%} on CoLA in three languages (English, Italian, and Swedish). By revealing the topological discrepancy between attention maps of minimal pairs, we achieve the human-level performance on the BLiMP benchmark, outperforming nine statistical and Transformer LM baselines. At the same time, TDA provides the foundation for analyzing the linguistic functions of attention heads and interpreting the correspondence between the graph features and grammatical phenomena. We publicly release the code and other materials used in the experiments. | null | null | 10.18653/v1/2022.findings-emnlp.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,519 |
inproceedings | chai-etal-2022-clip | Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.8/ | Chai, Yekun and Wang, Shuohuan and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 108--117 | Derivative-free prompt learning has emerged as a lightweight alternative to prompt tuning, which only requires model inference to optimize the prompts. However, existing work did not take full advantage of the over-parameterized characteristics of large pre-trained language models (PLMs). In this paper, we propose Clip-Tuning, a simple yet effective method that adopts diverse frozen {\textquotedblleft}thinned{\textquotedblright} networks of PLMs to obtain *a mixture of rewards* and thus advance the derivative-free prompt learning. The thinned networks consist of all the hidden units that survive a stationary dropout strategy, whose inference predictions reflect an ensemble of partial views over prompted training samples. Our method outperforms previous gradient-free prompt learning methods and achieves parity with gradient-based counterparts on seven language understanding benchmarks under few-shot settings. | null | null | 10.18653/v1/2022.findings-emnlp.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,520 |
inproceedings | li-etal-2022-soft | Soft-Labeled Contrastive Pre-Training for Function-Level Code Representation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.9/ | Li, Xiaonan and Guo, Daya and Gong, Yeyun and Lin, Yun and Shen, Yelong and Qiu, Xipeng and Jiang, Daxin and Chen, Weizhu and Duan, Nan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 118--129 | Code contrastive pre-training has recently achieved significant progress on code-related tasks. In this paper, we present \textbf{SCodeR}, a \textbf{S}oft-labeled contrastive pre-training framework with two positive sample construction methods to learn functional-level \textbf{Code} \textbf{R}epresentation. Considering the relevance between codes in a large-scale code corpus, the soft-labeled contrastive pre-training can obtain fine-grained soft-labels through an iterative adversarial manner and use them to learn better code representation. The positive sample construction is another key for contrastive pre-training. Previous works use transformation-based methods like variable renaming to generate semantically equal positive codes. However, they usually result in the generated code with a highly similar surface form, and thus mislead the model to focus on superficial code structure instead of code semantics. To encourage SCodeR to capture semantic information from the code, we utilize code comments and abstract syntax sub-trees of the code to build positive samples. We conduct experiments on four code-related tasks over seven datasets. Extensive experimental results show that SCodeR achieves new state-of-the-art performance on all of them, which illustrates the effectiveness of the proposed pre-training method. | null | null | 10.18653/v1/2022.findings-emnlp.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,521 |
inproceedings | luo-etal-2022-conditioned | Conditioned Masked Language and Image Modeling for Image-Text Dense Retrieval | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.10/ | Luo, Ziyang and Xi, Yadong and Zhang, Rongsheng and Li, GongZheng and Zhao, Zeng and Ma, Jing | Findings of the Association for Computational Linguistics: EMNLP 2022 | 130--140 | Image-text retrieval is a fundamental cross-modal task that takes image/text as a query to retrieve relevant data of another type. The large-scale two-stream pre-trained models like CLIP have achieved tremendous success in this area. They embed the images and texts into instance representations with two separate encoders, aligning them on the instance-level with contrastive learning. Beyond this, the following works adopt the fine-grained token-level interaction (Masked Language and Image Modeling) to boost performance further. However, the vanilla token-level objectives are not designed to aggregate the image-text alignment information into the instance representations, but the token representations, causing a gap between pre-training and application. To address this issue, we carefully design two novel conditioned token-level pre-training objectives, Conditioned Masked Language and Image Modeling (ConMLM and ConMIM), forcing models to aggregate the token-level alignment information into the instance representations. Combining these objectives with the instance-level contrastive learning, we propose our cross-modal dense retrieval framework, Conditioned Language-Image Pre-training (ConLIP). Experimental results on two popular cross-modal retrieval benchmarks (MSCOCO and Flickr30k) reveal the effectiveness of our methods. | null | null | 10.18653/v1/2022.findings-emnlp.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,522
inproceedings | papi-etal-2022-simultaneous | Does Simultaneous Speech Translation need Simultaneous Models? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.11/ | Papi, Sara and Gaido, Marco and Negri, Matteo and Turchi, Marco | Findings of the Association for Computational Linguistics: EMNLP 2022 | 141--153 | In simultaneous speech translation (SimulST), finding the best trade-off between high output quality and low latency is a challenging task. To meet the latency constraints posed by different application scenarios, multiple dedicated SimulST models are usually trained and maintained, generating high computational costs. In this paper, also motivated by the increased sensitivity towards sustainable AI, we investigate whether a single model trained offline can serve both offline and simultaneous applications under different latency regimes without additional training or adaptation. Experiments on en-{\ensuremath{>}}de and en-{\ensuremath{>}}es show that, aside from facilitating the adoption of well-established offline architectures and training strategies without affecting latency, offline training achieves similar or better quality compared to the standard SimulST training protocol, also being competitive with the state-of-the-art system. | null | null | 10.18653/v1/2022.findings-emnlp.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,523
inproceedings | dinh-etal-2022-utilizing | Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.12/ | Dinh, Tuan and Sohn, Jy-yong and Rajput, Shashank and Ossowski, Timothy and Ming, Yifei and Hu, Junjie and Papailiopoulos, Dimitris and Lee, Kangwook | Findings of the Association for Computational Linguistics: EMNLP 2022 | 154--168 | Word translation without parallel corpora has become feasible, rivaling the performance of supervised methods. Recent findings have shown the improvement in accuracy and robustness of unsupervised word translation (UWT) by utilizing visual observations, which are universal representations across languages. Our work investigates the potential of using not only visual observations but also pretrained language-image models for enabling a more efficient and robust UWT. We develop a novel UWT method dubbed Word Alignment using Language-Image Pretraining (WALIP), leveraging visual observations via the shared image-text embedding space of CLIPs (Radford et al., 2021). WALIP has a two-step procedure. First, we retrieve word pairs with high confidences of similarity, computed using our proposed image-based fingerprints, which define the initial pivot for the alignment. Second, we apply our robust Procrustes algorithm to estimate the linear mapping between two embedding spaces, which iteratively corrects and refines the estimated alignment. Our extensive experiments show that WALIP improves upon the state-of-the-art performance of bilingual word alignment for a few language pairs across different word embeddings and displays great robustness to the dissimilarity of language pairs or training corpora for two word embeddings. | null | null | 10.18653/v1/2022.findings-emnlp.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,524
inproceedings | ju-etal-2022-grape | Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question Answering | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.13/ | Ju, Mingxuan and Yu, Wenhao and Zhao, Tong and Zhang, Chuxu and Ye, Yanfang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 169--181 | A common thread of open-domain question answering (QA) models employs a retriever-reader pipeline that first retrieves a handful of relevant passages from Wikipedia and then peruses the passages to produce an answer. However, even state-of-the-art readers fail to capture the complex relationships between entities appearing in questions and retrieved passages, leading to answers that contradict the facts. In light of this, we propose a novel knowledge graph enhanced passage reader, namely Grape, to improve the reader performance for open-domain QA. Specifically, for each pair of question and retrieved passage, we first construct a localized bipartite graph, attributed to entity embeddings extracted from the intermediate layer of the reader model. Then, a graph neural network learns relational knowledge while fusing graph and contextual representations into the hidden states of the reader model. Experiments on three open-domain QA benchmarks show Grape can improve the state-of-the-art performance by up to 2.2 exact match score with a negligible overhead increase, with the same retriever and retrieved passages. Our code is publicly available at https://github.com/jumxglhf/GRAPE. | null | null | 10.18653/v1/2022.findings-emnlp.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,525 |
inproceedings | zhao-etal-2022-narrasum | {N}arra{S}um: A Large-Scale Dataset for Abstractive Narrative Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.14/ | Zhao, Chao and Brahman, Faeze and Song, Kaiqiang and Yao, Wenlin and Yu, Dian and Chaturvedi, Snigdha | Findings of the Association for Computational Linguistics: EMNLP 2022 | 182--197 | Narrative summarization aims to produce a distilled version of a narrative to describe its most salient events and characters. Writing a summary for a narrative is challenging as it requires an understanding of event causality and character behaviors. To encourage research in this direction, we propose NarraSum, a large-scale narrative summarization dataset. It contains 122K narratives, which are collected from the synopses of movies and TV episodes with diverse genres, and their corresponding abstractive summaries. Experiments show that there is a large performance gap between humans and the state-of-the-art summarization models on NarraSum. We hope that this dataset will promote future research in summarization, as well as broader studies of natural language understanding and generation. The dataset is available at https://github.com/zhaochaocs/narrasum. | null | null | 10.18653/v1/2022.findings-emnlp.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,526 |
inproceedings | vamvas-sennrich-2022-nmtscore | {NMTS}core: A Multilingual Analysis of Translation-based Text Similarity Measures | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.15/ | Vamvas, Jannis and Sennrich, Rico | Findings of the Association for Computational Linguistics: EMNLP 2022 | 198--213 | Being able to rank the similarity of short text segments is an interesting bonus feature of neural machine translation. Translation-based similarity measures include direct and pivot translation probability, as well as translation cross-likelihood, which has not been studied so far. We analyze these measures in the common framework of multilingual NMT, releasing the NMTScore library. Compared to baselines such as sentence embeddings, translation-based measures prove competitive in paraphrase identification and are more robust against adversarial or multilingual input, especially if proper normalization is applied. When used for reference-based evaluation of data-to-text generation in 2 tasks and 17 languages, translation-based measures show a relatively high correlation to human judgments. | null | null | 10.18653/v1/2022.findings-emnlp.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,527 |
inproceedings | moore-2022-language | Language Models Understand Us, Poorly | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.16/ | Moore, Jared | Findings of the Association for Computational Linguistics: EMNLP 2022 | 214--222 | Some claim language models understand us. Others won`t hear it. To clarify, I investigate three views of human language understanding: as-mapping, as-reliability and as-representation. I argue that while behavioral reliability is necessary for understanding, internal representations are sufficient; they climb the right hill. I review state-of-the-art language and multi-modal models: they are pragmatically challenged by under-specification of form. I question the Scaling Paradigm: limits on resources may prohibit scaled-up models from approaching understanding. Last, I describe how as-representation advances a science of understanding. We need work which probes model internals, adds more of human language, and measures what models can learn. | null | null | 10.18653/v1/2022.findings-emnlp.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,528 |
inproceedings | hu-etal-2022-dialogue | Dialogue Meaning Representation for Task-Oriented Dialogue Systems | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.17/ | Hu, Xiangkun and Dai, Junqi and Yan, Hang and Zhang, Yi and Guo, Qipeng and Qiu, Xipeng and Zhang, Zheng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 223--237 | Dialogue meaning representation formulates natural language utterance semantics in their conversational context in an explicit and machine-readable form. Previous work typically follows the intent-slot framework, which is easy for annotation yet limited in scalability for complex linguistic expressions. A line of works alleviates the representation issue by introducing hierarchical structures but struggles to express complex compositional semantics, such as negation and coreference. We propose Dialogue Meaning Representation (DMR), a pliable and easily extendable representation for task-oriented dialogue. Our representation contains a set of nodes and edges to represent rich compositional semantics. Moreover, we propose an inheritance hierarchy mechanism focusing on domain extensibility. Additionally, we annotated DMR-FastFood, a multi-turn dialogue dataset with more than 70k utterances, with DMR. We propose two evaluation tasks to evaluate different dialogue models and a novel coreference resolution model GNNCoref for the graph-based coreference resolution task. Experiments show that DMR can be parsed well with pre-trained Seq2Seq models, and GNNCoref outperforms the baseline models by a large margin. The dataset and code are available at https://github.com/amazon-research/dialogue-meaning-representation | null | null | 10.18653/v1/2022.findings-emnlp.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,529
inproceedings | li-etal-2022-learning-dictionary | Learning from the Dictionary: Heterogeneous Knowledge Guided Fine-tuning for {C}hinese Spell Checking | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.18/ | Li, Yinghui and Ma, Shirong and Zhou, Qingyu and Li, Zhongli and Yangning, Li and Huang, Shulin and Liu, Ruiyang and Li, Chao and Cao, Yunbo and Zheng, Haitao | Findings of the Association for Computational Linguistics: EMNLP 2022 | 238--249 | Chinese Spell Checking (CSC) aims to detect and correct Chinese spelling errors. Recent studies start from the pretrained knowledge of language models and take multimodal information into CSC models to improve the performance. However, they overlook the rich knowledge in the dictionary, the reference book where one can learn how one character should be pronounced, written, and used. In this paper, we propose the LEAD framework, which renders the CSC model to learn heterogeneous knowledge from the dictionary in terms of phonetics, vision, and meaning. LEAD first constructs positive and negative samples according to the knowledge of character phonetics, glyphs, and definitions in the dictionary. Then a unified contrastive learning-based training scheme is employed to refine the representations of the CSC models. Extensive experiments and detailed analyses on the SIGHAN benchmark datasets demonstrate the effectiveness of our proposed methods. | null | null | 10.18653/v1/2022.findings-emnlp.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,530
inproceedings | chen-etal-2022-salient | Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.19/ | Chen, Xilun and Lakhotia, Kushal and Oguz, Barlas and Gupta, Anchit and Lewis, Patrick and Peshterliev, Stan and Mehdad, Yashar and Gupta, Sonal and Yih, Wen-tau | Findings of the Association for Computational Linguistics: EMNLP 2022 | 250--262 | Despite their recent popularity and well-known advantages, dense retrievers still lag behind sparse methods such as BM25 in their ability to reliably match salient phrases and rare entities in the query and to generalize to out-of-domain data. It has been argued that this is an inherent limitation of dense models. We rebut this claim by introducing the Salient Phrase Aware Retriever (SPAR), a dense retriever with the lexical matching capacity of a sparse model. We show that a dense Lexical Model {\ensuremath{\Lambda}} can be trained to imitate a sparse one, and SPAR is built by augmenting a standard dense retriever with {\ensuremath{\Lambda}}. Empirically, SPAR shows superior performance on a range of tasks including five question answering datasets, MS MARCO passage retrieval, as well as the EntityQuestions and BEIR benchmarks for out-of-domain evaluation, exceeding the performance of state-of-the-art dense and sparse retrievers. The code and models of SPAR are available at: https://github.com/facebookresearch/dpr-scale/tree/main/spar | null | null | 10.18653/v1/2022.findings-emnlp.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,531 |
inproceedings | wang-etal-2022-smartave | {SMARTAVE}: Structured Multimodal Transformer for Product Attribute Value Extraction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.20/ | Wang, Qifan and Yang, Li and Wang, Jingang and Krishnan, Jitin and Dai, Bo and Wang, Sinong and Xu, Zenglin and Khabsa, Madian and Ma, Hao | Findings of the Association for Computational Linguistics: EMNLP 2022 | 263--276 | Automatic product attribute value extraction refers to the task of identifying values of an attribute from the product information. Product attributes are essential in improving online shopping experience for customers. Most existing methods focus on extracting attribute values from product title and description. However, in many real-world applications, a product is usually represented by multiple modalities beyond title and description, such as product specifications, text and visual information from the product image, etc. In this paper, we propose SMARTAVE, a Structured Multimodal trAnsformeR for producT Attribute Value Extraction, which jointly encodes the structured product information from multiple modalities. Specifically, in SMARTAVE encoder, we introduce hyper-tokens to represent the modality-level information, and local-tokens to represent the original text and visual inputs. Structured attention patterns are designed among the hyper-tokens and local-tokens for learning effective product representation. The attribute values are then extracted based on the learned embeddings. We conduct extensive experiments on two multimodal product datasets. Experimental results demonstrate the superior performance of the proposed approach over several state-of-the-art methods. Ablation studies validate the effectiveness of the structured attentions in modeling the multimodal product information. | null | null | 10.18653/v1/2022.findings-emnlp.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,532
inproceedings | zan-etal-2022-language | When Language Model Meets Private Library | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.21/ | Zan, Daoguang and Chen, Bei and Lin, Zeqi and Guan, Bei and Yongji, Wang and Lou, Jian-Guang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 277--288 | With the rapid development of pre-training techniques, a number of language models have been pre-trained on large-scale code corpora and perform well in code generation. In this paper, we investigate how to equip pre-trained language models with the ability of code generation for private libraries. In practice, it is common for programmers to write code using private libraries. However, this is a challenge for language models since they have never seen private APIs during training. Motivated by the fact that private libraries usually come with elaborate API documentation, we propose a novel framework with two modules: the APIRetriever finds useful APIs, and then the APICoder generates code using these APIs. For APIRetriever, we present a dense retrieval system and also design a friendly interaction to involve users. For APICoder, we can directly use off-the-shelf language models, or continually pre-train the base model on a code corpus containing API information. Both modules are trained with data from public libraries and can be generalized to private ones. Furthermore, we craft three benchmarks for private libraries, named TorchDataEval, MonkeyEval, and BeatNumEval. Experimental results demonstrate the impressive performance of our framework. | null | null | 10.18653/v1/2022.findings-emnlp.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,533
inproceedings | li-etal-2022-cross-domain | Cross-Domain Sentiment Classification using Semantic Representation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.22/ | Li, Shichen and Wang, Zhongqing and Jiang, Xiaotong and Zhou, Guodong | Findings of the Association for Computational Linguistics: EMNLP 2022 | 289--299 | Previous studies on cross-domain sentiment classification depend on the pivot features or utilize the target data for representation learning, which ignore the semantic relevance between different domains. To this end, we exploit Abstract Meaning Representation (AMR) to help with cross-domain sentiment classification. Compared with the textual input, AMR reduces data sparsity and explicitly provides core semantic knowledge and correlations between different domains. In particular, we develop an algorithm to construct a sentiment-driven semantic graph from sentence-level AMRs. We further design two strategies to linearize the semantic graph and propose a text-graph interaction model to fuse the text and semantic graph representations for cross-domain sentiment classification. Empirical studies show the effectiveness of our proposed model over several strong baselines. The results also indicate the importance of the proposed sentiment-driven semantic graph for cross-domain sentiment classification. | null | null | 10.18653/v1/2022.findings-emnlp.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,534 |
inproceedings | dycke-etal-2022-yes | Yes-Yes-Yes: Proactive Data Collection for {ACL} Rolling Review and Beyond | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.23/ | Dycke, Nils and Kuznetsov, Ilia and Gurevych, Iryna | Findings of the Association for Computational Linguistics: EMNLP 2022 | 300--318 | The shift towards publicly available text sources has enabled language processing at unprecedented scale, yet leaves under-serviced the domains where public and openly licensed data is scarce. Proactively collecting text data for research is a viable strategy to address this scarcity, but lacks systematic methodology taking into account the many ethical, legal and confidentiality-related aspects of data collection. Our work presents a case study on proactive data collection in peer review {--} a challenging and under-resourced NLP domain. We outline ethical and legal desiderata for proactive data collection and introduce {\textquotedblleft}Yes-Yes-Yes{\textquotedblright}, the first donation-based peer reviewing data collection workflow that meets these requirements. We report on the implementation of Yes-Yes-Yes at ACL Rolling Review and empirically study the implications of proactive data collection for the dataset size and the biases induced by the donation behavior on the peer reviewing platform. | null | null | 10.18653/v1/2022.findings-emnlp.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,535 |
inproceedings | lei-etal-2022-assistsr | {A}ssist{SR}: Task-oriented Video Segment Retrieval for Personal {AI} Assistant | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.24/ | Lei, Weixian and Gao, Difei and Wang, Yuxuan and Mao, Dongxing and Liang, Zihan and Ran, Lingmin and Shou, Mike Zheng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 319--338 | It is still a pipe dream that personal AI assistants on the phone and AR glasses can assist our daily life in addressing our questions like {\textquotedblleft}how to adjust the date for this watch?{\textquotedblright} and {\textquotedblleft}how to set its heating duration? (while pointing at an oven){\textquotedblright}. The queries used in conventional tasks (i.e. Video Question Answering, Video Retrieval, Moment Localization) are often factoid and based on pure text. In contrast, we present a new task called Task-oriented Question-driven Video Segment Retrieval (TQVSR). Each of our questions is an image-box-text query that focuses on affordance of items in our daily life and expects relevant answer segments to be retrieved from a corpus of instructional video-transcript segments. To support the study of this TQVSR task, we construct a new dataset called AssistSR. We design novel guidelines to create high-quality samples. This dataset contains 3.2k multimodal questions on 1.6k video segments from instructional videos on diverse daily-used items. To address TQVSR, we develop a simple yet effective model called Dual Multimodal Encoders (DME) that significantly outperforms several baseline methods while still having large room for improvement in the future. Moreover, we present detailed ablation analyses. Code and data are available at https://github.com/StanLei52/TQVSR. | null | null | 10.18653/v1/2022.findings-emnlp.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,536 |
inproceedings | zhang-etal-2022-dim | Dim-Krum: Backdoor-Resistant Federated Learning for {NLP} with Dimension-wise Krum-Based Aggregation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.25/ | Zhang, Zhiyuan and Su, Qi and Sun, Xu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 339--354 | Despite the potential of federated learning, it is known to be vulnerable to backdoor attacks. Many robust federated aggregation methods are proposed to reduce the potential backdoor risk. However, they are mainly validated in the CV field. In this paper, we find that NLP backdoors are harder to defend against than CV ones, and we provide a theoretical analysis that the malicious update detection error probabilities are determined by the relative backdoor strengths. NLP attacks tend to have small relative backdoor strengths, which may result in the failure of robust federated aggregation methods for NLP attacks. Inspired by the theoretical results, we can choose some dimensions with higher backdoor strengths to settle this issue. We propose a novel federated aggregation algorithm, Dim-Krum, for NLP tasks, and experimental results validate its effectiveness. | null | null | 10.18653/v1/2022.findings-emnlp.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,537
inproceedings | zhang-etal-2022-fine-mixing | Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.26/ | Zhang, Zhiyuan and Lyu, Lingjuan and Ma, Xingjun and Wang, Chenguang and Sun, Xu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 355--372 | Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In Natural Language Processing (NLP), DNNs are often backdoored during the fine-tuning process of a large-scale Pre-trained Language Model (PLM) with poisoned samples. Although the clean weights of PLMs are readily available, existing methods have ignored this information in defending NLP models against backdoor attacks. In this work, we take the first step to exploit the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models. Specifically, we leverage the clean pre-trained weights via two complementary techniques: (1) a two-step Fine-mixing technique, which first mixes the backdoored weights (fine-tuned on poisoned data) with the pre-trained weights, then fine-tunes the mixed weights on a small subset of clean data; (2) an Embedding Purification (E-PUR) technique, which mitigates potential backdoors existing in the word embeddings. We compare Fine-mixing with typical backdoor mitigation methods on three single-sentence sentiment classification tasks and two sentence-pair classification tasks and show that it outperforms the baselines by a considerable margin in all scenarios. We also show that our E-PUR method can benefit existing mitigation methods. Our work establishes a simple but strong baseline defense for secure fine-tuned NLP models against backdoor attacks. | null | null | 10.18653/v1/2022.findings-emnlp.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,538 |
inproceedings | shuster-etal-2022-language | Language Models that Seek for Knowledge: Modular Search {\&} Generation for Dialogue and Prompt Completion | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.27/ | Shuster, Kurt and Komeili, Mojtaba and Adolphs, Leonard and Roller, Stephen and Szlam, Arthur and Weston, Jason | Findings of the Association for Computational Linguistics: EMNLP 2022 | 373--393 | Language models (LMs) have recently been shown to generate more factual responses by employing modularity (Zhou et al., 2022) in combination with retrieval (Adolphs et al., 2021). We extend the recent approach of Adolphs et al. (2021) to include internet search as a module. Our SeeKeR (Search engine-{\ensuremath{>}}Knowledge-{\ensuremath{>}}Response) method thus applies a single LM to three modular tasks in succession: search, generating knowledge, and generating a final response. We show that, when using SeeKeR as a dialogue model, it outperforms the state-of-the-art model BlenderBot 2 (Chen et al., 2021) on open-domain knowledge-grounded conversations for the same number of parameters, in terms of consistency, knowledge and per-turn engagingness. SeeKeR applied to topical prompt completions as a standard language model outperforms GPT2 (Radford et al., 2019) and GPT3 (Brown et al., 2020) in terms of factuality and topicality, despite GPT3 being a vastly larger model. Our code and models are made publicly available. | null | null | 10.18653/v1/2022.findings-emnlp.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,539 |
inproceedings | schuster-etal-2022-stretching | Stretching Sentence-pair {NLI} Models to Reason over Long Documents and Clusters | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.28/ | Schuster, Tal and Chen, Sihao and Buthpitiya, Senaka and Fabrikant, Alex and Metzler, Donald | Findings of the Association for Computational Linguistics: EMNLP 2022 | 394--412 | Natural Language Inference (NLI) has been extensively studied by the NLP community as a framework for estimating the semantic relation between sentence pairs. While early work identified certain biases in NLI models, recent advancements in modeling and datasets demonstrated promising performance. In this work, we further explore the direct zero-shot applicability of NLI models to real applications, beyond the sentence-pair setting they were trained on. First, we analyze the robustness of these models to longer and out-of-domain inputs. Then, we develop new aggregation methods to allow operating over full documents, reaching state-of-the-art performance on the ContractNLI dataset. Interestingly, we find NLI scores to provide strong retrieval signals, leading to more relevant evidence extractions compared to common similarity-based methods. Finally, we go further and investigate whole document clusters to identify both discrepancies and consensus among sources. In a test case, we find real inconsistencies between Wikipedia pages in different languages about the same topic. | null | null | 10.18653/v1/2022.findings-emnlp.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,540
inproceedings | xu-etal-2022-towards-realistic | Towards Realistic Low-resource Relation Extraction: A Benchmark with Empirical Baseline Study | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.29/ | Xu, Xin and Chen, Xiang and Zhang, Ningyu and Xie, Xin and Chen, Xi and Chen, Huajun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 413--427 | This paper presents an empirical study to build relation extraction systems in low-resource settings. Based upon recent pre-trained language models, we comprehensively investigate three schemes to evaluate the performance in low-resource settings: (i) different types of prompt-based methods with few-shot labeled data; (ii) diverse balancing methods to address the long-tailed distribution issue; (iii) data augmentation technologies and self-training to generate more labeled in-domain data. We create a benchmark with 8 relation extraction (RE) datasets covering different languages, domains and contexts and perform extensive comparisons over the proposed schemes with combinations. Our experiments illustrate: (i) Though prompt-based tuning is beneficial in low-resource RE, there is still much potential for improvement, especially in extracting relations from cross-sentence contexts with multiple relational triples; (ii) Balancing methods are not always helpful for RE with long-tailed distribution; (iii) Data augmentation complements existing baselines and can bring much performance gain, while self-training may not consistently achieve advancement to low-resource RE. Code and datasets are in https://github.com/zjunlp/LREBench. | null | null | 10.18653/v1/2022.findings-emnlp.29 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,541 |
inproceedings | zhang-etal-2022-clle | {CLLE}: A Benchmark for Continual Language Learning Evaluation in Multilingual Machine Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.30/ | Zhang, Han and Zhang, Sheng and Xiang, Yang and Liang, Bin and Su, Jinsong and Miao, Zhongjian and Wang, Hui and Xu, Ruifeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 428--443 | Continual Language Learning (CLL) in multilingual translation is inevitable when new languages are required to be translated. Due to the lack of unified and generalized benchmarks, the evaluation of existing methods is greatly influenced by experimental design which usually has a big gap from the industrial demands. In this work, we propose the first Continual Language Learning Evaluation benchmark CLLE in multilingual translation. CLLE consists of a Chinese-centric corpus {---} CN-25 and two CLL tasks {---} the close-distance language continual learning task and the language family continual learning task designed for real and disparate demands. Different from existing translation benchmarks, CLLE considers several restrictions for CLL, including domain distribution alignment, content overlap, language diversity, and the balance of corpus. Furthermore, we propose a novel framework COMETA based on Constrained Optimization and META-learning to alleviate catastrophic forgetting and dependency on history training data by using a meta-model to retain the important parameters for old languages. Our experiments prove that CLLE is a challenging CLL benchmark and that our proposed method is effective when compared with other strong baselines. Since the corpus construction, task design, and evaluation method are independent of the centric language, we also construct and release the English-centric corpus EN-25 to facilitate academic research. | null | null | 10.18653/v1/2022.findings-emnlp.30 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,542
inproceedings | ren-etal-2022-lexicon | Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.31/ | Ren, Houxing and Shou, Linjun and Pei, Jian and Wu, Ning and Gong, Ming and Jiang, Daxin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 444--459 | Recent multilingual pre-trained models have shown better performance in various multilingual tasks. However, these models perform poorly on multilingual retrieval tasks due to lacking multilingual training data. In this paper, we propose to mine and generate self-supervised training data based on a large-scale unlabeled corpus. We carefully design a mining method which combines the sparse and dense models to mine the relevance of unlabeled queries and passages. And we introduce a query generator to generate more queries in target languages for unlabeled passages. Through extensive experiments on Mr. TYDI dataset and an industrial dataset from a commercial search engine, we demonstrate that our method performs better than baselines based on various pre-trained multilingual models. Our method even achieves on-par performance with the supervised method on the latter dataset. | null | null | 10.18653/v1/2022.findings-emnlp.31 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,543 |
inproceedings | liu-etal-2022-improve | Improve Interpretability of Neural Networks via Sparse Contrastive Coding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.32/ | Liu, Junhong and Lin, Yijie and Jiang, Liang and Liu, Jia and Wen, Zujie and Peng, Xi | Findings of the Association for Computational Linguistics: EMNLP 2022 | 460--470 | Although explainable artificial intelligence (XAI) has achieved remarkable developments in recent years, few efforts have been devoted to the following problems, namely, i) how to develop an explainable method that could explain the black-box in a model-agnostic way? and ii) how to improve the performance and interpretability of the black-box using such explanations instead of pre-collected important attributions? To explore the potential solution, we propose a model-agnostic explanation method termed as Sparse Contrastive Coding (SCC) and verify its effectiveness in text classification and natural language inference. In brief, SCC explains the feature attributions which characterize the importance of words based on the hidden states of each layer of the model. With such word-level explainability, SCC adaptively divides the input sentences into foregrounds and backgrounds in terms of task relevance. Through maximizing the similarity between the foregrounds and input sentences while minimizing the similarity between the backgrounds and input sentences, SCC employs a supervised contrastive learning loss to boost the interpretability and performance of the model. Extensive experiments show the superiority of our method over five state-of-the-art methods in terms of interpretability and classification measurements. The code is available at https://pengxi.me. | null | null | 10.18653/v1/2022.findings-emnlp.32 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,544
inproceedings | shi-etal-2022-lemon | {LEMON}: Language-Based Environment Manipulation via Execution-Guided Pre-training | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.33/ | Shi, Qi and Liu, Qian and Chen, Bei and Zhang, Yu and Liu, Ting and Lou, Jian-Guang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 471--485 | Language-based environment manipulation requires agents to manipulate the environment following natural language instructions, which is challenging due to the huge space of the environments. To address this challenge, various approaches have been proposed in recent work. Although these approaches work well for their intended environments, they are difficult to generalize across environments. In this work, we propose LEMON, a general framework for language-based environment manipulation tasks. Specifically, we first specify a general approach for language-based environment manipulation tasks, which can deal with various environments using the same generative language model. Then we propose an execution-guided pre-training strategy to inject prior knowledge of environments to the language model with a pure synthetic pre-training corpus. Experimental results on tasks including Alchemy, Scene, Tangrams, ProPara and Recipes demonstrate the effectiveness of LEMON: it achieves new state-of-the-art results on four of the tasks, and the execution-guided pre-training strategy brings remarkable improvements on all experimental tasks. | null | null | 10.18653/v1/2022.findings-emnlp.33 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,545
inproceedings | yang-etal-2022-crop | {CROP}: Zero-shot Cross-lingual Named Entity Recognition with Multilingual Labeled Sequence Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.34/ | Yang, Jian and Huang, Shaohan and Ma, Shuming and Yin, Yuwei and Dong, Li and Zhang, Dongdong and Guo, Hongcheng and Li, Zhoujun and Wei, Furu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 486--496 | Named entity recognition (NER) suffers from the scarcity of annotated training data, especially for low-resource languages without labeled data. Cross-lingual NER has been proposed to alleviate this issue by transferring knowledge from high-resource languages to low-resource languages via aligned cross-lingual representations or machine translation results. However, the performance of cross-lingual NER methods is severely affected by the unsatisfactory quality of translation or label projection. To address these problems, we propose a Cross-lingual Entity Projection framework (CROP) to enable zero-shot cross-lingual NER with the help of a multilingual labeled sequence translation model. Specifically, the target sequence is first translated into the source language and then tagged by a source NER model. We further adopt a labeled sequence translation model to project the tagged sequence back to the target language and label the target raw sentence. Ultimately, the whole pipeline is integrated into an end-to-end model by the way of self-training. Experimental results on two benchmarks demonstrate that our method substantially outperforms the previous strong baseline by a large margin of +3 to 7 F1 scores and achieves state-of-the-art performance. | null | null | 10.18653/v1/2022.findings-emnlp.34 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,546
inproceedings | kirk-etal-2022-handling | Handling and Presenting Harmful Text in {NLP} Research | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.35/ | Kirk, Hannah and Birhane, Abeba and Vidgen, Bertie and Derczynski, Leon | Findings of the Association for Computational Linguistics: EMNLP 2022 | 497--510 | Text data can pose a risk of harm. However, the risks are not fully understood, and how to handle, present, and discuss harmful text in a safe way remains an unresolved issue in the NLP community. We provide an analytical framework categorising harms on three axes: (1) the harm type (e.g., misinformation, hate speech or racial stereotypes); (2) whether a harm is sought as a feature of the research design if explicitly studying harmful content (e.g., training a hate speech classifier), versus unsought if harmful content is encountered when working on unrelated problems (e.g., language generation or part-of-speech tagging); and (3) who it affects, from people (mis)represented in the data to those handling the data and those publishing on the data. We provide advice for practitioners, with concrete steps for mitigating harm in research and in publication. To assist implementation we introduce HarmCheck {--} a documentation standard for handling and presenting harmful text in research. | null | null | 10.18653/v1/2022.findings-emnlp.35 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,547 |
inproceedings | lin-hu-2022-multimodal | Multimodal Contrastive Learning via Uni-Modal Coding and Cross-Modal Prediction for Multimodal Sentiment Analysis | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.36/ | Lin, Ronghao and Hu, Haifeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 511--523 | Multimodal representation learning is a challenging task in which previous work mostly focuses on either uni-modality pre-training or cross-modality fusion. In fact, we regard modeling multimodal representation as building a skyscraper, where laying a stable foundation and designing the main structure are equally essential. The former is like encoding robust uni-modal representation while the latter is like integrating interactive information among different modalities, both of which are critical to learning an effective multimodal representation. Recently, contrastive learning has been successfully applied in representation learning, which can be utilized as the pillar of the skyscraper and benefit the model to extract the most important features contained in the multimodal data. In this paper, we propose a novel framework named MultiModal Contrastive Learning (MMCL) for multimodal representation to capture intra- and inter-modality dynamics simultaneously. Specifically, we devise uni-modal contrastive coding with an efficient uni-modal feature augmentation strategy to filter inherent noise contained in acoustic and visual modality and acquire more robust uni-modality representations. Besides, a pseudo siamese network is presented to predict representation across different modalities, which successfully captures cross-modal dynamics. Moreover, we design two contrastive learning tasks, instance- and sentiment-based contrastive learning, to promote the process of prediction and learn more interactive information related to sentiment. Extensive experiments conducted on two public datasets demonstrate that our method surpasses the state-of-the-art methods. | null | null | 10.18653/v1/2022.findings-emnlp.36 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,548
inproceedings | wang-etal-2022-towards-unified | Towards Unified Prompt Tuning for Few-shot Text Classification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.37/ | Wang, Jianing and Wang, Chengyu and Luo, Fuli and Tan, Chuanqi and Qiu, Minghui and Yang, Fei and Shi, Qiuhui and Huang, Songfang and Gao, Ming | Findings of the Association for Computational Linguistics: EMNLP 2022 | 524--536 | Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts. Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits the few-shot learning performance on downstream tasks. It would be desirable if the models can acquire some prompting knowledge before adapting to specific NLP tasks. We present the Unified Prompt Tuning (UPT) framework, leading to better few-shot text classification for BERT-style models by explicitly capturing prompting semantics from non-target NLP datasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for joint prompt learning across different NLP tasks, forcing PLMs to capture task-invariant prompting knowledge. We further design a self-supervised task named Knowledge-enhanced Selective Masked Language Modeling to improve the PLM`s generalization abilities for accurate adaptation to previously unseen tasks. After multi-task learning across multiple tasks, the PLM can be better prompt-tuned towards any dissimilar target tasks in low-resourced settings. Experiments over a variety of NLP tasks show that UPT consistently outperforms state-of-the-arts for prompt-based fine-tuning. | null | null | 10.18653/v1/2022.findings-emnlp.37 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,549
inproceedings | lampinen-etal-2022-language | Can language models learn from explanations in context? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.38/ | Lampinen, Andrew and Dasgupta, Ishita and Chan, Stephanie and Mathewson, Kory and Tessler, Mh and Creswell, Antonia and McClelland, James and Wang, Jane and Hill, Felix | Findings of the Association for Computational Linguistics: EMNLP 2022 | 537--563 | Language Models (LMs) can perform new tasks by adapting to a few in-context examples. For humans, explanations that connect examples to task principles can improve learning. We therefore investigate whether explanations of few-shot examples can help LMs. We annotate questions from 40 challenging tasks with answer explanations, and various matched control explanations. We evaluate how different types of explanations, instructions, and controls affect zero- and few-shot performance. We analyze these results using statistical multilevel modeling techniques that account for the nested dependencies among conditions, tasks, prompts, and models. We find that explanations can improve performance{---}even without tuning. Furthermore, explanations hand-tuned for performance on a small validation set offer substantially larger benefits, and building a prompt by selecting examples and explanations together substantially improves performance over selecting examples alone. Finally, even untuned explanations outperform carefully matched controls, suggesting that the benefits are due to the link between an example and its explanation, rather than lower-level features. However, only large models benefit. In summary, explanations can support the in-context learning of large LMs on challenging tasks. | null | null | 10.18653/v1/2022.findings-emnlp.38 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,550 |
inproceedings | liu-etal-2022-gnn | {GNN}-encoder: Learning a Dual-encoder Architecture via Graph Neural Networks for Dense Passage Retrieval | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.39/ | Liu, Jiduan and Liu, Jiahao and Yang, Yang and Wang, Jingang and Wu, Wei and Zhao, Dongyan and Yan, Rui | Findings of the Association for Computational Linguistics: EMNLP 2022 | 564--575 | Recently, retrieval models based on dense representations are dominant in passage retrieval tasks, due to their outstanding ability in terms of capturing semantics of input text compared to the traditional sparse vector space models. A common practice of dense retrieval models is to exploit a dual-encoder architecture to represent a query and a passage independently. Though efficient, such a structure loses interaction between the query-passage pair, resulting in inferior accuracy. To enhance the performance of dense retrieval models without loss of efficiency, we propose a GNN-encoder model in which query (passage) information is fused into passage (query) representations via graph neural networks that are constructed by queries and their top retrieved passages. By this means, we maintain a dual-encoder structure, and retain some interaction information between query-passage pairs in their representations, which enables us to achieve both efficiency and efficacy in passage retrieval. Evaluation results indicate that our method significantly outperforms the existing models on MSMARCO, Natural Questions and TriviaQA datasets, and achieves the new state-of-the-art on these datasets. | null | null | 10.18653/v1/2022.findings-emnlp.39 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,551 |
inproceedings | ma-etal-2022-linguistic | Linguistic Rules-Based Corpus Generation for Native {C}hinese Grammatical Error Correction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.40/ | Ma, Shirong and Li, Yinghui and Sun, Rongyi and Zhou, Qingyu and Huang, Shulin and Zhang, Ding and Yangning, Li and Liu, Ruiyang and Li, Zhongli and Cao, Yunbo and Zheng, Haitao and Shen, Ying | Findings of the Association for Computational Linguistics: EMNLP 2022 | 576--589 | Chinese Grammatical Error Correction (CGEC) is both a challenging NLP task and a common application in human daily life. Recently, many data-driven approaches are proposed for the development of CGEC research. However, there are two major limitations in the CGEC field: First, the lack of high-quality annotated training corpora prevents the performance of existing CGEC models from being significantly improved. Second, the grammatical errors in widely used test sets are not made by native Chinese speakers, resulting in a significant gap between the CGEC models and the real application. In this paper, we propose a linguistic rules-based approach to construct large-scale CGEC training corpora with automatically generated grammatical errors. Additionally, we present a challenging CGEC benchmark derived entirely from errors made by native Chinese speakers in real-world scenarios. Extensive experiments and detailed analyses not only demonstrate that the training data constructed by our method effectively improves the performance of CGEC models, but also reflect that our benchmark is an excellent resource for further development of the CGEC field. | null | null | 10.18653/v1/2022.findings-emnlp.40 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,552 |
inproceedings | zhu-etal-2022-rethinking | Rethinking the Video Sampling and Reasoning Strategies for Temporal Sentence Grounding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.41/ | Zhu, Jiahao and Liu, Daizong and Zhou, Pan and Di, Xing and Cheng, Yu and Yang, Song and Xu, Wenzheng and Xu, Zichuan and Wan, Yao and Sun, Lichao and Xiong, Zeyu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 590--600 | Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then interact them with the query for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to the reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets. | null | null | 10.18653/v1/2022.findings-emnlp.41 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,553
inproceedings | hua-zhang-2022-system | System 1 + System 2 = Better World: Neural-Symbolic Chain of Logic Reasoning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.42/ | Hua, Wenyue and Zhang, Yongfeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 601--612 | Logical reasoning is a challenge for many current NLP neural network models since it requires more than the ability of learning informative representations from data. Inspired by the Dual Process Theory in cognitive science {---} which proposes that human cognition process involves two stages: an intuitive, unconscious and fast process relying on perception called System 1, and a logical, conscious and slow process performing complex reasoning called System 2 {---} we leverage neural logic reasoning (System 2) on top of the representation learning models (System 1), which conducts explicit neural-based differentiable logical reasoning on top of the representations learned by the base neural models. Based on experiments on the commonsense knowledge graph completion task, we show that the two-system architecture always improves from its System 1 model alone. Experiments also show that both the rule-driven logical regularizer and the data-driven value regularizer are important and the performance improvement is marginal without the two regularizers, which indicates that learning from both logical prior and training data is important for reasoning tasks. | null | null | 10.18653/v1/2022.findings-emnlp.42 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,554
inproceedings | zhang-etal-2022-efficient-federated | Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.43/ | Zhang, Kai and Wang, Yu and Wang, Hongyi and Huang, Lifu and Yang, Carl and Chen, Xun and Sun, Lichao | Findings of the Association for Computational Linguistics: EMNLP 2022 | 613--621 | Federated learning (FL) can be essential in knowledge representation, reasoning, and data mining applications over multi-source knowledge graphs (KGs). A recent study FedE first proposes an FL framework that shares entity embeddings of KGs across all clients. However, entity embedding sharing from FedE would incur a severe privacy leakage. Specifically, the known entity embedding can be used to infer whether a specific relation between two entities exists in a private client. In this paper, we introduce a novel attack method that aims to recover the original data based on the embedding information, which is further used to evaluate the vulnerabilities of FedE. Furthermore, we propose a Federated learning paradigm with privacy-preserving Relation embedding aggregation (FedR) to tackle the privacy issue in FedE. Besides, relation embedding sharing can significantly reduce the communication cost due to its smaller size of queries. We conduct extensive experiments to evaluate FedR with five different KG embedding models and three datasets. Compared to FedE, FedR achieves similar utility and significant improvements regarding privacy-preserving effect and communication efficiency on the link prediction task. | null | null | 10.18653/v1/2022.findings-emnlp.43 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,555 |
inproceedings | yu-etal-2022-texthacker | {T}ext{H}acker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.44/ | Yu, Zhen and Wang, Xiaosen and Che, Wanxiang and He, Kun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 622--637 | Existing textual adversarial attacks usually utilize the gradient or prediction confidence to generate adversarial examples, making it hard to be deployed in real-world applications. To this end, we consider a rarely investigated but more rigorous setting, namely hard-label attack, in which the attacker can only access the prediction label. In particular, we find we can learn the importance of different words via the change on prediction label caused by word substitutions on the adversarial examples. Based on this observation, we propose a novel adversarial attack, termed Text Hard-label attacker (TextHacker). TextHacker randomly perturbs lots of words to craft an adversarial example. Then, TextHacker adopts a hybrid local search algorithm with the estimation of word importance from the attack history to minimize the adversarial perturbation. Extensive evaluations for text classification and textual entailment show that TextHacker significantly outperforms existing hard-label attacks regarding the attack performance as well as adversary quality. | null | null | 10.18653/v1/2022.findings-emnlp.44 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,556 |
inproceedings | yang-etal-2022-visualizing | Visualizing the Obvious: A Concreteness-based Ensemble Model for Noun Property Prediction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.45/ | Yang, Yue and Panagopoulou, Artemis and Apidianaki, Marianna and Yatskar, Mark and Callison-Burch, Chris | Findings of the Association for Computational Linguistics: EMNLP 2022 | 638--655 | Neural language models encode rich knowledge about entities and their relationships which can be extracted from their representations using probing. Common properties of nouns (e.g., red strawberries, small ant) are, however, more challenging to extract compared to other types of knowledge because they are rarely explicitly stated in texts. We hypothesize this to mainly be the case for perceptual properties which are obvious to the participants in the communication. We propose to extract these properties from images and use them in an ensemble model, in order to complement the information that is extracted from language models. We consider perceptual properties to be more concrete than abstract properties (e.g., interesting, flawless). We propose to use the adjectives' concreteness score as a lever to calibrate the contribution of each source (text vs. images). We evaluate our ensemble model in a ranking task where the actual properties of a noun need to be ranked higher than other non-relevant properties. Our results show that the proposed combination of text and images greatly improves noun property prediction compared to powerful text-based language models. | null | null | 10.18653/v1/2022.findings-emnlp.45 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,557
inproceedings | feng-ma-2022-better | It`s Better to Teach Fishing than Giving a Fish: An Auto-Augmented Structure-aware Generative Model for Metaphor Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.46/ | Feng, Huawen and Ma, Qianli | Findings of the Association for Computational Linguistics: EMNLP 2022 | 656--667 | Metaphor Detection aims to identify the metaphorical meaning of words in the sentence. Most existing work uses discriminative models, which directly use the contextual semantic information extracted by transformers for classification. Due to insufficient training data and a lack of corresponding paraphrases, recent methods focus on how to get external resources and utilize them to introduce more knowledge. Currently, contextual modeling and external data are two key issues in the field. In this paper, we propose **A**n **A**uto-**A**ugmented **S**tructure-aware generative model (**AAAS**) for metaphor detection, which transforms the classification task into a keywords-extraction task. Specifically, we propose the task of structure information extraction to allow the model to use the {\textquoteleft}structural language' to describe the whole sentence. Furthermore, without any other external resources, we design a simple but effective auto-augmented method to expand the limited datasets. Experimental results show that **AAAS** obtains competitive results compared with state-of-the-art methods. | null | null | 10.18653/v1/2022.findings-emnlp.46 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,558
inproceedings | chen-etal-2022-expose | Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.47/ | Chen, Sishuo and Yang, Wenkai and Zhang, Zhiyuan and Bi, Xiaohan and Sun, Xu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 668--683 | Natural language processing (NLP) models are known to be vulnerable to backdoor attacks, which poses a newly arisen threat to NLP models. Prior online backdoor defense methods for NLP models only focus on the anomalies at either the input or output level, still suffering from fragility to adaptive attacks and high computational cost. In this work, we take the first step to investigate the unconcealment of textual poisoned samples at the intermediate-feature level and propose a feature-based efficient online defense method. Through extensive experiments on existing attacking methods, we find that the poisoned samples are far away from clean samples in the intermediate feature space of a poisoned NLP model. Motivated by this observation, we devise a distance-based anomaly score (DAN) to distinguish poisoned samples from clean samples at the feature level. Experiments on sentiment analysis and offense detection tasks demonstrate the superiority of DAN, as it substantially surpasses existing online defense methods in terms of defending performance and enjoys lower inference costs. Moreover, we show that DAN is also resistant to adaptive attacks based on feature-level regularization. Our code is available at https://github.com/lancopku/DAN. | null | null | 10.18653/v1/2022.findings-emnlp.47 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,559 |
inproceedings | das-etal-2022-diving | Diving Deep into Modes of Fact Hallucinations in Dialogue Systems | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.48/ | Das, Souvik and Saha, Sougata and Srihari, Rohini | Findings of the Association for Computational Linguistics: EMNLP 2022 | 684--699 | Knowledge Graph (KG) grounded conversations often use large pre-trained models and usually suffer from fact hallucination. Frequently, entities with no references in knowledge sources and conversation history are introduced into responses, thus hindering the flow of the conversation{---}existing work attempts to overcome this issue by tweaking the training procedure or using a multi-step refining method. However, minimal effort is put into constructing an entity-level hallucination detection system, which would provide fine-grained signals that control fallacious content while generating responses. As a first step to address this issue, we dive deep to identify various modes of hallucination in KG-grounded chatbots through human feedback analysis. Secondly, we propose a series of perturbation strategies to create a synthetic dataset named FADE (FActual Dialogue Hallucination DEtection Dataset). Finally, we conduct comprehensive data analyses and create multiple baseline models for hallucination detection to compare against human-verified data and already established benchmarks. | null | null | 10.18653/v1/2022.findings-emnlp.48 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,560
inproceedings | wu-etal-2022-representation | Representation Learning for Resource-Constrained Keyphrase Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.49/ | Wu, Di and Ahmad, Wasi and Dev, Sunipa and Chang, Kai-Wei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 700--716 | State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with limited annotated data. To overcome this challenge, we design a data-oriented approach that first identifies salient information using retrieval-based corpus-level statistics, and then learns a task-specific intermediate representation based on a pre-trained language model using large-scale unlabeled documents. We introduce salient span recovery and salient span prediction as denoising training objectives that condense the intra-article and inter-article knowledge essential for keyphrase generation. Through experiments on multiple keyphrase generation benchmarks, we show the effectiveness of the proposed approach for facilitating low-resource keyphrase generation and zero-shot domain adaptation. Our method especially benefits the generation of absent keyphrases, approaching the performance of models trained with large training sets. | null | null | 10.18653/v1/2022.findings-emnlp.49 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,561 |
inproceedings | li-etal-2022-systematicity | Systematicity in {GPT}-3`s Interpretation of Novel {E}nglish Noun Compounds | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.50/ | Li, Siyan and Carlson, Riley and Potts, Christopher | Findings of the Association for Computational Linguistics: EMNLP 2022 | 717--728 | Levin et al. (2019) show experimentally that the interpretations of novel English noun compounds (e.g., stew skillet), while not fully compositional, are highly predictable based on whether the modifier and head refer to artifacts or natural kinds. Is the large language model GPT-3 governed by the same interpretive principles? To address this question, we first compare Levin et al.`s experimental data with GPT-3 generations, finding a high degree of similarity. However, this evidence is consistent with GPT-3 reasoning only about specific lexical items rather than the more abstract conceptual categories of Levin et al.`s theory. To probe more deeply, we construct prompts that require the relevant kind of conceptual reasoning. Here, we fail to find convincing evidence that GPT-3 is reasoning about more than just individual lexical items. These results highlight the importance of controlling for low-level distributional regularities when assessing whether a large language model latently encodes a deeper theory. | null | null | 10.18653/v1/2022.findings-emnlp.50 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,562 |
inproceedings | wang-etal-2022-care | {CARE}: Causality Reasoning for Empathetic Responses by Conditional Graph Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.51/ | Wang, Jiashuo and Cheng, Yi and Li, Wenjie | Findings of the Association for Computational Linguistics: EMNLP 2022 | 729--741 | Recent approaches to empathetic response generation incorporate emotion causalities to enhance comprehension of both the user`s feelings and experiences. However, these approaches suffer from two critical issues. First, they only consider causalities between the user`s emotion and the user`s experiences, and ignore those between the user`s experiences. Second, they neglect interdependence among causalities and reason them independently. To solve the above problems, we expect to reason all plausible causalities interdependently and simultaneously, given the user`s emotion, dialogue history, and future dialogue content. Then, we infuse these causalities into response generation for empathetic responses. Specifically, we design a new model, i.e., the Conditional Variational Graph Auto-Encoder (CVGAE), for the causality reasoning, and adopt a multi-source attention mechanism in the decoder for the causality infusion. We name the whole framework as CARE, abbreviated for CAusality Reasoning for Empathetic conversation. Experimental results indicate that our method achieves state-of-the-art performance. | null | null | 10.18653/v1/2022.findings-emnlp.51 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,563 |
inproceedings | zhao-etal-2022-transadv | {T}rans{A}dv: A Translation-based Adversarial Learning Framework for Zero-Resource Cross-Lingual Named Entity Recognition | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.52/ | Zhao, Yichun and Du, Jintao and Liu, Gongshen and Zhu, Huijia | Findings of the Association for Computational Linguistics: EMNLP 2022 | 742--749 | Zero-Resource Cross-Lingual Named Entity Recognition aims at training an NER model of the target language using only labeled source language data and unlabeled target language data. Existing methods are mainly divided into three categories: model transfer based, data transfer based and knowledge transfer based. Each method has its own disadvantages, and combining more than one of them often leads to better performance. However, the performance of data transfer based methods is often limited by inevitable noise in the translation process. To handle the problem, we propose a framework named TransAdv to mitigate lexical and syntactic errors of word-by-word translated data, better utilizing the data by multi-level adversarial learning and multi-model knowledge distillation. Extensive experiments are conducted over 6 target languages with English as the source language, and the results show that TransAdv achieves competitive performance to the state-of-the-art models. | null | null | 10.18653/v1/2022.findings-emnlp.52 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,564 |
inproceedings | duan-etal-2022-barle | {BARLE}: Background-Aware Representation Learning for Background Shift Out-of-Distribution Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.53/ | Duan, Hanyu and Yang, Yi and Abbasi, Ahmed and Tam, Kar Yan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 750--764 | Machine learning models often suffer from a performance drop when they are applied to out-of-distribution (OOD) samples, i.e., those drawn far away from the training data distribution. Existing OOD detection work mostly focuses on identifying semantic-shift OOD samples, e.g., instances from unseen new classes. However, background-shift OOD detection, which identifies samples with domain or style-change, represents a more practical yet challenging task. In this paper, we propose Background-Aware Representation Learning (BARLE) for background-shift OOD detection in NLP. Specifically, we generate semantics-preserving background-shifted pseudo OOD samples from pretrained masked language models. We then contrast the in-distribution (ID) samples with their pseudo OOD counterparts. Unlike prior semantic-shift OOD detection work that often leverages an external text corpus, BARLE only uses ID data, which is more flexible and cost-efficient. In experiments across several text classification tasks, we demonstrate that BARLE is capable of improving background-shift OOD detection performance while maintaining ID classification accuracy. We further investigate the properties of the generated pseudo OOD samples, uncovering the working mechanism of BARLE. | null | null | 10.18653/v1/2022.findings-emnlp.53 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,565 |
inproceedings | le-scao-etal-2022-language | What Language Model to Train if You Have One Million {GPU} Hours? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.54/ | Le Scao, Teven and Wang, Thomas and Hesslow, Daniel and Bekman, Stas and Bari, M Saiful and Biderman, Stella and Elsahar, Hady and Muennighoff, Niklas and Phang, Jason and Press, Ofir and Raffel, Colin and Sanh, Victor and Shen, Sheng and Sutawika, Lintang and Tae, Jaesung and Yong, Zheng Xin and Launay, Julien and Beltagy, Iz | Findings of the Association for Computational Linguistics: EMNLP 2022 | 765--782 | The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art 100B+ parameters models, large language models are increasingly expensive to accurately design and train. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale alone. In the process of building BLOOM -- the Big Science Large Open-science Open-access Multilingual language model -- our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pre-training corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience. | null | null | 10.18653/v1/2022.findings-emnlp.54 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,566
inproceedings | cho-etal-2022-enhancing | Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.55/ | Cho, Hyunsoo and Park, Choonghyun and Kang, Jaewook and Yoo, Kang Min and Kim, Taeuk and Lee, Sang-goo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 783--798 | Out-of-distribution (OOD) detection aims to discern outliers from the intended data distribution, which is crucial to maintaining high reliability and a good user experience. Most recent studies in OOD detection utilize the information from a single representation that resides in the penultimate layer to determine whether the input is anomalous or not. Although such a method is straightforward, the potential of diverse information in the intermediate layers is overlooked. In this paper, we propose a novel framework based on contrastive learning that encourages intermediate features to learn layer-specialized representations and assembles them implicitly into a single representation to absorb rich information in the pre-trained language model. Extensive experiments in various intent classification and OOD datasets demonstrate that our approach is significantly more effective than other works. | null | null | 10.18653/v1/2022.findings-emnlp.55 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,567