entry_type
stringclasses 4
values | citation_key
stringlengths 10
110
| title
stringlengths 6
276
⌀ | editor
stringclasses 723
values | month
stringclasses 69
values | year
stringdate 1963-01-01 00:00:00
2022-01-01 00:00:00
| address
stringclasses 202
values | publisher
stringclasses 41
values | url
stringlengths 34
62
| author
stringlengths 6
2.07k
⌀ | booktitle
stringclasses 861
values | pages
stringlengths 1
12
⌀ | abstract
stringlengths 302
2.4k
| journal
stringclasses 5
values | volume
stringclasses 24
values | doi
stringlengths 20
39
⌀ | n
stringclasses 3
values | wer
stringclasses 1
value | uas
null | language
stringclasses 3
values | isbn
stringclasses 34
values | recall
null | number
stringclasses 8
values | a
null | b
null | c
null | k
null | f1
stringclasses 4
values | r
stringclasses 2
values | mci
stringclasses 1
value | p
stringclasses 2
values | sd
stringclasses 1
value | female
stringclasses 0
values | m
stringclasses 0
values | food
stringclasses 1
value | f
stringclasses 1
value | note
stringclasses 20
values | __index_level_0__
int64 22k
106k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | kosyak-tyers-2022-predictive | Predictive Text for Agglutinative and Polysynthetic Languages | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.9/ | Kosyak, Sergey and Tyers, Francis | Proceedings of the first workshop on NLP applications to field linguistics | 77--85 | This paper presents a set of experiments in the area of morphological modelling and prediction. We test whether morphological segmentation can compete against statistical segmentation in the tasks of language modelling and predictive text entry for two under-resourced and indigenous languages, K`iche' and Chukchi. We use different segmentation methods {---} both statistical and morphological {---} to make datasets that are used to train models of different types: single-way segmented, which are trained using data from one segmenter; two-way segmented, which are trained using concatenated data from two segmenters; and finetuned, which are trained on two datasets from different segmenters. We compute word and character level perplexities and find that single-way segmented models trained on morphologically segmented data show the highest performance. Finally, we evaluate the language models on the task of predictive text entry using gold standard data and measure the average number of clicks per character and keystroke savings rate. We find that the models trained on morphologically segmented data show better scores, although with substantial room for improvement. At last, we propose the usage of morphological segmentation in order to improve the end-user experience while using predictive text and we plan on testing this assumption by doing end-user evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,070 |
inproceedings | ferguson-etal-2022-retrieval | Retrieval Data Augmentation Informed by Downstream Question Answering Performance | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.1/ | Ferguson, James and Hajishirzi, Hannaneh and Dasigi, Pradeep and Khot, Tushar | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 1--5 | Training retrieval models to fetch contexts for Question Answering (QA) over large corpora requires labeling relevant passages in those corpora. Since obtaining exhaustive manual annotations of all relevant passages is not feasible, prior work uses text overlap heuristics to find passages that are likely to contain the answer, but this is not feasible when the task requires deeper reasoning and answers are not extractable spans (e.g.: multi-hop, discrete reasoning). We address this issue by identifying relevant passages based on whether they are useful for a trained QA model to arrive at the correct answers, and develop a search process guided by the QA model`s loss. Our experiments show that this approach enables identifying relevant context for unseen data greater than 90{\%} of the time on the IIRC dataset and generalizes better to the end QA task than those trained on just the gold retrieval data on IIRC and QASC datasets. | null | null | 10.18653/v1/2022.fever-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,072 |
inproceedings | lin-fu-2022-heterogeneous | Heterogeneous-Graph Reasoning and Fine-Grained Aggregation for Fact Checking | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.2/ | Lin, Hongbin and Fu, Xianghua | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 6--15 | Fact checking is a challenging task that requires corresponding evidences to verify the property of a claim based on reasoning. Previous studies generally i) construct the graph by treating each evidence-claim pair as node which is a simple way that ignores to exploit their implicit interaction, or building a fully-connected graph among claim and evidences where the entailment relationship between claim and evidence would be considered equal to the semantic relationship among evidences; ii) aggregate evidences equally without considering their different stances towards the verification of fact. Towards the above issues, we propose a novel heterogeneous-graph reasoning and fine-grained aggregation model, with two following modules: 1) a heterogeneous graph attention network module to distinguish different types of relationships within the constructed graph; 2) fine-grained aggregation module which learns the implicit stance of evidences towards the prediction result in details. Extensive experiments on the benchmark dataset demonstrate that our proposed model achieves much better performance than state-of-the-art methods. | null | null | 10.18653/v1/2022.fever-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,073 |
inproceedings | huang-etal-2022-distilling | Distilling Salient Reviews with Zero Labels | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.3/ | Huang, Chieh-Yang and Li, Jinfeng and Bhutani, Nikita and Whedon, Alexander and Hruschka, Estevam and Suhara, Yoshi | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 16--28 | Many people read online reviews to learn about real-world entities of their interest. However, majority of reviews only describes general experiences and opinions of the customers, and may not reveal facts that are specific to the entity being reviewed. In this work, we focus on a novel task of mining from a review corpus sentences that are unique for each entity. We refer to this task as Salient Fact Extraction. Salient facts are extremely scarce due to their very nature. Consequently, collecting labeled examples for training supervised models is tedious and cost-prohibitive. To alleviate this scarcity problem, we develop an unsupervised method, ZL-Distiller, which leverages contextual language representations of the reviews and their distributional patterns to identify salient sentences about entities. Our experiments on multiple domains (hotels, products, and restaurants) show that ZL-Distiller achieves state-of-the-art performance and further boosts the performance of other supervised/unsupervised algorithms for the task. Furthermore, we show that salient sentences mined by ZL-Distiller provide unique and detailed information about entities, which benefit downstream NLP applications including question answering and summarization. | null | null | 10.18653/v1/2022.fever-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,074 |
inproceedings | kelk-etal-2022-automatic | Automatic Fake News Detection: Are current models {\textquotedblleft}fact-checking{\textquotedblright} or{\textquotedblleft}gut-checking{\textquotedblright}? | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.4/ | Kelk, Ian and Basseri, Benjamin and Lee, Wee and Qiu, Richard and Tanner, Chris | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 29--36 | Automatic fake news detection models are ostensibly based on logic, where the truth of a claim made in a headline can be determined by supporting or refuting evidence found in a resulting web query. These models are believed to be reasoning in some way; however, it has been shown that these same results, or better, can be achieved without considering the claim at all {--} only the evidence. This implies that other signals are contained within the examined evidence, and could be based on manipulable factors such as emotion, sentiment, or part-of-speech (POS) frequencies, which are vulnerable to adversarial inputs. We neutralize some of these signals through multiple forms of both neural and non-neural pre-processing and style transfer, and find that this flattening of extraneous indicators can induce the models to actually require both claims and evidence to perform well. We conclude with the construction of a model using emotion vectors built off a lexicon and passed through an {\textquotedblleft}emotional attention{\textquotedblright} mechanism to appropriately weight certain emotions. We provide quantifiable results that prove our hypothesis that manipulable features are being used for fact-checking. | null | null | 10.18653/v1/2022.fever-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,075 |
inproceedings | calvo-figueras-etal-2022-semantics | A Semantics-Aware Approach to Automated Claim Verification | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.5/ | Calvo Figueras, Blanca and Cuadros, Montse and Agerri, Rodrigo | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 37--48 | The influence of fake news in the perception of reality has become a mainstream topic in the last years due to the fast propagation of misleading information. In order to help in the fight against misinformation, automated solutions to fact-checking are being actively developed within the research community. In this context, the task of Automated Claim Verification is defined as assessing the truthfulness of a claim by finding evidence about its veracity. In this work we empirically demonstrate that enriching a BERT model with explicit semantic information such as Semantic Role Labelling helps to improve results in claim verification as proposed by the FEVER benchmark. Furthermore, we perform a number of explainability tests that suggest that the semantically-enriched model is better at handling complex cases, such as those including passive forms or multiple propositions. | null | null | 10.18653/v1/2022.fever-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,076 |
inproceedings | dougrez-lewis-etal-2022-phemeplus | {PHEMEP}lus: Enriching Social Media Rumour Verification with External Evidence | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.6/ | Dougrez-Lewis, John and Kochkina, Elena and Arana-Catania, Miguel and Liakata, Maria and He, Yulan | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 49--58 | Work on social media rumour verification utilises signals from posts, their propagation and users involved. Other lines of work target identifying and fact-checking claims based on information from Wikipedia, or trustworthy news articles without considering social media context. However works combining the information from social media with external evidence from the wider web are lacking. To facilitate research in this direction, we release a novel dataset, PHEMEPlus, an extension of the PHEME benchmark, which contains social media conversations as well as relevant external evidence for each rumour. We demonstrate the effectiveness of incorporating such evidence in improving rumour verification models. Additionally, as part of the evidence collection, we evaluate various ways of query formulation to identify the most effective method. | null | null | 10.18653/v1/2022.fever-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,077 |
inproceedings | minhas-etal-2022-xinfotabs | {XI}nfo{T}ab{S}: Evaluating Multilingual Tabular Natural Language Inference | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.7/ | Minhas, Bhavnick and Shankhdhar, Anant and Gupta, Vivek and Aggarwal, Divyanshu and Zhang, Shuo | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 59--77 | The ability to reason about tabular or semi-structured knowledge is a fundamental problem for today`s Natural Language Processing (NLP) systems. While significant progress has been achieved in the direction of tabular reasoning, these advances are limited to English due to the absence of multilingual benchmark datasets for semi-structured data. In this paper, we use machine translation methods to construct a multilingual tabular NLI dataset, namely XINFOTABS, which expands the English tabular NLI dataset of INFOTABS to ten diverse languages. We also present several baselines for multilingual tabular reasoning, e.g., machine translation-based methods and cross-lingual. We discover that the XINFOTABS evaluation suite is both practical and challenging. As a result, this dataset will contribute to increased linguistic inclusion in tabular reasoning research and applications. | null | null | 10.18653/v1/2022.fever-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,078 |
inproceedings | mori-etal-2022-neural | Neural Machine Translation for Fact-checking Temporal Claims | Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.fever-1.8/ | Mori, Marco and Papotti, Paolo and Bellomarini, Luigi and Giudice, Oliver | Proceedings of the Fifth Fact Extraction and VERification Workshop (FEVER) | 78--82 | Computational fact-checking aims at supporting the verification process of textual claims by exploiting trustworthy sources. However, there are large classes of complex claims that cannot be automatically verified, for instance those related to temporal reasoning. To this aim, in this work, we focus on the verification of economic claims against time series sources. Starting from given textual claims in natural language, we propose a neural machine translation approach to produce respective queries expressed in a recently proposed temporal fragment of the Datalog language. The adopted deep neural approach shows promising preliminary results for the translation of 10 categories of claims extracted from real use cases. | null | null | 10.18653/v1/2022.fever-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,079 |
inproceedings | lyu-etal-2022-mllabs | {MLL}abs-{LIG} at {T}empo{W}i{C} 2022: A Generative Approach for Examining Temporal Meaning Shift | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.1/ | Lyu, Chenyang and Zhou, Yongxin and Ji, Tianbo | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 1--6 | In this paper, we present our system for the EvoNLP 2022 shared task Temporal Meaning Shift (TempoWiC). Different from the typically used discriminative model, we propose a generative approach based on pre-trained generation models. The basic architecture of our system is a seq2seq model where the input sequence consists of two documents followed by a question asking whether the meaning of target word changed or not, the target output sequence is a declarative sentence describing the meaning of target word changed or not. The experimental results on TempoWiC test set show that our best system (with time information) obtained an accuracy and Marco F-1 score of 68.09{\%} and 62.59{\%} respectively, which ranked 12th among all submitted systems. The results have shown the plausibility of using generation model for WiC tasks, meanwhile also indicate there`s still room for further improvement. | null | null | 10.18653/v1/2022.evonlp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,081 |
inproceedings | chen-etal-2022-using | Using Deep Mixture-of-Experts to Detect Word Meaning Shift for {T}empo{W}i{C} | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.2/ | Chen, Ze and Wang, Kangxu and Cai, Zijian and Zheng, Jiewen and He, Jiarong and Gao, Max and Zhang, Jason | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 7--11 | This paper mainly describes the dma submission to the TempoWiC task, which achieves a macro-F1 score of 77.05{\%} and attains the first place in this task. We first explore the impact of different pre-trained language models. Then we adopt data cleaning, data augmentation, and adversarial training strategies to enhance the model generalization and robustness. For further improvement, we integrate POS information and word semantic representation using a Mixture-of-Experts (MoE) approach. The experimental results show that MoE can overcome the feature overuse issue and combine the context, POS, and word semantic features well. Additionally, we use a model ensemble method for the final prediction, which has been proven effective by many research works. | null | null | 10.18653/v1/2022.evonlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,082 |
inproceedings | pirhadi-etal-2022-using | Using Two Losses and Two Datasets Simultaneously to Improve {T}empo{W}i{C} Accuracy | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.3/ | Pirhadi, Mohammad Javad and Mirzaei, Motahhare and Eetemadi, Sauleh | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 12--15 | WSD (Word Sense Disambiguation) is the task of identifying which sense of a word is meant in a sentence or other segment of text. Researchers have worked on this task (e.g. Pustejovsky, 2002) for years but it`s still a challenging one even for SOTA (state-of-the-art) LMs (language models). The new dataset, TempoWiC introduced by Loureiro et al. (2022b) focuses on the fact that words change over time. Their best baseline achieves 70.33{\%} macro-F1. In this work, we use two different losses simultaneously. We also improve our model by using another similar dataset to generalize better. Our best configuration beats their best baseline by 4.23{\%}. | null | null | 10.18653/v1/2022.evonlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,083 |
inproceedings | paul-etal-2022-class | Class Incremental Learning for Intent Classification with Limited or No Old Data | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.4/ | Paul, Debjit and Sorokin, Daniil and Gaspers, Judith | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 16--25 | In this paper, we explore class-incremental learning for intent classification (IC) in a setting with limited old data available. IC is the task of mapping user utterances to their corresponding intents. Even though class-incremental learning without storing the old data yields high potential of reducing human and computational resources in industry NLP model releases, to the best of our knowledge, it hasn`t been studied for NLP classification tasks in the literature before. In this work, we compare several contemporary class-incremental learning methods, i.e., BERT warm start, L2, Elastic Weight Consolidation, RecAdam and Knowledge Distillation within two realistic class-incremental learning scenarios: one where only the previous model is assumed to be available, but no data corresponding to old classes, and one in which limited unlabeled data for old classes is assumed to be available. Our results indicate that among the investigated continual learning methods, Knowledge Distillation worked best for our class-incremental learning tasks, and adding limited unlabeled data helps the model in both adaptability and stability. | null | null | 10.18653/v1/2022.evonlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,084 |
inproceedings | goschenhofer-etal-2022-cc | {CC}-Top: Constrained Clustering for Dynamic Topic Discovery | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.5/ | Goschenhofer, Jann and Ragupathy, Pranav and Heumann, Christian and Bischl, Bernd and A{\ss}enmacher, Matthias | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 26--34 | Research on multi-class text classification of short texts mainly focuses on supervised (transfer) learning approaches, requiring a finite set of pre-defined classes which is constant over time. This work explores deep constrained clustering (CC) as an alternative to supervised learning approaches in a setting with a dynamically changing number of classes, a task we introduce as dynamic topic discovery (DTD).We do so by using pairwise similarity constraints instead of instance-level class labels which allow for a flexible number of classes while exhibiting a competitive performance compared to supervised approaches. First, we substantiate this through a series of experiments and show that CC algorithms exhibit a predictive performance similar to state-of-the-art supervised learning algorithms while requiring less annotation effort. Second, we demonstrate the overclustering capabilities of deep CC for detecting topics in short text data sets in the absence of the ground truth class cardinality during model training. Third, we showcase that these capabilities can be leveraged for the DTD setting as a step towards dynamic learning over time and finally, we release our codebase to nurture further research in this area. | null | null | 10.18653/v1/2022.evonlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,085 |
inproceedings | tukhtina-etal-2022-hse | {HSE} at {T}empo{W}i{C}: Detecting Meaning Shift in Social Media with Diachronic Language Models | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.6/ | Tukhtina, Elizaveta and Kashleva, Kseniia and Vydrina, Svetlana | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 35--38 | This paper describes our methods for temporal meaning shift detection, implemented during the TempoWiC shared task. We present two systems: with and without time span data usage. Our approaches are based on the language models fine-tuned for Twitter domain. Both systems outperformed all the competition`s baselines except TimeLMs-SIM. Our best submission achieved the macro-F1 score of 70.09{\%} and took the 7th place. This result was achieved by using diachronic language models from the TimeLMs project. | null | null | 10.18653/v1/2022.evonlp-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,086 |
inproceedings | mcgillivray-etal-2022-leveraging | Leveraging time-dependent lexical features for offensive language detection | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.7/ | McGillivray, Barbara and Alahapperuma, Malithi and Cook, Jonathan and Di Bonaventura, Chiara and Mero{\~n}o-Pe{\~n}uela, Albert and Tyson, Gareth and Wilson, Steven | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 39--54 | We present a study on the integration of time-sensitive information in lexicon-based offensive language detection systems. Our focus is on Offenseval sub-task A, aimed at detecting offensive tweets. We apply a semantic change detection algorithm over a short time span of two years to detect words whose semantics has changed and we focus particularly on those words that acquired or lost an offensive meaning between 2019 and 2020. Using the output of this semantic change detection approach, we train an SVM classifier on the Offenseval 2019 training set. We build on the already competitive SINAI system submitted to Offenseval 2019 by adding new lexical features, including those that capture the change in usage of words and their association with emerging offensive usages. We discuss the challenges, opportunities and limitations of integrating semantic change detection in offensive language detection models. Our work draws attention to an often neglected aspect of offensive language, namely that the meanings of words are constantly evolving and that NLP systems that account for this change can achieve good performance even when not trained on the most recent training data. | null | null | 10.18653/v1/2022.evonlp-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,087 |
inproceedings | godbole-etal-2022-temporal | Temporal Word Meaning Disambiguation using {T}ime{LM}s | Barbieri, Francesco and Camacho-Collados, Jose and Dhingra, Bhuwan and Espinosa-Anke, Luis and Gribovskaya, Elena and Lazaridou, Angeliki and Loureiro, Daniel and Neves, Leonardo | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.evonlp-1.8/ | Godbole, Mihir and Dandavate, Parth and Kane, Aditya | Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP) | 55--60 | Meaning of words constantly change given the events in modern civilization. Large Language Models use word embeddings, which are often static and thus cannot cope with this semantic change. Thus, it is important to resolve ambiguity in word meanings. This paper is an effort in this direction, where we explore methods for word sense disambiguation for the EvoNLP shared task. We conduct rigorous ablations for two solutions to this problem. We see that an approach using time-aware language models helps this task. Furthermore, we explore possible future directions to this problem. | null | null | 10.18653/v1/2022.evonlp-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,088 |
inproceedings | faggionato-etal-2022-nlp | {NLP} Pipeline for Annotating (Endangered) {T}ibetan and Newar Varieties | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.1/ | Faggionato, Christian and Hill, Nathan and Meelen, Marieke | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 1--6 | In this paper we present our work-in-progress on a fully-implemented pipeline to create deeply-annotated corpora of a number of historical and contemporary Tibetan and Newar varieties. Our off-the-shelf tools allow researchers to create corpora with five different layers of annotation, ranging from morphosyntactic to information-structural annotation. We build on and optimise existing tools (in line with FAIR principles), as well as develop new ones, and show how they can be adapted to other Tibetan and Newar languages, most notably modern endangered languages that are both extremely low-resourced and under-researched. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,102 |
inproceedings | tittel-2022-towards | Towards an Ontology for Toponyms in Nepalese Historical Documents | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.2/ | Tittel, Sabine | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 7--16 | Nepalese historical legal documents contain a plethora of valuable information on the history of what is today Nepal. An empirical study based on such documents enables a deep understanding of religion and ritual, legal practice, rulership, and many other aspects of the society through time. The aim of the research project {\textquoteleft}Documents on the History of Religion and Law of Pre-modern Nepal' is to make accessible a text corpus with 18 th to 20 th century documents both through cataloging and digital text editions, building a database called Documenta Nepalica. However, the lack of interoperability with other resources hampers its seamless integration into broader research contexts. To address this problem, we target the modeling of the Documenta Nepalica as Linked Data. This paper presents one module of this larger endeavour: It describes a proof of concept for an ontology for Nepalese toponyms that provides the means to classify toponyms attested in the documents and to model their entanglement with other toponyms, persons, events, and time. The ontology integrates and extends standard ontologies and increases interoperability through aligning the ontology individuals to the respective entries of geographic authority files such as GeoNames. Also, we establish a mapping of the individuals to DBpedia entities. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,103 |
inproceedings | leinonen-etal-2022-semiautomatic | Semiautomatic Speech Alignment for Under-Resourced Languages | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.3/ | Leinonen, Juho and Partanen, Niko and Virpioja, Sami and Kurimo, Mikko | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 17--21 | Cross-language forced alignment is a solution for linguists who create speech corpora for very low-resource languages. However, cross-language is an additional challenge making a complex task, forced alignment, even more difficult. We study how linguists can impart domain expertise to the tasks to increase the performance of automatic forced aligners while keeping the time effort still lower than with manual forced alignment. First, we show that speech recognizers have a clear bias in starting the word later than a human annotator, which results in micro-pauses in the results that do not exist in manual alignments, and study which is the best way to automatically remove these silences. Second, we ask the linguists to simplify the task by splitting long interview audios into shorter lengths by providing some manually aligned segments and evaluating the results of this process. We also study how correlated source language performance is to target language performance, since often it is an easier task to find a better source model than to adapt to the target language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,104 |
inproceedings | lazarenko-riaposov-2022-digitize | How to Digitize Completely: Interactive Geovizualization of a Sketch Map from the Kuzmina Archive | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.4/ | Lazarenko, Elena and Riaposov, Aleksandr | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 22--27 | This paper discusses work in progress on the digitization of a sketch map of the Taz River basin {--} a region that is lacking highly detailed open-source cartography data. The original sketch is retrieved from the archive of Selkup materials gathered by Angelina Ivanovna Kuzmina in the 1960s and 1970s. The data quality and challenges that come with it are evaluated and a task-specific workflow is designed. The process of the turning a series of hand-drawn images with non-structured geographical and linguistic data into an interactive, geographically precise digital map is described both from linguistic and technical perspectives. Furthermore, the map objects in focus are differentiated based on the geographical type of the object and the etymology of the name. This provides an insight into the peculiarities of the linguistic history of the region and contributes to the cartography of the Uralic languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,105 |
inproceedings | maier-etal-2022-word | Word Class Based Language Modeling: A Case of {U}pper {S}orbian | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.5/ | Maier, Isidor and Kuhn, Johannes and Duckhorn, Frank and Kraljevski, Ivan and Sobe, Daniel and Wolff, Matthias and Tsch{\"ope, Constanze | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 28--35 | In this paper we show how word class based language modeling can support the integration of a small language in modern applications of speech technology. The methods described in this paper can be applied for any language. We demonstrate the methods on Upper Sorbian. The word classes model the semantic expressions of numerals, date and time of day. The implementation of the created grammars was realized in the form of finite-state-transducers (FSTs) and minimalists grammars (MGs). We practically demonstrate the usage of the FSTs in a simple smart-home speech application, that is able to set wake-up alarms and appointments expressed in a variety of spontaneous and natural sentences. While the created MGs are not integrated in an application for practical use yet, they provide evidence that MGs could potentially work more efficient than FSTs in built-on applications. In particular, MGs can work with a significantly smaller lexicon size, since their more complex structure lets them generate more expressions with less items, while still avoiding wrong expressions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,106 |
inproceedings | riaposov-etal-2022-bringing | Bringing Together Version Control and Quality Assurance of Language Data with {LAMA} | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.6/ | Riaposov, Aleksandr and Lazarenko, Elena and Lehmberg, Timm | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 36--41 | This contribution reports on work in process on project specific software and digital infrastructure components used along with corpus curation workflows in the the framework of the long-term language documentation project INEL. By bringing together scientists with different levels of technical affinity in a highly interdisciplinary working environment, the project is confronted with numerous workflow related issues. Many of them result from collaborative (remote-)work on digital corpora, which, among other things, include annotation, glossing but also quality- and consistency control. In this context several steps were taken to bridge the gap between usability and the requirements of complex data curation workflows. Components of the latter such as a versioning system and semi-automated data validators on one side meet the user demands for the simplicity and minimalism on the other side. Embodying a simple shell script in an interactive graphic user interface, we augment the efficacy of the data versioning and the integration of Java-based quality control and validation tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,107 |
inproceedings | kratochvil-etal-2022-automatic | Automatic Verb Classifier for {A}bui ({AVC}-abz) | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.7/ | Kratochvil, Frantisek and Saad, George and Vomlel, Ji{\v{r}}{\'i} and Kratochv{\'i}l, V{\'a}clav | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 42--50 | We present an automatic verb classifier system that identifies inflectional classes in Abui (AVC-abz), a Papuan language of the Timor-Alor-Pantar family. The system combines manually annotated language data (the learning set) with the output of a morphological precision grammar (corpus data). The morphological precision grammar is trained on a fully glossed smaller corpus and applied to a larger corpus. Using the k-means algorithm, the system clusters inflectional classes discovered in the learning set. In the second step, Naive Bayes algorithm assigns the verbs found in the corpus data to the best-fitting cluster. AVC-abz serves to advance and refine the grammatical analysis of Abui as well as to monitor corpus coverage and its gradual improvement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,108 |
inproceedings | sucameli-etal-2022-dialogue | Dialogue Act and Slot Recognition in {I}talian Complex Dialogues | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.8/ | Sucameli, Irene and De Quattro, Michele and Eshghi, Arash and Suglia, Alessandro and Simi, Maria | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 51--60 | Since the advent of Transformer-based, pretrained language models (LM) such as BERT, Natural Language Understanding (NLU) components in the form of Dialogue Act Recognition (DAR) and Slot Recognition (SR) for dialogue systems have become both more accurate and easier to create for specific application domains. Unsurprisingly however, much of this progress has been limited to the English language, due to the existence of very large datasets in both dialogue and written form, while only few corpora are available for lower resourced languages like Italian. In this paper, we present JILDA 2.0, an enhanced version of a Italian task-oriented dialogue dataset, using it to realise a Italian NLU baseline by evaluating three of the most recent pretrained LMs: Italian BERT, Multilingual BERT, and AlBERTo for the DAR and SR tasks. Thus, this paper not only presents an updated version of a dataset characterised by complex dialogues, but it also highlights the challenges that still remain in creating effective NLU components for lower resourced languages, constituting a first step in improving NLU for Italian dialogue. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,109 |
inproceedings | makarov-etal-2022-digital | Digital Resources for the {S}hughni Language | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.9/ | Makarov, Yury and Melenchenko, Maksim and Novokshanov, Dmitry | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 61--64 | This paper describes the Shughni Documentation Project consisting of the Online Shughni Dictionary, morphological analyzer, orthography converter, and Shughni corpus. The online dictionary has not only basic functions such as finding words but also facilitates more complex tasks. Representing a lexeme as a network of database sections makes it possible to search in particular domains (e.g., in meanings only), and the system of labels facilitates conditional search queries. Apart from this, users can make search queries and view entries in different orthographies of the Shughni language and send feedback in case they spot mistakes. Editors can add, modify, or delete entries without programming skills via an intuitive interface. In future, such website architecture can be applied to creating a lexical database of Iranian languages. The morphological analyzer performs automatic analysis of Shughni texts, which is useful for linguistic research and documentation. Once the analysis is complete, homonymy resolution must be conducted so that the annotated texts are ready to be uploaded to the corpus. The analyzer makes use of the orthographic converter, which helps to tackle the problem of spelling variability in Shughni, a language with no standard literary tradition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,110 |
inproceedings | misganaw-roller-2022-german | {G}erman Dialect Identification and Mapping for Preservation and Recovery | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.10/ | Misganaw, Aynalem Tesfaye and Roller, Sabine | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 65--69 | Many linguistic projects which focus on dialects do collection of audio data, analysis, and linguistic interpretation on the data. The outcomes of such projects are good language resources because dialects are among less-resources languages as most of them are oral traditions. Our project Dialektatlas Mittleres Westdeutschland (DMW) 1 focuses on the study of German language varieties through collection of audio data of words and phrases which are selected by linguistic experts based on the linguistic significance of the words (and phrases) to distinguish dialects among each other. We used a total of 7,814 audio snippets of the words and phrases of eight different dialects from middle west Germany. We employed a multilabel classification approach to address the problem of dialect mapping using Support Vector Machine (SVM) algorithm. The experimental result showed a promising accuracy of 87{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,111 |
inproceedings | jamal-etal-2022-exploring | Exploring Transfer Learning for {U}rdu Speech Synthesis | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.11/ | Jamal, Sahar and Abdul Rauf, Sadaf and Majid, Quratulain | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 70--74 | Neural methods in Text to Speech synthesis (TTS) have demonstrated momentous advancement in terms of the naturalness and intelligibility of the synthesized speech. In this paper we present neural speech synthesis system for Urdu language, a low resource language. The main challenge faced for this study was the non-availability of any publicly available Urdu speech synthesis corpora. Urdu speech corpus was created using audio books and synthetic speech generation. To leverage the low resource scenario we adopted transfer learning for our experiments where knowledge extracted is further used to train the model using a relatively smaller Urdu training data set. The results from this model show satisfactory results, though a good margin for improvement exists and we are working to improve it further. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,112 |
inproceedings | bhattacharyya-jana-2022-towards | Towards {B}engali {W}ord{N}et Enrichment using Knowledge Graph Completion Techniques | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.12/ | Bhattacharyya, Sree and Jana, Abhik | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 75--80 | WordNet serves as a very essential knowledge source for various downstream Natural Language Processing (NLP) tasks. Since this is a human-curated resource, building such a resource is very cumbersome and time-consuming. Even though for languages like English, the existing WordNet is reasonably rich in terms of coverage, for resource-poor languages like Bengali, the WordNet is far from being reasonably sufficient in terms of coverage of vocabulary and relations between them. In this paper, we investigate the usefulness of some of the existing knowledge graph completion algorithms to enrich Bengali WordNet automatically. We explore three such techniques namely DistMult, ComplEx, and HolE, and analyze their effectiveness for adding more relations between existing nodes in the WordNet. We achieve maximum Hits@1 of 0.412 and Hits@10 of 0.703, which look very promising for low resource languages like Bengali. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,113 |
inproceedings | awale-jana-2022-enriching | Enriching {H}indi {W}ord{N}et Using Knowledge Graph Completion Approach | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.13/ | Awale, Sushil and Jana, Abhik | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 81--85 | Even though the use of WordNet in the Natural Language Processing domain is unquestionable, creating and maintaining WordNet is a cumbersome job and it is even difficult for low resource languages like Hindi. In this study, we aim to enrich the Hindi WordNet automatically by using state-of-the-art knowledge graph completion (KGC) approaches. We pose the automatic Hindi WordNet enrichment problem as a knowledge graph completion task and therefore we modify the WordNet structure to make it appropriate for applying KGC approaches. Second, we attempt five KGC approaches of three different genres and compare the performances for the task. Our study shows that ConvE is the best KGC methodology for this specific task compared to other KGC approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,114 |
inproceedings | ahltorp-etal-2022-digital | A Digital {S}wedish-{Y}iddish/{Y}iddish-{S}wedish Dictionary: A Web-Based Dictionary that is also Available Offline | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.14/ | Ahltorp, Magnus and Hessel, Jean and Eriksson, Gunnar and Skeppstedt, Maria and Domeij, Rickard | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 86--87 | Yiddish is one of the national minority languages of Sweden, and one of the languages for which the Swedish Institute for Language and Folklore is responsible for developing useful language resources. We here describe the web-based version of a Swedish-Yiddish/Yiddish-Swedish dictionary. The single search field of the web-based dictionary is used for incrementally searching all three components of the dictionary entries (the word in Swedish, the word in Yiddish with Hebrew characters and the transliteration in Latin script). When the user accesses the dictionary in an online mode, the dictionary is saved in the web browser, which makes it possible to also use the dictionary offline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,115 |
inproceedings | wehar-huttenrauch-2022-online | An Online Dictionary for Dialects of {N}orth {F}risian | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.15/ | Wehar, Michael and H{\"uttenrauch, Tanno | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 88--89 | Language is an essential part of communication and culture. Documenting, digitizing, and preserving language is a meaningful pursuit. The first author of this work is a speaker of S{\"ol`ring which is a dialect of the North Frisian language spoken on the island of Sylt in the North Frisia region of Germany. S{\"ol`ring is estimated to have only hundreds of native speakers and very limited online language resources making it a prime candidate for language preservation initiatives. To help preserve S{\"ol`ring and provide resources for S{\"ol`ring speakers and learners, we built an online dictionary. Our dictionary, called friisk.org, provides translations for over 28,000 common German words to S{\"ol`ring. In addition, our dictionary supports translations for S{\"ol`ring to German, spell checking for S{\"ol`ring, conjugations for common S{\"ol`ring verbs, and an experimental transcriber from S{\"ol`ring to IPA for pronunciations. Following the release of our online dictionary, we collaborated with neighboring communities to add limited support for additional North Frisian dialects including Fering, Halligen Frisian, Karrharder, Nordergoesharder, {\"O{\"omrang, and Wiedingharder. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,116 |
inproceedings | singh-etal-2022-towards | Towards a Unified Tool for the Management of Data and Technologies in Field Linguistics and Computational Linguistics - {L}i{FE} | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.16/ | Singh, Siddharth and Kumar, Ritesh and Ratan, Shyam and Sinha, Sonal | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 90--94 | The paper presents a new software - Linguistic Field Data Management and Analysis System - LiFE for endangered and low-resourced languages - an open-source, web-based linguistic data analysis and management application allowing systematic storage, management, usage and sharing of linguistic data collected from the field. The application enables users to store lexical items, sentences, paragraphs, audio-visual content including photographs, video clips, speech recordings, etc, with rich glossing and annotation. For field linguists, it provides facilities to generate interactive and print dictionaries; for NLP practitioners, it provides the data storage and representation in standard formats such as RDF, JSON and CSV. The tool provides a one-click interface to train NLP models for various tasks using the data stored in the system and then use it for assistance in further storage of the data (especially for the field linguists). At the same time, the tool also provides the facility of using the models trained outside of the tool for data storage, transcription, annotation and other tasks. The web-based application, allows for seamless collaboration among multiple persons and sharing the data, models, etc with each other. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,117 |
inproceedings | taguchi-etal-2022-universal | {U}niversal {D}ependencies Treebank for {T}atar: Incorporating Intra-Word Code-Switching Information | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.17/ | Taguchi, Chihiro and Iwata, Sei and Watanabe, Taro | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 95--104 | This paper introduces a new Universal Dependencies treebank for the Tatar language named NMCTT. A significant feature of the corpus is that it includes code-switching (CS) information at a morpheme level, given the fact that Tatar texts contain intra-word CS between Tatar and Russian. We first outline NMCTT with a focus on differences from other treebanks of Turkic languages. Then, to evaluate the merit of the CS annotation, this study concisely reports the results of a language identification task implemented with Conditional Random Fields that considers POS tag information, which is readily available in treebanks in the CoNLL-U format. Experimenting on NMCTT and the Turkish-German CS treebank (SAGT), we demonstrate that the proposed annotation scheme introduced in NMCTT can improve the performance of the subword-level language identification. This annotation scheme for CS is not only universally applicable to languages with CS, but also shows a possibility to employ morphosyntactic information for CS-related downstream tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,118 |
inproceedings | oktem-etal-2022-preparing | Preparing an endangered language for the digital age: The Case of {J}udeo-{S}panish | Ojha, Atul Kr. and Ahmadi, Sina and Liu, Chao-Hong and McCrae, John P. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.eurali-1.18/ | {\"Oktem, Alp and Zevallos, Rodolfo and Moslem, Yasmin and {\"Ozt{\"urk, {\"Ozg{\"ur G{\"une{\c{s and Gerson {\c{Sarhon, Karen | Proceedings of the Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia within the 13th Language Resources and Evaluation Conference | 105--110 | We develop machine translation and speech synthesis systems to complement the efforts of revitalizing Judeo-Spanish, the exiled language of Sephardic Jews, which survived for centuries, but now faces the threat of extinction in the digital age. Building on resources created by the Sephardic community of Turkey and elsewhere, we create corpora and tools that would help preserve this language for future generations. For machine translation, we first develop a Spanish to Judeo-Spanish rule-based machine translation system, in order to generate large volumes of synthetic parallel data in the relevant language pairs: Turkish, English and Spanish. Then, we train baseline neural machine translation engines using this synthetic data and authentic parallel data created from translations by the Sephardic community. For text-to-speech synthesis, we present a 3.5-hour single speaker speech corpus for building a neural speech synthesis engine. Resources, model weights and online inference engines are shared publicly. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,119 |
inproceedings | reelfs-etal-2022-interpreting | Interpreting Emoji with Emoji | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.1/ | Reelfs, Jens and Mohaupt, Timon and Sikdar, Sandipan and Strohmaier, Markus and Hohlfeld, Oliver | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 1--10 | We study the extent to which emoji can be used to add interpretability to embeddings of text and emoji. To do so, we extend the POLAR-framework that transforms word embeddings to interpretable counterparts and apply it to word-emoji embeddings trained on four years of messaging data from the Jodel social network. We devise a crowdsourced human judgement experiment to study six usecases, evaluating against words only, what role emoji can play in adding interpretability to word embeddings. That is, we use a revised POLAR approach interpreting words and emoji with words, emoji or both according to human judgement. We find statistically significant trends demonstrating that emoji can be used to interpret other emoji very well. | null | null | 10.18653/v1/2022.emoji-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,121 |
inproceedings | meloni-etal-2022-beyond | Beyond emojis: an insight into the {IKON} language | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.2/ | Meloni, Laura and Hitmeangsong, Phimolporn and Appelhaus, Bernhard and Walthert, Edgar and Reale, Cesco | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 11--20 | This paper presents a new iconic language, the IKON language, and its philosophical, linguistic, and graphical principles. We examine some case studies to highlight the semantic complexity of the visual representation of meanings. We also introduce the Iconometer test to validate our icons and their application to the medical domain, through the creation of iconic sentences. | null | null | 10.18653/v1/2022.emoji-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,122 |
inproceedings | weissman-2022-emoji | Emoji semantics/pragmatics: investigating commitment and lying | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.3/ | Weissman, Benjamin | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 21--28 | This paper presents the results of two experiments investigating the directness of emoji in constituting speaker meaning. This relationship is examined in two ways, with Experiment 1 testing whether speakers are committed to meanings they communicate via a single emoji and Experiment 2 testing whether that speaker is taken to have lied if that meaning is false and intended to deceive. Results indicate that emoji with high meaning agreement in general (i.e., pictorial representations of concrete objects or foods) reliably commit the speaker to that meaning and can constitute lying. Expressive emoji representing facial expressions and emotional states demonstrate a range of commitment and lie ratings: those with high meaning agreement constitute more commitment and more of a lie than those with less meaning agreement in the first place. Emoji can constitute speaker commitment and they can be lies, but this result does not apply uniformly to all emoji and is instead tied to agreement, conventionality, and lexicalization. | null | null | 10.18653/v1/2022.emoji-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,123 |
inproceedings | grover-banati-2022-understanding | Understanding the Sarcastic Nature of Emojis with {S}arc{O}ji | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.4/ | Grover, Vandita and Banati, Hema | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 29--39 | Identifying sarcasm is a challenging research problem owing to its highly contextual nature. Several researchers have attempted numerous mechanisms to incorporate context, linguistic aspects, and supervised and semi-supervised techniques to determine sarcasm. It has also been noted that emojis in a text may also hold key indicators of sarcasm. However, the availability of sarcasm datasets with emojis is scarce. This makes it challenging to effectively study the sarcastic nature of emojis. In this work, we present SarcOji which has been compiled from five publicly available sarcasm datasets. SarcOji contains labeled English texts which all have emojis. We also analyze SarcOji to determine if there is an incongruence in the polarity of text and emojis used therein. Further, emojis' usage, occurrences, and positions in the context of sarcasm are also studied in this compiled dataset. With SarcOji we have been able to demonstrate that frequency of occurrence of an emoji and its position are strong indicators of sarcasm. SarcOji dataset is now publicly available with several derived features like sentiment scores of text and emojis, most frequent emoji, and its position in the text. Compilation of the SarcOji dataset is an initial step to enable the study of the role of emojis in communicating sarcasm. SarcOji dataset can also serve as a go-to dataset for various emoji-based sarcasm detection techniques. | null | null | 10.18653/v1/2022.emoji-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,124 |
inproceedings | ge-stadnyk-sa-2022-conducting | Conducting Cross-Cultural Research on {COVID}-19 Memes | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.5/ | Ge-Stadnyk, Jing and Sa, Lusha | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 40--46 | A cross-linguistic study of COVID-19 memes should allow scholars and professionals to gain insight into how people engage in socially and politically important issues and how culture has influenced societal responses to the global pandemic. This preliminary study employs framing analysis to examine and compare issues, actors and stances conveyed by both English and Chinese memes. The overall findings point to divergence in the way individuals communicate pandemic-related issues in English-speaking countries versus China, although a few similarities were also identified. {\textquoteleft}Regulation' is the most common issue addressed by both English and Chinese memes, though the latter does so at a comparatively higher rate. The {\textquoteleft}ordinary people' image within these memes accounts for the largest percentage in both data sets. Although both Chinese and English memes primarily express negative emotions, the former often occurs on an interpersonal level, whereas the latter aims at criticizing society and certain group of people in general. Lastly, this study proposes explanations for these findings in terms of culture and political environment. | null | null | 10.18653/v1/2022.emoji-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,125 |
inproceedings | iarygina-2022-investigating | Investigating the Influence of Users Personality on the Ambiguous Emoji Perception | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.6/ | Iarygina, Olga | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 47--62 | Emojis are an integral part of Internet communication nowadays. Even though, they are supposed to make the text clearer and less dubious, some emojis are ambiguous and can be interpreted in different ways. One of the factors that determine the perception of emojis is the user`s personality. In this work, I conducted an experimental study and investigated how personality traits, measured with a Big Five Inventory (BFI) questionnaire, affect reaction time when interpreting emoji. For a set of emoji, for which there are several possible interpretations, participants had to determine whether the emoji fits the presented context or not. Using regression analysis, I found that conscientiousness and neuroticism significantly predict the reaction time the person needs to decide about the emoji. More conscientious people take longer to resolve ambiguity, while more neurotic people make decisions about ambiguous emoji faster. The knowledge of the relationship between personality and emoji interpretation can lead to effective use of knowledge of people`s characters in personalizing interactive computer systems. | null | null | 10.18653/v1/2022.emoji-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,126 |
inproceedings | christofalos-etal-2022-semantic | Semantic Congruency Facilitates Memory for Emojis | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.7/ | Christofalos, Andriana L. and Feldman, Laurie Beth and Sheridan, Heather | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 63--68 | Emojis can assume different relations with the sentence context in which they occur. While affective elaboration and emoji-word redundancy are frequently investigated in laboratory experiments, the role of emojis in inferential processes has received much less attention. Here, we used an online ratings task and a recognition memory task to investigate whether differences in emoji function within a sentence affect judgments of emoji-text coherence and subsequent recognition accuracy. Emojis that function as synonyms of a target word from the passages were rated as better fitting with the passage (more coherent) than emojis consistent with an inference from the passage, and both types of emojis were rated as more coherent than incongruent (unrelated) emojis. In a recognition test, emojis consistent with the semantic content of passages (synonym and inference emojis) were better recognized than incongruent emojis. Findings of the present study provide corroborating evidence that readers extract semantic information from emojis and then integrate it with surrounding passage content. | null | null | 10.18653/v1/2022.emoji-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,127 |
inproceedings | feng-etal-2022-emojicloud | {E}moji{C}loud: a Tool for Emoji Cloud Visualization | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.8/ | Feng, Yunhe and Guo, Cheng and Wen, Bingbing and Sun, Peng and Yue, Yufei and Tao, Dingwen | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 69--74 | This paper proposes EmojiCloud, an open-source Python-based emoji cloud visualization tool, to generate a quick and straightforward understanding of emojis from the perspective of frequency and importance. EmojiCloud is flexible enough to support diverse drawing shapes, such as rectangles, ellipses, and image masked canvases. We also follow inclusive and personalized design principles to cover the unique emoji designs from seven emoji vendors (e.g., Twitter, Apple, and Windows) and allow users to customize plotted emojis and background colors. We hope EmojiCloud can benefit the whole emoji community due to its flexibility, inclusiveness, and customizability. | null | null | 10.18653/v1/2022.emoji-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,128 |
inproceedings | zhang-etal-2022-graphicon | Graphicon Evolution on the {C}hinese Social Media Platform {B}ili{B}ili | Wijeratne, Sanjaya and Lee, Jennifer and Saggion, Horacio and Sheth, Amit | jul | 2022 | Seattle, Washington, USA | Association for Computational Linguistics | https://aclanthology.org/2022.emoji-1.9/ | Zhang, Yiqiong and Herring, Susan and Gan, Suifu | Proceedings of the Fifth International Workshop on Emoji Understanding and Applications in Social Media | 75--85 | This study examines the evolutionary trajectory of graphicons in a 13-year corpus of comments from BiliBili, a popular Chinese video-sharing platform. Findings show that emoticons (kaomoji) rose and fell in frequency, while emojis and stickers are both presently on the rise. Graphicon distributions differ in comments and replies to comments. There is also a strong correlation between the types of graphicons used in comments and their corresponding replies, suggesting a priming effect. Finally, qualitative analysis of the 10 most-frequent kaomojis, emojis, and stickers reveals a trend for each successive graphicon type to become less about emotion expression and more integrated with platform-specific culture and the Chinese language. These findings lend partial support to claims in the literature about graphicon evolution. | null | null | 10.18653/v1/2022.emoji-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,129 |
inproceedings | ye-etal-2022-generative | Generative Knowledge Graph Construction: A Review | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.1/ | Ye, Hongbin and Zhang, Ningyu and Chen, Hui and Chen, Huajun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 1--17 | Generative Knowledge Graph Construction (KGC) refers to those methods that leverage the sequence-to-sequence framework for building knowledge graphs, which is flexible and can be adapted to widespread tasks. In this study, we summarize the recent compelling progress in generative knowledge graph construction. We present the advantages and weaknesses of each paradigm in terms of different generation targets and provide theoretical insight and empirical analysis. Based on the review, we suggest promising research directions for the future. Our contributions are threefold: (1) We present a detailed, complete taxonomy for the generative KGC methods; (2) We provide a theoretical and empirical analysis of the generative KGC methods; (3) We propose several research directions that can be developed in the future. | null | null | 10.18653/v1/2022.emnlp-main.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,131 |
inproceedings | zheng-etal-2022-cdconv | {CDC}onv: A Benchmark for Contradiction Detection in {C}hinese Conversations | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.2/ | Zheng, Chujie and Zhou, Jinfeng and Zheng, Yinhe and Peng, Libiao and Guo, Zhen and Wu, Wenquan and Niu, Zheng-Yu and Wu, Hua and Huang, Minlie | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 18--29 | Dialogue contradiction is a critical issue in open-domain dialogue systems. The contextualization nature of conversations makes dialogue contradiction detection rather challenging. In this work, we propose a benchmark for Contradiction Detection in Chinese Conversations, namely CDConv. It contains 12K multi-turn conversations annotated with three typical contradiction categories: Intra-sentence Contradiction, Role Confusion, and History Contradiction. To efficiently construct the CDConv conversations, we devise a series of methods for automatic conversation generation, which simulate common user behaviors that trigger chatbots to make contradictions. We conduct careful manual quality screening of the constructed conversations and show that state-of-the-art Chinese chatbots can be easily goaded into making contradictions. Experiments on CDConv show that properly modeling contextual information is critical for dialogue contradiction detection, but there are still unresolved challenges that require future research. | null | null | 10.18653/v1/2022.emnlp-main.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,132 |
inproceedings | geva-etal-2022-transformer | Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.3/ | Geva, Mor and Caciularu, Avi and Wang, Kevin and Goldberg, Yoav | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 30--45 | Transformer-based language models (LMs) are at the core of modern NLP, but their internal prediction construction process is opaque and largely not understood. In this work, we make a substantial step towards unveiling this underlying prediction process, by reverse-engineering the operation of the feed-forward network (FFN) layers, one of the building blocks of transformer models. We view the token representation as a changing distribution over the vocabulary, and the output from each FFN layer as an additive update to that distribution. Then, we analyze the FFN updates in the vocabulary space, showing that each update can be decomposed to sub-updates corresponding to single FFN parameter vectors, each promoting concepts that are often human-interpretable. We then leverage these findings for controlling LM predictions, where we reduce the toxicity of GPT2 by almost 50{\%}, and for improving computation efficiency with a simple early exit rule, saving 20{\%} of computation on average. | null | null | 10.18653/v1/2022.emnlp-main.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,133 |
inproceedings | wang-etal-2022-learning-generate | Learning to Generate Question by Asking Question: A Primal-Dual Approach with Uncommon Word Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.4/ | Wang, Qifan and Yang, Li and Quan, Xiaojun and Feng, Fuli and Liu, Dongfang and Xu, Zenglin and Wang, Sinong and Ma, Hao | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 46--61 | Automatic question generation (AQG) is the task of generating a question from a given passage and an answer. Most existing AQG methods aim at encoding the passage and the answer to generate the question. However, limited work has focused on modeling the correlation between the target answer and the generated question. Moreover, unseen or rare word generation has not been studied in previous works. In this paper, we propose a novel approach which incorporates question generation with its dual problem, question answering, into a unified primal-dual framework. Specifically, the question generation component consists of an encoder that jointly encodes the answer with the passage, and a decoder that produces the question. The question answering component then re-asks the generated question on the passage to ensure that the target answer is obtained. We further introduce a knowledge distillation module to improve the model generalization ability. We conduct an extensive set of experiments on SQuAD and HotpotQA benchmarks. Experimental results demonstrate the superior performance of the proposed approach over several state-of-the-art methods. | null | null | 10.18653/v1/2022.emnlp-main.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,134 |
inproceedings | li-qian-2022-graph | Graph-based Model Generation for Few-Shot Relation Extraction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.5/ | Li, Wanli and Qian, Tieyun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 62--71 | Few-shot relation extraction (FSRE) has been a challenging problem since it only has a handful of training instances. Existing models follow a {\textquoteleft}one-for-all' scheme where one general large model performs all individual N-way-K-shot tasks in FSRE, which prevents the model from achieving the optimal point on each task. In view of this, we propose a model generation framework that consists of one general model for all tasks and many tiny task-specific models for each individual task. The general model generates and passes the universal knowledge to the tiny models which will be further fine-tuned when performing specific tasks. In this way, we decouple the complexity of the entire task space from that of all individual tasks while absorbing the universal knowledge.Extensive experimental results on two public datasets demonstrate that our framework reaches a new state-of-the-art performance for FRSE tasks. Our code is available at: https://github.com/NLPWM-WHU/GM{\_}GEN. | null | null | 10.18653/v1/2022.emnlp-main.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,135 |
inproceedings | yoo-kwak-2022-backdoor | Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.6/ | Yoo, Ki Yoon and Kwak, Nojun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 72--88 | Recent advances in federated learning have demonstrated its promising capability to learn on decentralized datasets. However, a considerable amount of work has raised concerns due to the potential risks of adversaries participating in the framework to poison the global model for an adversarial purpose. This paper investigates the feasibility of model poisoning for backdoor attacks through rare word embeddings of NLP models. In text classification, less than 1{\%} of adversary clients suffices to manipulate the model output without any drop in the performance of clean sentences. For a less complex dataset, a mere 0.1{\%} of adversary clients is enough to poison the global model effectively. We also propose a technique specialized in the federated learning scheme called gradient ensemble, which enhances the backdoor performance in all experimental settings. | null | null | 10.18653/v1/2022.emnlp-main.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,136 |
inproceedings | yang-etal-2022-generating | Generating Natural Language Proofs with Verifier-Guided Search | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.7/ | Yang, Kaiyu and Deng, Jia and Chen, Danqi | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 89--105 | Reasoning over natural language is a challenging problem in NLP. In this work, we focus on proof generation: Given a hypothesis and a set of supporting facts, the model generates a proof tree indicating how to derive the hypothesis from supporting facts. Compared to generating the entire proof in one shot, stepwise generation can better exploit the compositionality and generalize to longer proofs but has achieved limited success on real-world data. Existing stepwise methods struggle to generate proof steps that are both logically valid and relevant to the hypothesis. Instead, they tend to hallucinate invalid steps given the hypothesis. In this paper, we present a novel stepwise method, NLProofS (Natural Language Proof Search), which learns to generate relevant steps conditioning on the hypothesis. At the core of our approach, we train an independent verifier to check the validity of the proof steps to prevent hallucination. Instead of generating steps greedily, we search for proofs maximizing a global proof score judged by the verifier. NLProofS achieves state-of-the-art performance on EntailmentBank and RuleTaker. Specifically, it improves the correctness of predicted proofs from 27.7{\%} to 33.3{\%} in the distractor setting of EntailmentBank, demonstrating the effectiveness of NLProofS in generating challenging human-authored proofs. | null | null | 10.18653/v1/2022.emnlp-main.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,137 |
inproceedings | cho-etal-2022-toward | Toward Unifying Text Segmentation and Long Document Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.8/ | Cho, Sangwoo and Song, Kaiqiang and Wang, Xiaoyang and Liu, Fei and Yu, Dong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 106--118 | Text segmentation is important for signaling a document`s structure. Without segmenting a long document into topically coherent sections, it is difficult for readers to comprehend the text, let alone find important information. The problem is only exacerbated by a lack of segmentation in transcripts of audio/video recordings. In this paper, we explore the role that section segmentation plays in extractive summarization of written and spoken documents. Our approach learns robust sentence representations by performing summarization and segmentation simultaneously, which is further enhanced by an optimization-based regularizer to promote selection of diverse summary sentences. We conduct experiments on multiple datasets ranging from scientific articles to spoken transcripts to evaluate the model`s performance. Our findings suggest that the model can not only achieve state-of-the-art performance on publicly available benchmarks, but demonstrate better cross-genre transferability when equipped with text segmentation. We perform a series of analyses to quantify the impact of section segmentation on summarizing written and spoken documents of substantial length and complexity. | null | null | 10.18653/v1/2022.emnlp-main.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,138 |
inproceedings | chang-etal-2022-geometry | The Geometry of Multilingual Language Model Representations | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.9/ | Chang, Tyler and Tu, Zhuowen and Bergen, Benjamin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 119--136 | We assess how multilingual language models maintain a shared multilingual representation space while still encoding language-sensitive information in each language. Using XLM-R as a case study, we show that languages occupy similar linear subspaces after mean-centering, evaluated based on causal effects on language modeling performance and direct comparisons between subspaces for 88 languages. The subspace means differ along language-sensitive axes that are relatively stable throughout middle layers, and these axes encode information such as token vocabularies. Shifting representations by language means is sufficient to induce token predictions in different languages. However, we also identify stable language-neutral axes that encode information such as token positions and part-of-speech. We visualize representations projected onto language-sensitive and language-neutral axes, identifying language family and part-of-speech clusters, along with spirals, toruses, and curves representing token position information. These results demonstrate that multilingual language models encode information along orthogonal language-sensitive and language-neutral axes, allowing the models to extract a variety of features for downstream tasks and cross-lingual transfer learning. | null | null | 10.18653/v1/2022.emnlp-main.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,139 |
inproceedings | tang-etal-2022-improving | Improving Complex Knowledge Base Question Answering via Question-to-Action and Question-to-Question Alignment | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.10/ | Tang, Yechun and Cheng, Xiaoxia and Lu, Weiming | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 137--147 | Complex knowledge base question answering can be achieved by converting questions into sequences of predefined actions. However, there is a significant semantic and structural gap between natural language and action sequences, which makes this conversion difficult. In this paper, we introduce an alignment-enhanced complex question answering framework, called ALCQA, which mitigates this gap through question-to-action alignment and question-to-question alignment. We train a question rewriting model to align the question and each action, and utilize a pretrained language model to implicitly align the question and KG artifacts. Moreover, considering that similar questions correspond to similar action sequences, we retrieve top-k similar question-answer pairs at the inference stage through question-to-question alignment and propose a novel reward-guided action sequence selection strategy to select from candidate action sequences. We conduct experiments on CQA and WQSP datasets, and the results show that our approach outperforms state-of-the-art methods and obtains a 9.88{\%} improvements in the F1 metric on CQA dataset. Our source code is available at \url{https://github.com/TTTTTTTTy/ALCQA}. | null | null | 10.18653/v1/2022.emnlp-main.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,140 |
inproceedings | min-etal-2022-pair | {PAIR}: Prompt-Aware marg{I}n Ranking for Counselor Reflection Scoring in Motivational Interviewing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.11/ | Min, Do June and P{\'e}rez-Rosas, Ver{\'o}nica and Resnicow, Kenneth and Mihalcea, Rada | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 148--158 | Counselor reflection is a core verbal skill used by mental health counselors to express understanding and affirmation of the client`s experience and concerns. In this paper, we propose a system for the analysis of counselor reflections. Specifically, our system takes as input one dialog turn containing a client prompt and a counselor response, and outputs a score indicating the level of reflection in the counselor response. We compile a dataset consisting of different levels of reflective listening skills, and propose the Prompt-Aware margIn Ranking (PAIR) framework that contrasts positive and negative prompt and response pairs using specially designed multi-gap and prompt-aware margin ranking losses. Through empirical evaluations and deployment of our system in a real-life educational environment, we show that our analysis model outperforms several baselines on different metrics, and can be used to provide useful feedback to counseling trainees. | null | null | 10.18653/v1/2022.emnlp-main.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,141 |
inproceedings | xing-tsang-2022-co | Co-guiding Net: Achieving Mutual Guidances between Multiple Intent Detection and Slot Filling via Heterogeneous Semantics-Label Graphs | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.12/ | Xing, Bowen and Tsang, Ivor | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 159--169 | Recent graph-based models for joint multiple intent detection and slot filling have obtained promising results through modeling the guidance from the prediction of intents to the decoding of slot filling.However, existing methods (1) only model the \textit{unidirectional guidance} from intent to slot; (2) adopt \textit{homogeneous graphs} to model the interactions between the slot semantics nodes and intent label nodes, which limit the performance.In this paper, we propose a novel model termed Co-guiding Net, which implements a two-stage framework achieving the \textit{mutual guidances} between the two tasks.In the first stage, the initial estimated labels of both tasks are produced, and then they are leveraged in the second stage to model the mutual guidances.Specifically, we propose two \textit{heterogeneous graph attention networks} working on the proposed two \textit{heterogeneous semantics-label graphs}, which effectively represent the relations among the semantics nodes and label nodes.Experiment results show that our model outperforms existing models by a large margin, obtaining a relative improvement of 19.3{\%} over the previous best model on MixATIS dataset in overall accuracy. | null | null | 10.18653/v1/2022.emnlp-main.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,142 |
inproceedings | xu-etal-2022-importance | The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.13/ | Xu, Haoran and Koehn, Philipp and Murray, Kenton | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 170--183 | Recent model pruning methods have demonstrated the ability to remove redundant parameters without sacrificing model performance. Common methods remove redundant parameters according to the parameter sensitivity, a gradient-based measure reflecting the contribution of the parameters. In this paper, however, we argue that redundant parameters can be trained to make beneficial contributions. We first highlight the large sensitivity (contribution) gap among high-sensitivity and low-sensitivity parameters and show that the model generalization performance can be significantly improved after balancing the contribution of all parameters. Our goal is to balance the sensitivity of all parameters and encourage all of them to contribute equally. We propose a general task-agnostic method, namely intra-distillation, appended to the regular training loss to balance parameter sensitivity. Moreover, we also design a novel adaptive learning method to control the strength of intra-distillation loss for faster convergence. Our experiments show the strong effectiveness of our methods on machine translation, natural language understanding, and zero-shot cross-lingual transfer across up to 48 languages, e.g., a gain of 3.54 BLEU on average across 8 language pairs from the IWSLT`14 dataset. | null | null | 10.18653/v1/2022.emnlp-main.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,143 |
inproceedings | yin-neubig-2022-interpreting | Interpreting Language Models with Contrastive Explanations | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.14/ | Yin, Kayo and Neubig, Graham | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 184--198 | Model interpretability methods are often used to explain NLP model decisions on tasks such as text classification, where the output space is relatively small. However, when applied to language generation, where the output space often consists of tens of thousands of tokens, these methods are unable to provide informative explanations. Language models must consider various features to predict a token, such as its part of speech, number, tense, or semantics.Existing explanation methods conflate evidence for all these features into a single explanation, which is less interpretable for human understanding.To disentangle the different decisions in language modeling, we focus on explaining language models contrastively: we look for salient input tokens that explain why the model predicted one token instead of another. We demonstrate that contrastive explanations are quantifiably better than non-contrastive explanations in verifying major grammatical phenomena, and that they significantly improve contrastive model simulatability for human observers. We also identify groups of contrastive decisions where the model uses similar evidence, and we are able to characterize what input tokens models use during various language generation decisions. | null | null | 10.18653/v1/2022.emnlp-main.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,144 |
inproceedings | krishna-etal-2022-rankgen | {R}ank{G}en: Improving Text Generation with Large Ranking Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.15/ | Krishna, Kalpesh and Chang, Yapei and Wieting, John and Iyyer, Mohit | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 199--232 | Given an input sequence (or prefix), modern language models often assign high probabilities to output sequences that are repetitive, incoherent, or irrelevant to the prefix; as such, model-generated text also contains such artifacts. To address these issues we present RankGen, a 1.2B parameter encoder model for English that scores model generations given a prefix. RankGen can be flexibly incorporated as a scoring function in beam search and used to decode from any pretrained language model. We train RankGen using large-scale contrastive learning to map a prefix close to the ground-truth sequence that follows it and far away from two types of negatives: (1) random sequences from the same document as the prefix, and (2) sequences generated from a large language model conditioned on the prefix. Experiments across four different language models (345M-11B parameters) and two domains show that RankGen significantly outperforms decoding algorithms like nucleus, top-k, and typical sampling on both automatic metrics (85.0 vs 77.3 MAUVE) as well as human evaluations with English writers (74.5{\%} human preference over nucleus sampling). Analysis reveals that RankGen outputs are more relevant to the prefix and improve continuity and coherence compared to baselines. We release our model checkpoints, code, and human preference data with explanations to facilitate future research. | null | null | 10.18653/v1/2022.emnlp-main.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,145 |
inproceedings | zhang-etal-2022-learning-grammar | Learning a Grammar Inducer from Massive Uncurated Instructional Videos | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.16/ | Zhang, Songyang and Song, Linfeng and Jin, Lifeng and Mi, Haitao and Xu, Kun and Yu, Dong and Luo, Jiebo | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 233--247 | Video-aided grammar induction aims to leverage video information for finding more accurate syntactic grammars for accompanying text. While previous work focuses on building systems for inducing grammars on text that are well-aligned with video content, we investigate the scenario, in which text and video are only in loose correspondence. Such data can be found in abundance online, and the weak correspondence is similar to the indeterminacy problem studied in language acquisition. Furthermore, we build a new model that can better learn video-span correlation without manually designed features adopted by previous work. Experiments show that our model trained only on large-scale YouTube data with no text-video alignment reports strong and robust performances across three unseen datasets, despite domain shift and noisy label issues. Furthermore our model yields higher F1 scores than the previous state-of-the-art systems trained on in-domain data. | null | null | 10.18653/v1/2022.emnlp-main.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,146 |
inproceedings | park-etal-2022-normalized | Normalized Contrastive Learning for Text-Video Retrieval | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.17/ | Park, Yookoon and Azab, Mahmoud and Moon, Seungwhan and Xiong, Bo and Metze, Florian and Kundu, Gourab and Ahmed, Kirmani | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 248--260 | Cross-modal contrastive learning has led the recent advances in multimodal retrieval with its simplicity and effectiveness. In this work, however, we reveal that cross-modal contrastive learning suffers from incorrect normalization of the sum retrieval probabilities of each text or video instance. Specifically, we show that many test instances are either over- or under-represented during retrieval, significantly hurting the retrieval performance. To address this problem, we propose Normalized Contrastive Learning (NCL) which utilizes the Sinkhorn-Knopp algorithm to compute the instance-wise biases that properly normalize the sum retrieval probabilities of each instance so that every text and video instance is fairly represented during cross-modal retrieval. Empirical study shows that NCL brings consistent and significant gains in text-video retrieval on different model architectures, with new state-of-the-art multimodal retrieval metrics on the ActivityNet, MSVD, and MSR-VTT datasets without any architecture engineering. | null | null | 10.18653/v1/2022.emnlp-main.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,147 |
inproceedings | lang-etal-2022-estimating | Estimating Soft Labels for Out-of-Domain Intent Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.18/ | Lang, Hao and Zheng, Yinhe and Sun, Jian and Huang, Fei and Si, Luo and Li, Yongbin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 261--276 | Out-of-Domain (OOD) intent detection is important for practical dialog systems. To alleviate the issue of lacking OOD training samples, some works propose synthesizing pseudo OOD samples and directly assigning one-hot OOD labels to these pseudo samples. However, these one-hot labels introduce noises to the training process because some {\textquotedblleft}hard{\textquotedblright} pseudo OOD samples may coincide with In-Domain (IND) intents. In this paper, we propose an adaptive soft pseudo labeling (ASoul) method that can estimate soft labels for pseudo OOD samples when training OOD detectors. Semantic connections between pseudo OOD samples and IND intents are captured using an embedding graph. A co-training framework is further introduced to produce resulting soft labels following the smoothness assumption, i.e., close samples are likely to have similar labels. Extensive experiments on three benchmark datasets show that ASoul consistently improves the OOD detection performance and outperforms various competitive baselines. | null | null | 10.18653/v1/2022.emnlp-main.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,148 |
inproceedings | yeh-etal-2022-multi | Multi-{VQG}: Generating Engaging Questions for Multiple Images | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.19/ | Yeh, Min-Hsuan and Chen, Vincent and Huang, Ting-Hao and Ku, Lun-Wei | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 277--290 | Generating engaging content has drawn much recent attention in the NLP community. Asking questions is a natural way to respond to photos and promote awareness. However, most answers to questions in traditional question-answering (QA) datasets are factoids, which reduce individuals' willingness to answer. Furthermore, traditional visual question generation (VQG) confines the source data for question generation to single images, resulting in a limited ability to comprehend time-series information of the underlying event. In this paper, we propose generating engaging questions from multiple images. We present MVQG, a new dataset, and establish a series of baselines, including both end-to-end and dual-stage architectures. Results show that building stories behind the image sequence enables models togenerate engaging questions, which confirms our assumption that people typically construct a picture of the event in their minds before asking questions. These results open up an exciting challenge for visual-and-language models to implicitly construct a story behind a series of photos to allow for creativity and experience sharing and hence draw attention to downstream applications. | null | null | 10.18653/v1/2022.emnlp-main.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,149 |
inproceedings | bulian-etal-2022-tomayto | Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.20/ | Bulian, Jannis and Buck, Christian and Gajewski, Wojciech and B{\"orschinger, Benjamin and Schuster, Tal | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 291--305 | The predictions of question answering (QA) systems are typically evaluated against manually annotated finite sets of one or more answers. This leads to a coverage limitation that results in underestimating the true performance of systems, and is typically addressed by extending over exact match (EM) with predefined rules or with the token-level F1 measure.In this paper, we present the first systematic conceptual and data-driven analysis to examine the shortcomings of token-level equivalence measures.To this end, we define the asymmetric notion of answer equivalence (AE), accepting answers that are equivalent to or improve over the reference, and publish over 23k human judgements for candidates produced by multiple QA systems on SQuAD.Through a careful analysis of this data, we reveal and quantify several concrete limitations of the F1 measure, such as a false impression of graduality, or missing dependence on the question.Since collecting AE annotations for each evaluated model is expensive, we learn a BERT matching (BEM) measure to approximate this task. Being a simpler task than QA, we find BEM to provide significantly better AE approximations than F1, and to more accurately reflect the performance of systems.Finally, we demonstrate the practical utility of AE and BEM on the concrete application of minimal accurate prediction sets, reducing the number of required answers by up to X2.6. | null | null | 10.18653/v1/2022.emnlp-main.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,150 |
inproceedings | du-etal-2022-non | Non-Parametric Domain Adaptation for End-to-End Speech Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.21/ | Du, Yichao and Wang, Weizhi and Zhang, Zhirui and Chen, Boxing and Xu, Tong and Xie, Jun and Chen, Enhong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 306--320 | The end-to-end speech translation (E2E-ST) has received increasing attention due to the potential of its less error propagation, lower latency and fewer parameters. However, the effectiveness of neural-based approaches to this task is severely limited by the available training corpus, especially for domain adaptation where in-domain triplet data is scarce or nonexistent. In this paper, we propose a novel non-parametric method that leverages in-domain text translation corpus to achieve domain adaptation for E2E-ST systems. To this end, we first incorporate an additional encoder into the pre-trained E2E-ST model to realize text translation modeling, based on which the decoder`s output representations for text and speech translation tasks are unified by reducing the correspondent representation mismatch in available triplet training data. During domain adaptation, a k-nearest-neighbor (kNN) classifier is introduced to produce the final translation distribution using the external datastore built by the domain-specific text translation corpus, while the universal output representation is adopted to perform a similarity search. Experiments on the Europarl-ST benchmark demonstrate that when in-domain text translation data is involved only, our proposed approach significantly improves baseline by 12.82 BLEU on average in all translation directions, even outperforming the strong in-domain fine-tuning strategy. | null | null | 10.18653/v1/2022.emnlp-main.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,151 |
inproceedings | cao-etal-2022-prompting | Prompting for Multimodal Hateful Meme Classification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.22/ | Cao, Rui and Lee, Roy Ka-Wei and Chong, Wen-Haw and Jiang, Jing | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 321--332 | Hateful meme classification is a challenging multimodal task that requires complex reasoning and contextual background knowledge. Ideally, we could leverage an explicit external knowledge base to supplement contextual and cultural information in hateful memes. However, there is no known explicit external knowledge base that could provide such hate speech contextual information. To address this gap, we propose PromptHate, a simple yet effective prompt-based model that prompts pre-trained language models (PLMs) for hateful meme classification. Specifically, we construct simple prompts and provide a few in-context examples to exploit the implicit knowledge in the pre-trained RoBERTa language model for hateful meme classification. We conduct extensive experiments on two publicly available hateful and offensive meme datasets. Our experiment results show that PromptHate is able to achieve a high AUC of 90.96, outperforming state-of-the-art baselines on the hateful meme classification task. We also perform fine-grain analyses and case studies on various prompt settings and demonstrate the effectiveness of the prompts on hateful meme classification. | null | null | 10.18653/v1/2022.emnlp-main.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,152 |
inproceedings | li-etal-2022-certified | Certified Error Control of Candidate Set Pruning for Two-Stage Relevance Ranking | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.23/ | Li, Minghan and Zhang, Xinyu and Xin, Ji and Zhang, Hongyang and Lin, Jimmy | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 333--345 | In information retrieval (IR), candidate set pruning has been commonly used to speed up two-stage relevance ranking. However, such an approach lacks accurate error control and often trades accuracy against computational efficiency in an empirical fashion, missing theoretical guarantees. In this paper, we propose the concept of certified error control of candidate set pruning for relevance ranking, which means that the test error after pruning is guaranteed to be controlled under a user-specified threshold with high probability. Both in-domain and out-of-domain experiments show that our method successfully prunes the first-stage retrieved candidate sets to improve the second-stage reranking speed while satisfying the pre-specified accuracy constraints in both settings. For example, on MS MARCO Passage v1, our method reduces the average candidate set size from 1000 to 27, increasing reranking speed by about 37 times, while keeping MRR@10 greater than a pre-specified value of 0.38 with about 90{\%} empirical coverage. In contrast, empirical baselines fail to meet such requirements. Code and data are available at: https://github.com/alexlimh/CEC-Ranking. | null | null | 10.18653/v1/2022.emnlp-main.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,153 |
inproceedings | zhang-cai-2022-linearizing | Linearizing Transformer with Key-Value Memory | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.24/ | Zhang, Yizhe and Cai, Deng | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 346--359 | Efficient transformer variants with linear time complexity have been developed to mitigate the quadratic computational overhead of the vanilla transformer. Among them are low-rank projection methods such as Linformer and kernel-based Transformers. Despite their unique merits, they usually suffer from a performance drop comparing with the vanilla transformer on many sequence generation tasks, and often fail to obtain computation gain when the generation is short. We propose Memsizer, an approach towards closing the performance gap while improving the efficiency even with short generation. It projects the source sequences into lower dimension representations like Linformer, while enjoying efficient recurrent-style incremental computation similar to kernel-based transformers. This yields linear computation time and constant memory complexity at inference time. Memsizer also employs a lightweight multi-head mechanism which renders the computation as light as a single-head model. We demonstrate that Memsizer provides an improved balance between efficiency and accuracy over the vanilla transformer and other efficient transformer variants in three typical sequence generation tasks, including machine translation, abstractive text summarization, and language modeling. | null | null | 10.18653/v1/2022.emnlp-main.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,154 |
inproceedings | verma-etal-2022-robustness | Robustness of Fusion-based Multimodal Classifiers to Cross-Modal Content Dilutions | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.25/ | Verma, Gaurav and Vinay, Vishwa and Rossi, Ryan and Kumar, Srijan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 360--374 | As multimodal learning finds applications in a wide variety of high-stakes societal tasks, investigating their robustness becomes important. Existing work has focused on understanding the robustness of vision-and-language models to imperceptible variations on benchmark tasks. In this work, we investigate the robustness of multimodal classifiers to cross-modal dilutions {--} a plausible variation. We develop a model that, given a multimodal (image + text) input, generates additional dilution text that (a) maintains relevance and topical coherence with the image and existing text, and (b) when added to the original text, leads to misclassification of the multimodal input. Via experiments on Crisis Humanitarianism and Sentiment Detection tasks, we find that the performance of task-specific fusion-based multimodal classifiers drops by 23.3{\%} and 22.5{\%}, respectively, in the presence of dilutions generated by our model. Metric-based comparisons with several baselines and human evaluations indicate that our dilutions show higher relevance and topical coherence, while simultaneously being more effective at demonstrating the brittleness of the multimodal classifiers. Our work aims to highlight and encourage further research on the robustness of deep multimodal models to realistic variations, especially in human-facing societal applications. | null | null | 10.18653/v1/2022.emnlp-main.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,155 |
inproceedings | edwards-etal-2022-translation | Translation between Molecules and Natural Language | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.26/ | Edwards, Carl and Lai, Tuan and Ros, Kevin and Honke, Garrett and Cho, Kyunghyun and Ji, Heng | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 375--413 | We present MolT5 - a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. MolT5 allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (altogether: translation between molecules and language), which we explore for the first time. Since MolT5 pretrains models on single-modal data, it helps overcome the chemistry domain shortcoming of data scarcity. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that MolT5-based models are able to generate outputs, both molecules and captions, which in many cases are high quality. | null | null | 10.18653/v1/2022.emnlp-main.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,156 |
inproceedings | finlayson-etal-2022-makes | What Makes Instruction Learning Hard? An Investigation and a New Challenge in a Synthetic Environment | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.27/ | Finlayson, Matthew and Richardson, Kyle and Sabharwal, Ashish and Clark, Peter | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 414--426 | The instruction learning paradigm{---}where a model learns to perform new tasks from task descriptions alone{---}has become popular in research on general-purpose models. The capabilities of large transformer models as instruction learners, however, remain poorly understood. We use a controlled synthetic environment to characterize such capabilities. Specifically, we use the task of deciding whether a given string matches a regular expression (viewed as an instruction) to identify properties of tasks, instructions, and instances that make instruction learning challenging. For instance, we find that our model, a fine-tuned T5-based text2text transformer, struggles with large regular languages, suggesting that less precise instructions are challenging for models. Instruction executions that require tracking longer contexts of prior steps are also difficult. We use our findings to systematically construct a challenging instruction learning dataset, which we call Hard RegSet. Fine-tuning on Hard RegSet, our large transformer learns to correctly interpret (with at least 90{\%} accuracy) only 65.6{\%} of test instructions, and 11{\%}-24{\%} of the instructions in out-of-distribution generalization settings. We thus propose Hard RegSet as a challenging instruction learning dataset, and a controlled environment for studying instruction learning. | null | null | 10.18653/v1/2022.emnlp-main.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,157 |
inproceedings | grenander-etal-2022-sentence | Sentence-Incremental Neural Coreference Resolution | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.28/ | Grenander, Matt and Cohen, Shay B. and Steedman, Mark | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 427--443 | We propose a sentence-incremental neural coreference resolution system which incrementally builds clusters after marking mention boundaries in a shift-reduce method. The system is aimed at bridging two recent approaches at coreference resolution: (1) state-of-the-art non-incremental models that incur quadratic complexity in document length with high computational cost, and (2) memory network-based models which operate incrementally but do not generalize beyond pronouns. For comparison, we simulate an incremental setting by constraining non-incremental systems to form partial coreference chains before observing new sentences. In this setting, our system outperforms comparable state-of-the-art methods by 2 F1 on OntoNotes and 6.8 F1 on the CODI-CRAC 2021 corpus. In a conventional coreference setup, our system achieves 76.3 F1 on OntoNotes and 45.5 F1 on CODI-CRAC 2021, which is comparable to state-of-the-art baselines. We also analyze variations of our system and show that the degree of incrementality in the encoder has a surprisingly large effect on the resulting performance. | null | null | 10.18653/v1/2022.emnlp-main.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,158 |
inproceedings | goyal-etal-2022-snac | {SN}a{C}: Coherence Error Detection for Narrative Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.29/ | Goyal, Tanya and Li, Junyi Jessy and Durrett, Greg | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 444--463 | Progress in summarizing long texts is inhibited by the lack of appropriate evaluation frameworks. A long summary that appropriately covers the facets of that text must also present a coherent narrative, but current automatic and human evaluation methods fail to identify gaps in coherence. In this work, we introduce SNaC, a narrative coherence evaluation framework for fine-grained annotations of long summaries. We develop a taxonomy of coherence errors in generated narrative summaries and collect span-level annotations for 6.6k sentences across 150 book and movie summaries. Our work provides the first characterization of coherence errors generated by state-of-the-art summarization models and a protocol for eliciting coherence judgments from crowdworkers. Furthermore, we show that the collected annotations allow us to benchmark past work in coherence modeling and train a strong classifier for automatically localizing coherence errors in generated summaries. Finally, our SNaC framework can support future work in long document summarization and coherence evaluation, including improved summarization modeling and post-hoc summary correction. | null | null | 10.18653/v1/2022.emnlp-main.29 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,159 |
inproceedings | goyal-etal-2022-hydrasum | {H}ydra{S}um: Disentangling Style Features in Text Summarization with Multi-Decoder Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.30/ | Goyal, Tanya and Rajani, Nazneen and Liu, Wenhao and Kryscinski, Wojciech | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 464--479 | Summarization systems make numerous {\textquotedblleft}decisions{\textquotedblright} about summary properties during inference, e.g. degree of copying, specificity and length of outputs, etc. However, these are implicitly encoded within model parameters and specific styles cannot be enforced. To address this, we introduce HydraSum, a new summarization architecture that extends the single decoder framework of current models to a mixture-of-experts version with multiple decoders. We show that HydraSum`s multiple decoders automatically learn contrasting summary styles when trained under the standard training objective without any extra supervision. Through experiments on three summarization datasets (CNN, Newsroom and XSum), we show that HydraSum provides a simple mechanism to obtain stylistically-diverse summaries by sampling from either individual decoders or their mixtures, outperforming baseline models. Finally, we demonstrate that a small modification to the gating strategy during training can enforce an even stricter style partitioning, e.g. high- vs low-abstractiveness or high- vs low-specificity, allowing users to sample from a larger area in the generation space and vary summary styles along multiple dimensions. | null | null | 10.18653/v1/2022.emnlp-main.30 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,160 |
inproceedings | jin-etal-2022-good-neighbor | A Good Neighbor, A Found Treasure: Mining Treasured Neighbors for Knowledge Graph Entity Typing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.31/ | Jin, Zhuoran and Cao, Pengfei and Chen, Yubo and Liu, Kang and Zhao, Jun | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 480--490 | The task of knowledge graph entity typing (KGET) aims to infer the missing types for entities in knowledge graphs. Some pioneering work has proved that neighbor information is very important for the task. However, existing methods only leverage the one-hop neighbor information of the central entity, ignoring the multi-hop neighbor information that can provide valuable clues for inference. Besides, we also observe that there are co-occurrence relations between types, which is very helpful to alleviate false-negative problem. In this paper, we propose a novel method called Mining Treasured Neighbors (MiNer) to make use of these two characteristics. Firstly, we devise a Neighbor Information Aggregation module to aggregate the neighbor information. Then, we propose an Entity Type Inference module to mitigate the adverse impact of the irrelevant neighbor information. Finally, a Type Co-occurrence Regularization module is designed to prevent the model from overfitting the false negative examples caused by missing types. Experimental results on two widely used datasets indicate that our approach significantly outperforms previous state-of-the-art methods. | null | null | 10.18653/v1/2022.emnlp-main.31 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,161 |
inproceedings | liu-etal-2022-guiding | Guiding Neural Entity Alignment with Compatibility | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.32/ | Liu, Bing and Scells, Harrisen and Hua, Wen and Zuccon, Guido and Zhao, Genghong and Zhang, Xia | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 491--504 | Entity Alignment (EA) aims to find equivalent entities between two Knowledge Graphs (KGs). While numerous neural EA models have been devised, they are mainly learned using labelled data only. In this work, we argue that different entities within one KG should have compatible counterparts in the other KG due to the potential dependencies among the entities. Making compatible predictions thus should be one of the goals of training an EA model along with fitting the labelled data: this aspect however is neglected in current methods. To power neural EA models with compatibility, we devise a training framework by addressing three problems: (1) how to measure the compatibility of an EA model; (2) how to inject the property of being compatible into an EA model; (3) how to optimise parameters of the compatibility model. Extensive experiments on widely-used datasets demonstrate the advantages of integrating compatibility within EA models. In fact, state-of-the-art neural EA models trained within our framework using just 5{\%} of the labelled data can achieve comparable effectiveness with supervised training using 20{\%} of the labelled data. | null | null | 10.18653/v1/2022.emnlp-main.32 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,162 |
inproceedings | gupta-etal-2022-instructdial | {I}nstruct{D}ial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.33/ | Gupta, Prakhar and Jiao, Cathy and Yeh, Yi-Ting and Mehri, Shikib and Eskenazi, Maxine and Bigham, Jeffrey | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 505--525 | Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Dialogue is an especially interesting area in which to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. We explore cross-task generalization ability on models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks. | null | null | 10.18653/v1/2022.emnlp-main.33 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,163 |
inproceedings | jiang-etal-2022-unsupervised | Unsupervised Boundary-Aware Language Model Pretraining for {C}hinese Sequence Labeling | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.34/ | Jiang, Peijie and Long, Dingkun and Zhang, Yanzhao and Xie, Pengjun and Zhang, Meishan and Zhang, Min | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 526--537 | Boundary information is critical for various Chinese language processing tasks, such as word segmentation, part-of-speech tagging, and named entity recognition. Previous studies usually resorted to the use of a high-quality external lexicon, where lexicon items can offer explicit boundary information. However, to ensure the quality of the lexicon, great human effort is always necessary, which has been generally ignored. In this work, we suggest unsupervised statistical boundary information instead, and propose an architecture to encode the information directly into pre-trained language models, resulting in Boundary-Aware BERT (BABERT). We apply BABERT for feature induction of Chinese sequence labeling tasks. Experimental results on ten benchmarks of Chinese sequence labeling demonstrate that BABERT can provide consistent improvements on all datasets. In addition, our method can complement previous supervised lexicon exploration, where further improvements can be achieved when integrated with external lexicon information. | null | null | 10.18653/v1/2022.emnlp-main.34 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,164 |
inproceedings | xiao-etal-2022-retromae | {R}etro{MAE}: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.35/ | Xiao, Shitao and Liu, Zheng and Shao, Yingxia and Cao, Zhao | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 538--548 | Despite pre-training`s progress in many important NLP tasks, it remains to explore effective pre-training strategies for dense retrieval. In this paper, we propose RetroMAE, a new retrieval oriented pre-training paradigm based on Masked Auto-Encoder (MAE). RetroMAE is highlighted by three critical designs. 1) A novel MAE workflow, where the input sentence is polluted for encoder and decoder with different masks. The sentence embedding is generated from the encoder`s masked input; then, the original sentence is recovered based on the sentence embedding and the decoder`s masked input via masked language modeling. 2) Asymmetric model structure, with a full-scale BERT like transformer as encoder, and a one-layer transformer as decoder. 3) Asymmetric masking ratios, with a moderate ratio for encoder: 15 30{\%}, and an aggressive ratio for decoder: 50 70{\%}. Our framework is simple to realize and empirically competitive: the pre-trained models dramatically improve the SOTA performances on a wide range of dense retrieval benchmarks, like BEIR and MS MARCO. The source code and pre-trained models are made publicly available at https://github.com/staoxiao/RetroMAE so as to inspire more interesting research. | null | null | 10.18653/v1/2022.emnlp-main.35 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,165 |
inproceedings | zhou-etal-2022-aligning | Aligning Recommendation and Conversation via Dual Imitation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.36/ | Zhou, Jinfeng and Wang, Bo and Huang, Minlie and Zhao, Dongming and Huang, Kun and He, Ruifang and Hou, Yuexian | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 549--561 | Human conversations of recommendation naturally involve the shift of interests which can align the recommendation actions and conversation process to make accurate recommendations with rich explanations. However, existing conversational recommendation systems (CRS) ignore the advantage of user interest shift in connecting recommendation and conversation, which leads to an ineffective loose coupling structure of CRS. To address this issue, by modeling the recommendation actions as recommendation paths in a knowledge graph (KG), we propose DICR (\textbf{D}ual \textbf{I}mitation for \textbf{C}onversational \textbf{R}ecommendation), which designs a dual imitation to explicitly align the recommendation paths and user interest shift paths in a recommendation module and a conversation module, respectively. By exchanging alignment signals, DICR achieves bidirectional promotion between recommendation and conversation modules and generates high-quality responses with accurate recommendations and coherent explanations. Experiments demonstrate that DICR outperforms the state-of-the-art models on recommendation and conversation performance with automatic, human, and novel explainability metrics. | null | null | 10.18653/v1/2022.emnlp-main.36 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,166 |
inproceedings | wang-etal-2022-qrelscore | {QR}el{S}core: Better Evaluating Generated Questions with Deeper Understanding of Context-aware Relevance | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.37/ | Wang, Xiaoqiang and Liu, Bang and Tang, Siliang and Wu, Lingfei | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 562--581 | Existing metrics for assessing question generation not only require costly human reference but also fail to take into account the input context of generation, rendering the lack of deep understanding of the relevance between the generated questions and input contexts. As a result, they may wrongly penalize a legitimate and reasonable candidate question when it (1) involves complicated reasoning with the context or (2) can be grounded by multiple evidences in the context.In this paper, we propose QRelScore, a context-aware Relevance evaluation metric for Question Generation.Based on off-the-shelf language models such as BERT and GPT2, QRelScore employs both word-level hierarchical matching and sentence-level prompt-based generation to cope with the complicated reasoning and diverse generation from multiple evidences, respectively.Compared with existing metrics, our experiments demonstrate that QRelScore is able to achieve a higher correlation with human judgments while being much more robust to adversarial samples. | null | null | 10.18653/v1/2022.emnlp-main.37 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,167 |
inproceedings | ji-etal-2022-abstract | Abstract Visual Reasoning with Tangram Shapes | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.38/ | Ji, Anya and Kojima, Noriyuki and Rush, Noah and Suhr, Alane and Vong, Wai Keen and Hawkins, Robert and Artzi, Yoav | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 582--601 | We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with {\ensuremath{>}}1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. | null | null | 10.18653/v1/2022.emnlp-main.38 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,168 |
inproceedings | xie-etal-2022-unifiedskg | {U}nified{SKG}: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.39/ | Xie, Tianbao and Wu, Chen Henry and Shi, Peng and Zhong, Ruiqi and Scholak, Torsten and Yasunaga, Michihiro and Wu, Chien-Sheng and Zhong, Ming and Yin, Pengcheng and Wang, Sida I. and Zhong, Victor and Wang, Bailin and Li, Chengzu and Boyle, Connor and Ni, Ansong and Yao, Ziyu and Radev, Dragomir and Xiong, Caiming and Kong, Lingpeng and Zhang, Rui and Smith, Noah A. and Zettlemoyer, Luke and Yu, Tao | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 602--631 | Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs and outputs of SKG tasks are heterogeneous, they have been studied separately by different communities, which limits systematic and compatible research on SKG. In this paper, we overcome this limitation by proposing the UnifiedSKG framework, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research, instead of being exclusive to a single task, domain, or dataset. We use UnifiedSKG to benchmark T5 with different sizes and show that T5, with simple modifications when necessary, achieves state-of-the-art performance on almost all of the 21 tasks. We further demonstrate that multi-task prefix-tuning improves the performance on most tasks, largely improving the overall performance. UnifiedSKG also facilitates the investigation of zero-shot and few-shot learning, and we show that T0, GPT-3, and Codex struggle in zero-shot and few-shot learning for SKG. We also use UnifiedSKG to conduct a series of controlled experiments on structured knowledge encoding variants across SKG tasks. UnifiedSKG is easily extensible to more tasks, and it is open-sourced at https://github.com/hkunlp/unifiedskg. | null | null | 10.18653/v1/2022.emnlp-main.39 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,169 |
inproceedings | chen-etal-2022-balanced | Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in {NLP} Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.40/ | Chen, Hannah and Ji, Yangfeng and Evans, David | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 632--647 | Traditional (fickle) adversarial examples involve finding a small perturbation that does not change an input`s true label but confuses the classifier into outputting a different prediction. Conversely, obstinate adversarial examples occur when an adversary finds a small perturbation that preserves the classifier`s prediction but changes the true label of an input.Adversarial training and certified robust training have shown some effectiveness in improving the robustness of machine learnt models to fickle adversarial examples. We show that standard adversarial training methods focused on reducing vulnerability to fickle adversarial examples may make a model more vulnerable to obstinate adversarial examples, with experiments for both natural language inference and paraphrase identification tasks. To counter this phenomenon, we introduce Balanced Adversarial Training, which incorporates contrastive learning to increase robustness against both fickle and obstinate adversarial examples. | null | null | 10.18653/v1/2022.emnlp-main.40 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,170 |
inproceedings | sikarwar-etal-2022-transformers | When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.41/ | Sikarwar, Ankur and Patel, Arkil and Goyal, Navin | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 648--669 | Humans can reason compositionally whilst grounding language utterances to the real world. Recent benchmarks like ReaSCAN (Wu et al., 2021) use navigation tasks grounded in a grid world to assess whether neural models exhibit similar capabilities. In this work, we present a simple transformer-based model that outperforms specialized architectures on ReaSCAN and a modified version (Qiu et al., 2021) of gSCAN (Ruis et al., 2020). On analyzing the task, we find that identifying the target location in the grid world is the main challenge for the models. Furthermore, we show that a particular split in ReaSCAN, which tests depth generalization, is unfair. On an amended version of this split, we show that transformers can generalize to deeper input structures. Finally, we design a simpler grounded compositional generalization task, RefEx, to investigate how transformers reason compositionally. We show that a single self-attention layer with a single head generalizes to novel combinations of object attributes. Moreover, we derive a precise mathematical construction of the transformer`s computations from the learned network. Overall, we provide valuable insights about the grounded compositional generalization task and the behaviour of transformers on it, which would be useful for researchers working in this area. | null | null | 10.18653/v1/2022.emnlp-main.41 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,171 |
inproceedings | ushio-etal-2022-generative | Generative Language Models for Paragraph-Level Question Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.42/ | Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 670--688 | Powerful generative models have led to recent progress in question generation (QG). However, it is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches. In this paper, we introduce QG-Bench, a multilingual and multidomain benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting. It includes general-purpose datasets such as SQuAD for English, datasets from ten domains and two styles, as well as datasets in eight different languages. Using QG-Bench as a reference, we perform an extensive analysis of the capabilities of language models for the task. First, we propose robust QG baselines based on fine-tuning generative language models. Then, we complement automatic evaluation based on standard metrics with an extensive manual evaluation, which in turn sheds light on the difficulty of evaluating QG models. Finally, we analyse both the domain adaptability of these models as well as the effectiveness of multilingual models in languages other than English.QG-Bench is released along with the fine-tuned models presented in the paper (https://github.com/asahi417/lm-question-generation), which are also available as a demo (https://autoqg.net/). | null | null | 10.18653/v1/2022.emnlp-main.42 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,172 |
inproceedings | zhang-etal-2022-unified | A Unified Encoder-Decoder Framework with Entity Memory | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.43/ | Zhang, Zhihan and Yu, Wenhao and Zhu, Chenguang and Jiang, Meng | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 689--705 | Entities, as important carriers of real-world knowledge, play a key role in many NLP tasks.We focus on incorporating entity knowledge into an encoder-decoder framework for informative text generation. Existing approaches tried to index, retrieve, and read external documents as evidence, but they suffered from a large computational overhead. In this work, we propose an encoder-decoder framework with an entity memory, namely EDMem. The entity knowledge is stored in the memory as latent representations, and the memory is pre-trained on Wikipedia along with encoder-decoder parameters. To precisely generate entity names, we design three decoding methods to constrain entity generation by linking entities in the memory. EDMem is a unified framework that can be used on various entity-intensive question answering and generation tasks. Extensive experimental results show that EDMem outperforms both memory-based auto-encoder models and non-memory encoder-decoder models. | null | null | 10.18653/v1/2022.emnlp-main.43 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,173 |
inproceedings | aldarrab-may-2022-segmenting | Segmenting Numerical Substitution Ciphers | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.44/ | Aldarrab, Nada and May, Jonathan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 706--714 | Deciphering historical substitution ciphers is a challenging problem. Example problems that have been previously studied include detecting cipher type, detecting plaintext language, and acquiring the substitution key for segmented ciphers. However, attacking unsegmented ciphers is still a challenging task. Segmentation (i.e. finding substitution units) is essential for cracking those ciphers. In this work, we propose the first automatic methods to segment those ciphers using Byte Pair Encoding (BPE) and unigram language models. Our methods achieve an average segmentation error of 2{\%} on 100 randomly-generated monoalphabetic ciphers and 27{\%} on 3 real historical homophonic ciphers. We also propose a method for solving non-deterministic ciphers with existing keys using a lattice and a pretrained language model. Our method leads to the full solution of the IA cipher; a real historical cipher that has not been fully solved until this work. | null | null | 10.18653/v1/2022.emnlp-main.44 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,174 |
inproceedings | thapliyal-etal-2022-crossmodal | Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.45/ | Thapliyal, Ashish V. and Pont Tuset, Jordi and Chen, Xi and Soricut, Radu | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 715--729 | Research in massively multilingual image captioning has been severely hampered by a lack of high-quality evaluation datasets. In this paper we present the Crossmodal-3600 dataset (XM3600 in short), a geographically diverse set of 3600 images annotated with human-generated reference captions in 36 languages. The images were selected from across the world, covering regions where the 36 languages are spoken, and annotated with captions that achieve consistency in terms of style across all languages, while avoiding annotation artifacts due to direct translation. We apply this benchmark to model selection for massively multilingual image captioning models, and show superior correlation results with human evaluations when using XM3600 as golden references for automatic metrics. | null | null | 10.18653/v1/2022.emnlp-main.45 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,175 |
inproceedings | zhuang-etal-2022-resel | {R}e{S}el: N-ary Relation Extraction from Scientific Text and Tables by Learning to Retrieve and Select | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.46/ | Zhuang, Yuchen and Li, Yinghao and Zhang, Junyang and Yu, Yue and Mou, Yingjun and Chen, Xiang and Song, Le and Zhang, Chao | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 730--744 | We study the problem of extracting N-ary relation tuples from scientific articles. This task is challenging because the target knowledge tuples can reside in multiple parts and modalities of the document. Our proposed method ReSel decomposes this task into a two-stage procedure that first retrieves the most relevant paragraph/table and then selects the target entity from the retrieved component. For the high-level retrieval stage, ReSel designs a simple and effective feature set, which captures multi-level lexical and semantic similarities between the query and components. For the low-level selection stage, ReSel designs a cross-modal entity correlation graph along with a multi-view architecture, which models both semantic and document-structural relations between entities. Our experiments on three scientific information extraction datasets show that ReSel outperforms state-of-the-art baselines significantly. | null | null | 10.18653/v1/2022.emnlp-main.46 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,176 |
inproceedings | yang-etal-2022-gammae | {G}amma{E}: Gamma Embeddings for Logical Queries on Knowledge Graphs | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.47/ | Yang, Dong and Qing, Peijun and Li, Yang and Lu, Haonan and Lin, Xiaodong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 745--760 | Embedding knowledge graphs (KGs) for multi-hop logical reasoning is a challenging problem due to massive and complicated structures in many KGs. Recently, many promising works projected entities and queries into a geometric space to efficiently find answers. However, it remains challenging to model the negation and union operator. The negation operator has no strict boundaries, which generates overlapped embeddings and leads to obtaining ambiguous answers. An additional limitation is that the union operator is non-closure, which undermines the model to handle a series of union operators. To address these problems, we propose a novel probabilistic embedding model, namely Gamma Embeddings (GammaE), for encoding entities and queries to answer different types of FOL queries on KGs. We utilize the linear property and strong boundary support of the Gamma distribution to capture more features of entities and queries, which dramatically reduces model uncertainty. Furthermore, GammaE implements the Gamma mixture method to design the closed union operator. The performance of GammaE is validated on three large logical query datasets. Experimental results show that GammaE significantly outperforms state-of-the-art models on public benchmarks. | null | null | 10.18653/v1/2022.emnlp-main.47 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,177 |
inproceedings | pi-etal-2022-reasoning | Reasoning Like Program Executors | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.48/ | Pi, Xinyu and Liu, Qian and Chen, Bei and Ziyadi, Morteza and Lin, Zeqi and Fu, Qiang and Gao, Yan and Lou, Jian-Guang and Chen, Weizhu | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 761--779 | Reasoning over natural language is a long-standing goal for the research community. However, studies have shown that existing language models are inadequate in reasoning. To address the issue, we present POET, a novel reasoning pre-training paradigm. Through pre-training language models with programs and their execution results, POET empowers language models to harvest the reasoning knowledge possessed by program executors via a data-driven approach. POET is conceptually simple and can be instantiated by different kinds of program executors. In this paper, we showcase two simple instances POET-Math and POET-Logic, in addition to a complex instance, POET-SQL. Experimental results on six benchmarks demonstrate that POET can significantly boost model performance in natural language reasoning, such as numerical reasoning, logical reasoning, and multi-hop reasoning. POET opens a new gate on reasoning-enhancement pre-training, and we hope our analysis would shed light on the future research of reasoning like program executors. | null | null | 10.18653/v1/2022.emnlp-main.48 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,178 |
inproceedings | bansal-etal-2022-sem | {SEM}-F1: an Automatic Way for Semantic Evaluation of Multi-Narrative Overlap Summaries at Scale | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.49/ | Bansal, Naman and Akter, Mousumi and Karmaker Santu, Shubhra Kanti | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 780--792 | Recent work has introduced an important yet relatively under-explored NLP task called Semantic Overlap Summarization (SOS) that entails generating a summary from multiple alternative narratives which conveys the common information provided by those narratives. Previous work also published a benchmark dataset for this task by collecting 2,925 alternative narrative pairs from the web and manually annotating 411 different reference summaries by engaging human annotators. In this paper, we exclusively focus on the automated evaluation of the SOS task using the benchmark dataset. More specifically, we first use the popular ROUGE metric from text-summarization literature and conduct a systematic study to evaluate the SOS task. Our experiments discover that ROUGE is not suitable for this novel task and therefore, we propose a new sentence-level precision-recall style automated evaluation metric, called SEM-F1 (Semantic F1). It is inspired by the benefits of the sentence-wise annotation technique using overlap labels reported by the previous work. Our experiments show that the proposed SEM-F1 metric yields a higher correlation with human judgment and higher inter-rater agreement compared to the ROUGE metric. | null | null | 10.18653/v1/2022.emnlp-main.49 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,179 |
inproceedings | chen-etal-2022-inducer | Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.50/ | Chen, Yifan and Hazarika, Devamanyu and Namazifar, Mahdi and Liu, Yang and Jin, Di and Hakkani-Tur, Dilek | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 793--808 | Prefix-tuning, or more generally continuous prompt tuning, has become an essential paradigm of parameter-efficient transfer learning. Using a large pre-trained language model (PLM), prefix-tuning can obtain strong performance by training only a small portion of parameters. In this paper, we propose to understand and further develop prefix-tuning through the kernel lens. Specifically, we make an analogy between \textit{prefixes} and \textit{inducing variables} in kernel methods and hypothesize that \textit{prefixes} serving as \textit{inducing variables} would improve their overall mechanism. From the kernel estimator perspective, we suggest a new variant of prefix-tuning{---}\textit{inducer-tuning}, which shares the exact mechanism as prefix-tuning while leveraging the residual form found in adapter-tuning. This mitigates the initialization issue in prefix-tuning. Through comprehensive empirical experiments on natural language understanding and generation tasks, we demonstrate that inducer-tuning can close the performance gap between prefix-tuning and fine-tuning. | null | null | 10.18653/v1/2022.emnlp-main.50 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,180 |
inproceedings | mathur-etal-2022-docinfer | {D}oc{I}nfer: Document-level Natural Language Inference using Optimal Evidence Selection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.51/ | Mathur, Puneet and Kunapuli, Gautam and Bhat, Riyaz and Shrivastava, Manish and Manocha, Dinesh and Singh, Maneesh | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 809--824 | We present DocInfer - a novel, end-to-end Document-level Natural Language Inference model that builds a hierarchical document graph enriched through inter-sentence relations (topical, entity-based, concept-based), performs paragraph pruning using the novel SubGraph Pooling layer, followed by optimal evidence selection based on REINFORCE algorithm to identify the most important context sentences for a given hypothesis. Our evidence selection mechanism allows it to transcend the input length limitation of modern BERT-like Transformer models while presenting the entire evidence together for inferential reasoning. We show this is an important property needed to reason on large documents where the evidence may be fragmented and located arbitrarily far from each other. Extensive experiments on popular corpora - DocNLI, ContractNLI, and ConTRoL datasets, and our new proposed dataset called CaseHoldNLI on the task of legal judicial reasoning, demonstrate significant performance gains of 8-12{\%} over SOTA methods. Our ablation studies validate the impact of our model. Performance improvement of 3-6{\%} on annotation-scarce downstream tasks of fact verification, multiple-choice QA, and contract clause retrieval demonstrates the usefulness of DocInfer beyond primary NLI tasks. | null | null | 10.18653/v1/2022.emnlp-main.51 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,181 |
inproceedings | mao-etal-2022-lightea | {L}ight{EA}: A Scalable, Robust, and Interpretable Entity Alignment Framework via Three-view Label Propagation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.52/ | Mao, Xin and Wang, Wenting and Wu, Yuanbin and Lan, Man | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 825--838 | Entity Alignment (EA) aims to find equivalent entity pairs between KGs, which is the core step to bridging and integrating multi-source KGs. In this paper, we argue that existing complex EA methods inevitably inherit the inborn defects from their neural network lineage: poor interpretability and weak scalability. Inspired by recent studies, we reinvent the classical Label Propagation algorithm to effectively run on KGs and propose a neural-free EA framework {---} LightEA, consisting of three efficient components: (i) Random Orthogonal Label Generation, (ii) Three-view Label Propagation, and (iii) Sparse Sinkhorn Operation.According to the extensive experiments on public datasets, LightEA has impressive scalability, robustness, and interpretability. With a mere tenth of time consumption, LightEA achieves comparable results to state-of-the-art methods across all datasets and even surpasses them on many. Besides, due to the computational process of LightEA being entirely linear, we could trace the propagation process at each step and clearly explain how the entities are aligned. | null | null | 10.18653/v1/2022.emnlp-main.52 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,182 |
inproceedings | he-etal-2022-metric | Metric-guided Distillation: Distilling Knowledge from the Metric to Ranker and Retriever for Generative Commonsense Reasoning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.53/ | He, Xingwei and Gong, Yeyun and Jin, A-Long and Qi, Weizhen and Zhang, Hang and Jiao, Jian and Zhou, Bartuer and Cheng, Biao and Yiu, Sm and Duan, Nan | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 839--852 | Commonsense generation aims to generate a realistic sentence describing a daily scene under the given concepts, which is very challenging, since it requires models to have relational reasoning and compositional generalization capabilities. Previous work focuses on retrieving prototype sentences for the provided concepts to assist generation. They first use a sparse retriever to retrieve candidate sentences, then re-rank the candidates with a ranker. However, the candidates returned by their ranker may not be the most relevant sentences, since the ranker treats all candidates equally without considering their relevance to the reference sentences of the given concepts. Another problem is that re-ranking is very expensive, but only using retrievers will seriously degrade the performance of their generation models. To solve these problems, we propose the metric distillation rule to distill knowledge from the metric (e.g., BLEU) to the ranker. We further transfer the critical knowledge summarized by the distilled ranker to the retriever. In this way, the relevance scores of candidate sentences predicted by the ranker and retriever will be more consistent with their quality measured by the metric. Experimental results on the CommonGen benchmark verify the effectiveness of our proposed method: (1) Our generation model with the distilled ranker achieves a new state-of-the-art result. (2) Our generation model with the distilled retriever even surpasses the previous SOTA. | null | null | 10.18653/v1/2022.emnlp-main.53 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,183 |
inproceedings | qiu-etal-2022-efficient | Efficient Document Retrieval by End-to-End Refining and Quantizing {BERT} Embedding with Contrastive Product Quantization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.54/ | Qiu, Zexuan and Su, Qinliang and Yu, Jianxing and Si, Shijing | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 853--863 | Efficient document retrieval heavily relies on the technique of semantic hashing, which learns a binary code for every document and employs Hamming distance to evaluate document distances. However, existing semantic hashing methods are mostly established on outdated TFIDF features, which obviously do not contain lots of important semantic information about documents. Furthermore, the Hamming distance can only be equal to one of several integer values, significantly limiting its representational ability for document distances. To address these issues, in this paper, we propose to leverage BERT embeddings to perform efficient retrieval based on the product quantization technique, which will assign for every document a real-valued codeword from the codebook, instead of a binary code as in semantic hashing. Specifically, we first transform the original BERT embeddings via a learnable mapping and feed the transformed embedding into a probabilistic product quantization module to output the assigned codeword. The refining and quantizing modules can be optimized in an end-to-end manner by minimizing the probabilistic contrastive loss. A mutual information maximization based method is further proposed to improve the representativeness of codewords, so that documents can be quantized more accurately. Extensive experiments conducted on three benchmarks demonstrate that our proposed method significantly outperforms current state-of-the-art baselines. | null | null | 10.18653/v1/2022.emnlp-main.54 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,184 |
inproceedings | zhang-etal-2022-curriculum | Curriculum Knowledge Distillation for Emoji-supervised Cross-lingual Sentiment Analysis | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.55/ | Zhang, Jianyang and Liang, Tao and Wan, Mingyang and Yang, Guowu and Lv, Fengmao | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 864--875 | Existing sentiment analysis models have achieved great advances with the help of sufficient sentiment annotations. Unfortunately, many languages do not have sufficient sentiment corpus. To this end, recent studies have proposed cross-lingual sentiment analysis to transfer sentiment analysis models from resource-rich languages to low-resource languages. However, these studies either rely on external cross-lingual supervision (e.g., parallel corpora and translation model), or are limited by the cross-lingual gaps. In this work, based on the intuitive assumption that the relationships between emojis and sentiments are consistent across different languages, we investigate transferring sentiment knowledge across languages with the help of emojis. To this end, we propose a novel cross-lingual sentiment analysis approach dubbed Curriculum Knowledge Distiller (CKD). The core idea of CKD is to use emojis to bridge the source and target languages. Note that, compared with texts, emojis are more transferable, but cannot reveal the precise sentiment. Thus, we distill multiple Intermediate Sentiment Classifiers (ISC) on source language corpus with emojis to get ISCs with different attention weights of texts. To transfer them into the target language, we distill ISCs into the Target Language Sentiment Classifier (TSC) following the curriculum learning mechanism. In this way, TSC can learn delicate sentiment knowledge, meanwhile, avoid being affected by cross-lingual gaps. Experimental results on five cross-lingual benchmarks clearly verify the effectiveness of our approach. | null | null | 10.18653/v1/2022.emnlp-main.55 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,185 |
inproceedings | xie-etal-2022-correctable | Correctable-{DST}: Mitigating Historical Context Mismatch between Training and Inference for Improved Dialogue State Tracking | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.emnlp-main.56/ | Xie, Hongyan and Su, Haoxiang and Song, Shuangyong and Huang, Hao and Zou, Bo and Deng, Kun and Lin, Jianghua and Zhang, Zhihui and He, Xiaodong | Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing | 876--889 | Recently proposed dialogue state tracking (DST) approaches predict the dialogue state of a target turn sequentially based on the previous dialogue state. During the training time, the ground-truth previous dialogue state is utilized as the historical context. However, only the previously predicted dialogue state can be used in inference. This discrepancy might lead to error propagation, i.e., mistakes made by the model in the current turn are likely to be carried over to the following turns.To solve this problem, we propose Correctable Dialogue State Tracking (Correctable-DST). Specifically, it consists of three stages: (1) a Predictive State Simulator is exploited to generate a previously {\textquotedblleft}predicted{\textquotedblright} dialogue state based on the ground-truth previous dialogue state during training; (2) a Slot Detector is proposed to determine the slots with an incorrect value in the previously {\textquotedblleft}predicted{\textquotedblright} state and the slots whose values are to be updated in the current turn; (3) a State Generator takes the name of the above-selected slots as a prompt to generate the current state.Empirical results show that our approach achieves 67.51{\%}, 68.24{\%}, 70.30{\%}, 71.38{\%}, and 81.27{\%} joint goal accuracy on MultiWOZ 2.0-2.4 datasets, respectively, and achieves a new state-of-the-art performance with significant improvements. | null | null | 10.18653/v1/2022.emnlp-main.56 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,186 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.