entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | zhu-etal-2022-answer | Answer Quality Aware Aggregation for Extractive {QA} Crowdsourcing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.457/ | Zhu, Peide and Wang, Zhen and Hauff, Claudia and Yang, Jie and Anand, Avishek | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6147--6159 | Quality control is essential for creating extractive question answering (EQA) datasets via crowdsourcing. Aggregation across answers, i.e. word spans within passages annotated, by different crowd workers is one major focus for ensuring its quality. However, crowd workers cannot reach a consensus on a considerable portion of questions. We introduce a simple yet effective answer aggregation method that takes into account the relations among the answer, question, and context passage. We evaluate answer quality from both the view of question answering model to determine how confident the QA model is about each answer and the view of the answer verification model to determine whether the answer is correct. Then we compute aggregation scores with each answer`s quality and its contextual embedding produced by pre-trained language models. The experiments on a large real crowdsourced EQA dataset show that our framework outperforms baselines by around 16{\%} on precision and effectively conduct answer aggregation for extractive QA task. | null | null | 10.18653/v1/2022.findings-emnlp.457 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,969 |
inproceedings | wang-etal-2022-search | Search to Pass Messages for Temporal Knowledge Graph Completion | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.458/ | Wang, Zhen and Du, Haotong and Yao, Quanming and Li, Xuelong | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6160--6172 | Completing missing facts is a fundamental task for temporal knowledge graphs (TKGs). Recently, graph neural network (GNN) based methods, which can simultaneously explore topological and temporal information, have become the state-of-the-art (SOTA) to complete TKGs. However, these studies are based on hand-designed architectures and fail to explore the diverse topological and temporal properties of TKG. To address this issue, we propose to use neural architecture search (NAS) to design data-specific message passing architecture for TKG completion. In particular, we develop a generalized framework to explore topological and temporal information in TKGs. Based on this framework, we design an expressive search space to fully capture various properties of different TKGs. Meanwhile, we adopt a search algorithm, which trains a supernet structure by sampling single path for efficient search with less cost. We further conduct extensive experiments on three benchmark datasets. The results show that the searched architectures by our method achieve the SOTA performances. Besides, the searched models can also implicitly reveal diverse properties in different TKGs. Our code is released in https://github.com/striderdu/SPA. | null | null | 10.18653/v1/2022.findings-emnlp.458 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,970 |
inproceedings | du-etal-2022-code | Code Vulnerability Detection via Nearest Neighbor Mechanism | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.459/ | Du, Qianjin and Kuang, Xiaohui and Zhao, Gang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6173--6178 | Code vulnerability detection is a fundamental and challenging task in the software security field. Existing research works aim to learn semantic information from the source code by utilizing NLP technologies. However, in vulnerability detection tasks, some vulnerable samples are very similar to non-vulnerable samples, which are difficult to identify. To address this issue and improve detection performance, we introduce the $k$-nearest neighbor mechanism which retrieves multiple neighbor samples and utilizes label information of retrieved neighbor samples to provide help for model predictions. Besides, we use supervised contrastive learning to make the model learn the discriminative representation and ensure that label information of retrieved neighbor samples is as consistent as possible with the label information of testing samples. Extensive experiments show that our method can achieve obvious performance improvements compared to baseline models. | null | null | 10.18653/v1/2022.findings-emnlp.459 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,971 |
inproceedings | ye-etal-2022-robust | Robust Question Answering against Distribution Shifts with Test-Time Adaption: An Empirical Study | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.460/ | Ye, Hai and Ding, Yuyang and Li, Juntao and Ng, Hwee Tou | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6179--6192 | A deployed question answering (QA) model can easily fail when the test data has a distribution shift compared to the training data. Robustness tuning (RT) methods have been widely studied to enhance model robustness against distribution shifts before model deployment. However, can we improve a model after deployment? To answer this question, we evaluate test-time adaptation (TTA) to improve a model after deployment. We first introduce ColdQA, a unified evaluation benchmark for robust QA against text corruption and changes in language and domain. We then evaluate previous TTA methods on ColdQA and compare them to RT methods. We also propose a novel TTA method called online imitation learning (OIL). Through extensive experiments, we find that TTA is comparable to RT methods, and applying TTA after RT can significantly boost the performance on ColdQA. Our proposed OIL improves TTA to be more robust to variation in hyper-parameters and test distributions over time. | null | null | 10.18653/v1/2022.findings-emnlp.460 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,972 |
inproceedings | liu-etal-2022-paramac | {P}ara{M}ac: A General Unsupervised Paraphrase Generation Framework Leveraging Semantic Constraints and Diversifying Mechanisms | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.461/ | Liu, Jinxin and Shi, Jiaxin and Qi, Ji and Hou, Lei and Li, Juanzi and Tian, Qi | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6193--6206 | Paraphrase generation reflects the ability to understand the meaning from the language surface form and rephrase it to other expressions. Recent paraphrase generation works have paid attention to unsupervised approaches based on Pre-trained Language Models (PLMs) to avoid heavy reliance on parallel data by utilizing PLMs' generation ability. However, the generated pairs of existing unsupervised methods are usually weak either in semantic equivalence or expression diversity. In this paper, we present a novel unsupervised paraphrase generation framework called Paraphrase Machine. By employing multi-aspect equivalence constraints and multi-granularity diversifying mechanisms, Paraphrase Machine is able to achieve good semantic equivalence and expressive diversity, producing a high-quality unsupervised paraphrase dataset. Based on this dataset, we train a general paraphrase model, which can be directly applied to rewrite the input sentence of various domains without any fine-tuning, and achieves substantial gains of 9.1{\%} and 3.3{\%} absolutely in BLEU score over previous SOTA on Quora and MSCOCO. By further fine-tuning our model with domain-specific training sets, the improvement can be increased to even 18.0{\%} and 4.6{\%}. Most importantly, by applying it to language understanding and generation tasks under the low-resource setting, we demonstrate that our model can serve as a universal data augmentor to boost the few-shot performance (e.g., average 2.0{\%} gain on GLUE). | null | null | 10.18653/v1/2022.findings-emnlp.461 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,973 |
inproceedings | wu-etal-2022-semi | Semi-supervised New Slot Discovery with Incremental Clustering | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.462/ | Wu, Yuxia and Liao, Lizi and Qian, Xueming and Chua, Tat-Seng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6207--6218 | Discovering new slots is critical to the success of dialogue systems. Most existing methods rely on automatic slot induction in unsupervised fashion or perform domain adaptation across zero or few-shot scenarios. They have difficulties in providing high-quality supervised signals to learn clustering-friendly features, and are limited in effectively transferring the prior knowledge from known slots to new slots. In this work, we propose a Semi-supervised Incremental Clustering method (SIC), to discover new slots with the aid of existing linguistic annotation models and limited known slot data. Specifically, we harvest slot value candidates with NLP model cues and innovatively formulate the slot discovery task under an incremental clustering framework. The model gradually calibrate slot representations under the supervision of generated pseudo-labels, and automatically learns to terminate when no more salient slot remains. Our thorough evaluation on five public datasets demonstrates that it significantly outperforms state-of-the-art models. | null | null | 10.18653/v1/2022.findings-emnlp.462 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,974 |
inproceedings | cheng-zhang-2022-con | Con-{NAT}: Contrastive Non-autoregressive Neural Machine Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.463/ | Cheng, Hao and Zhang, Zhihua | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6219--6231 | Inspired by the success of contrastive learning in natural language processing, we incorporate contrastive learning into the conditional masked language model which is extensively used in non-autoregressive neural machine translation (NAT). Accordingly, we propose a Contrastive Non-autoregressive Neural Machine Translation (Con-NAT) model. Con-NAT optimizes the similarity of several different representations of the same token in the same sentence. We propose two methods to obtain various representations: Contrastive Common Mask and Contrastive Dropout. Positive pairs are various different representations of the same token, while negative pairs are representations of different tokens. In the feature space, the model with contrastive loss pulls positive pairs together and pushes negative pairs away. We conduct extensive experiments on six translation directions with different data sizes. The results demonstrate that Con-NAT showed a consistent and significant improvement in fully and iterative NAT. Con-NAT is state-of-the-art on WMT`16 Ro-En (34.18 BLEU). | null | null | 10.18653/v1/2022.findings-emnlp.463 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,975 |
inproceedings | wang-etal-2022-improved | Improved Knowledge Distillation for Pre-trained Language Models via Knowledge Selection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.464/ | Wang, Chenglong and Lu, Yi and Mu, Yongyu and Hu, Yimin and Xiao, Tong and Zhu, Jingbo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6232--6244 | Knowledge distillation addresses the problem of transferring knowledge from a teacher model to a student model. In this process, we typically have multiple types of knowledge extracted from the teacher model. The problem is to make full use of them to train the student model. Our preliminary study shows that: (1) not all of the knowledge is necessary for learning a good student model, and (2) knowledge distillation can benefit from certain knowledge at different training steps. In response to these, we propose an actor-critic approach to selecting appropriate knowledge to transfer during the process of knowledge distillation. In addition, we offer a refinement of the training algorithm to ease the computational burden. Experimental results on the GLUE datasets show that our method outperforms several strong knowledge distillation baselines significantly. | null | null | 10.18653/v1/2022.findings-emnlp.464 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,976 |
inproceedings | qi-etal-2022-syntactically | Syntactically Robust Training on Partially-Observed Data for Open Information Extraction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.465/ | Qi, Ji and Chen, Yuxiang and Hou, Lei and Li, Juanzi and Xu, Bin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6245--6257 | Open Information Extraction models have shown promising results with sufficient supervision. However, these models face a fundamental challenge that the syntactic distribution of training data is partially observable in comparison to the real world. In this paper, we propose a syntactically robust training framework that enables models to be trained on a syntactic-abundant distribution based on diverse paraphrase generation. To tackle the intrinsic problem of knowledge deformation of paraphrasing, two algorithms based on semantic similarity matching and syntactic tree walking are used to restore the expressionally transformed knowledge. The training framework can be generally applied to other syntactic partial observable domains. Based on the proposed framework, we build a new evaluation set called CaRB-AutoPara, a syntactically diverse dataset consistent with the real-world setting for validating the robustness of the models. Experiments including a thorough analysis show that the performance of the model degrades with the increase of the difference in syntactic distribution, while our framework gives a robust boundary. | null | null | 10.18653/v1/2022.findings-emnlp.465 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,977 |
inproceedings | maheshwari-etal-2022-benchmark | A Benchmark and Dataset for Post-{OCR} text correction in {S}anskrit | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.466/ | Maheshwari, Ayush and Singh, Nikhil and Krishna, Amrith and Ramakrishnan, Ganesh | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6258--6265 | Sanskrit is a classical language with about 30 million extant manuscripts fit for digitisation, available in written, printed or scanned-image forms. However, it is still considered to be a low-resource language when it comes to available digital resources. In this work, we release a post-OCR text correction dataset containing around 218,000 sentences, with 1.5 million words, from 30 different books. Texts in Sanskrit are known to be diverse in terms of their linguistic and stylistic usage since Sanskrit was the {\textquoteleft}lingua francua' for discourse in the Indian subcontinent for about 3 millennia. Keeping this in mind, we release a multi-domain dataset, from areas as diverse as astronomy, medicine and mathematics, with some of them as old as 18 centuries. Further, we release multiple strong baselines as benchmarks for the task, based on pre-trained Seq2Seq language models. We find that our best-performing model, consisting of byte level tokenization in conjunction with phonetic encoding (Byt5+SLP1), yields a 23{\%} point increase over the OCR output in terms of word and character error rates. Moreover, we perform extensive experiments in evaluating these models on their performance and analyse common causes of mispredictions both at the graphemic and lexical levels. Our code and dataset is publicly available at https://github.com/ayushbits/pe-ocr-sanskrit. | null | null | 10.18653/v1/2022.findings-emnlp.466 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,978 |
inproceedings | zhao-etal-2022-knowledge | Knowledge-Enhanced Self-Supervised Prototypical Network for Few-Shot Event Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.467/ | Zhao, Kailin and Jin, Xiaolong and Bai, Long and Guo, Jiafeng and Cheng, Xueqi | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6266--6275 | Prototypical network based joint methods have attracted much attention in few-shot event detection, which carry out event detection in a unified sequence tagging framework. However, these methods suffer from the inaccurate prototype representation problem, due to two main reasons: the number of instances for calculating prototypes is limited; And, they do not well capture the relationships among event prototypes. To deal with this problem, we propose a Knowledge-Enhanced self-supervised Prototypical Network, called KE-PN, for few-shot event detection. KE-PN adopts hybrid rules, which can automatically align event types to an external knowledge base, i.e., FrameNet, to obtain more instances.It proposes a self-supervised learning method to filter out noisy data from enhanced instances. KE-PN is further equipped with an auxiliary event type relationship classification module, which injects the relationship information into representations of event prototypes. Extensive experiments on three benchmark datasets, i.e., FewEvent, MAVEN, and ACE2005 demonstrate the state-of-the-art performance of KE-PN. | null | null | 10.18653/v1/2022.findings-emnlp.467 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,979 |
inproceedings | hu-etal-2022-varmae | {V}ar{MAE}: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.468/ | Hu, Dou and Hou, Xiaolong and Du, Xiyang and Zhou, Mengyuan and Jiang, Lianxin and Mo, Yang and Shi, Xiaofeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6276--6286 | Pre-trained language models have been widely applied to standard benchmarks. Due to the flexibility of natural language, the available resources in a certain domain can be restricted to support obtaining precise representation. To address this issue, we propose a novel Transformer-based language model named VarMAE for domain-adaptive language understanding. Under the masked autoencoding objective, we design a context uncertainty learning module to encode the token`s context into a smooth latent distribution. The module can produce diverse and well-formed contextual representations. Experiments on science- and finance-domain NLU tasks demonstrate that VarMAE can be efficiently adapted to new domains with limited resources. | null | null | 10.18653/v1/2022.findings-emnlp.468 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,980 |
inproceedings | lu-etal-2022-exploring | Exploring Methods for Building Dialects-{M}andarin Code-Mixing Corpora: A Case Study in {T}aiwanese Hokkien | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.469/ | Lu, Sin-En and Lu, Bo-Han and Lu, Chao-Yi and Tsai, Richard Tzong-Han | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6287--6305 | In natural language processing (NLP), code-mixing (CM) is a challenging task, especially when the mixed languages include dialects. In Southeast Asian countries such as Singapore, Indonesia, and Malaysia, Hokkien-Mandarin is the most widespread code-mixed language pair among Chinese immigrants, and it is also common in Taiwan. However, dialects such as Hokkien often have a scarcity of resources and the lack of an official writing system, limiting the development of dialect CM research. In this paper, we propose a method to construct a Hokkien-Mandarin CM dataset to mitigate the limitation, overcome the morphological issue under the Sino-Tibetan language family, and offer an efficient Hokkien word segmentation method through a linguistics-based toolkit. Furthermore, we use our proposed dataset and employ transfer learning to train the XLM (cross-lingual language model) for translation tasks. To fit the code-mixing scenario, we adapt XLM slightly. We found that by using linguistic knowledge, rules, and language tags, the model produces good results on CM data translation while maintaining monolingual translation quality. | null | null | 10.18653/v1/2022.findings-emnlp.469 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,981 |
inproceedings | hu-etal-2022-recurrence | Recurrence Boosts Diversity! Revisiting Recurrent Latent Variable in Transformer-Based Variational {A}uto{E}ncoder for Diverse Text Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.470/ | Hu, Jinyi and Yi, Xiaoyuan and Li, Wenhao and Sun, Maosong and Xie, Xing | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6306--6320 | Variational Auto-Encoder (VAE) has been widely adopted in text generation. Among many variants, recurrent VAE learns token-wise latent variables with each conditioned on the preceding ones, which captures sequential variability better in the era of RNN. However, it is unclear how to incorporate such recurrent dynamics into the recently dominant Transformer due to its parallelism. In this work, we propose TRACE, a Transformer-based recurrent VAE structure. TRACE imposes recurrence on segment-wise latent variables with arbitrarily separated text segments and constructs the posterior distribution with residual parameterization. Besides, we design an acceleration method by approximating idempotent matrices, which allows parallelism while maintaining the conditional dependence of latent variables. We demonstrate that TRACE could deduce a non-zero lower bound of the KL term and enhance the entanglement of each segment and preceding latent variables, providing a theoretical guarantee of generation diversity. Experiments on two unconditional and one conditional generation task show that TRACE achieves significantly improved diversity while maintaining satisfactory generation quality. | null | null | 10.18653/v1/2022.findings-emnlp.470 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,982 |
inproceedings | sawhney-etal-2022-tweet | Tweet Based Reach Aware Temporal Attention Network for {NFT} Valuation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.471/ | Sawhney, Ramit and Thakkar, Megh and Soun, Ritesh and Neerkaje, Atula and Sharma, Vasu and Guhathakurta, Dipanwita and Chava, Sudheer | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6321--6332 | Non-Fungible Tokens (NFTs) are a relatively unexplored class of assets. Designing strategies to forecast NFT trends is an intricate task due to its extremely volatile nature. The market is largely driven by public sentiment and {\textquotedblleft}hype{\textquotedblright}, which in turn has a high correlation with conversations taking place on social media platforms like Twitter. Prior work done for modelling stock market data does not take into account the extent of impact certain highly influential tweets and their authors can have on the market. Building on these limitations and the nature of the NFT market, we propose a novel reach-aware temporal learning approach to make predictions for forecasting future trends in the NFT market. We perform experiments on a new dataset consisting of over 1.3 million tweets and 180 thousand NFT transactions spanning over 15 NFT collections curated by us. Our model (TA-NFT) outperforms other state-of-the-art methods by an average of 36{\%}. Through extensive quantitative and ablative analysis, we demonstrate the ability of our approach as a practical method for predicting NFT trends. | null | null | 10.18653/v1/2022.findings-emnlp.471 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,983 |
inproceedings | oba-etal-2022-entity | Entity Embedding Completion for Wide-Coverage Entity Disambiguation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.472/ | Oba, Daisuke and Yamada, Ikuya and Yoshinaga, Naoki and Toyoda, Masashi | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6333--6344 | Entity disambiguation (ED) is typically solved by learning to classify a given mention into one of the entities in the model`s entity vocabulary by referring to their embeddings. However, this approach cannot address mentions of entities that are not covered by the entity vocabulary. Aiming to enhance the applicability of ED models, we propose a method of extending a state-of-the-art ED model by dynamically computing embeddings of out-of-vocabulary entities. Specifically, our method computes embeddings from entity descriptions and mention contexts. Experiments with standard benchmark datasets show that the extended model performs comparable to or better than existing models whose entity embeddings are trained for all candidate entities as well as embedding-free models. We release our source code and model checkpoints at https://github.com/studio-ousia/steel. | null | null | 10.18653/v1/2022.findings-emnlp.472 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,984 |
inproceedings | zhao-etal-2022-entity | Entity-level Interaction via Heterogeneous Graph for Multimodal Named Entity Recognition | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.473/ | Zhao, Gang and Dong, Guanting and Shi, Yidong and Yan, Haolong and Xu, Weiran and Li, Si | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6345--6350 | Multimodal Named Entity Recognition (MNER) faces two specific challenges: 1) How to capture useful entity-related visual information. 2) How to alleviate the interference of visual noise. Previous works have gained progress by improving interacting mechanisms or seeking for better visual features. However, existing methods neglect the integrity of entity semantics and conduct cross-modal interaction at token-level, which cuts apart the semantics of entities and makes non-entity tokens easily interfered with by irrelevant visual noise. Thus in this paper, we propose an end-to-end heterogeneous Graph-based Entity-level Interacting model (GEI) for MNER. GEI first utilizes a span detection subtask to obtain entity representations, which serve as the bridge between two modalities. Then, the heterogeneous graph interacting network interacts entity with object nodes to capture entity-related visual information, and fuses it into only entity-associated tokens to rid non-entity tokens of the visual noise. Experiments on two widely used datasets demonstrate the effectiveness of our method. Our code will be available at https://github.com/GangZhao98/GEI. | null | null | 10.18653/v1/2022.findings-emnlp.473 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,985 |
inproceedings | manzoor-etal-2022-status | Status Biases in Deliberation Online: Evidence from a Randomized Experiment on {C}hange{M}y{V}iew | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.474/ | Manzoor, Emaad and Jo, Yohan and Montgomery, Alan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6351--6363 | Status is widely used to incentivize user engagement online. However, visible status indicators could inadvertently bias online deliberation to favor high-status users. In this work, we design and deploy a randomized experiment on the ChangeMyView platform to quantify status biases in deliberation online. We find strong evidence of status bias: hiding status on ChangeMyView increases the persuasion rate of moderate-status users by 84{\%} and decreases the persuasion rate of high-status users by 41{\%} relative to the control group. We also find that the persuasive power of status is moderated by verbosity, suggesting that status is used as an information-processing heuristic under cognitive load. Finally, we find that a user`s status influences the argumentation behavior of other users they interact with in a manner that disadvantages low and moderate-status users. | null | null | 10.18653/v1/2022.findings-emnlp.474 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,986 |
inproceedings | tian-etal-2022-empathetic | Empathetic and Emotionally Positive Conversation Systems with an Emotion-specific Query-Response Memory | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.475/ | Tian, Zhiliang and Wang, Yinliang and Song, Yiping and Zhang, Chi and Lee, Dongkyu and Zhao, Yingxiu and Li, Dongsheng and Zhang, Nevin L. | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6364--6376 | Emotional conversation systems generate responses for the input queries considering the speaker`s emotions in a conversation. Existing emotional conversation systems output emotional responses according to either a given emotion or the user`s emotion reflected in the input queries. Following a given emotion may lead to an emotional drift between the given emotion and the conversation state, and following only the user`s emotion may aggravate the user`s negative feelings if users suffer from a negative mood. In this paper, we propose to generate empathetic responses catering to the user`s emotions while leading the conversation to be emotionally positive. Particularly, by abstracting the conversation corpus, we extract and store the different responding strategies for different users' emotions and conversational topics into a memory. We encourage positive emotions in conversation via a sentiment evaluator. We model the memory outputs with a Gaussian mixture distribution and sample a final responding strategy from the distribution. The strategy acts as a condition to a transformer model to generate responses. The experiments verify our model surpasses the baseline methods in appropriateness, diversity, and generating emotionally positive responses. | null | null | 10.18653/v1/2022.findings-emnlp.475 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,987 |
inproceedings | wang-sun-2022-trial2vec | {T}rial2{V}ec: Zero-Shot Clinical Trial Document Similarity Search using Self-Supervision | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.476/ | Wang, Zifeng and Sun, Jimeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6377--6390 | Clinical trials are essential for drug development but are extremely expensive and time-consuming to conduct. It is beneficial to study similar historical trials when designing a clinical trial. However, lengthy trial documents and lack of labeled data make trial similarity search difficult. We propose a zero-shot clinical trial retrieval method, called Trial2Vec, which learns through self-supervision without the need for annotating similar clinical trials. Specifically, the meta-structure of trial documents (e.g., title, eligibility criteria, target disease) along with clinical knowledge (e.g., UMLS knowledge base) are leveraged to automatically generate contrastive samples. Besides, Trial2Vec encodes trial documents considering meta-structure, thus producing compact embeddings aggregating multi-aspect information from the whole document. We show that our method yields medically interpretable embeddings by visualization and it gets 15{\%} average improvement over the best baselines on precision/recall for trial retrieval, which is evaluated on our labeled 1600 trial pairs. In addition, we prove the pretrained embeddings benefit the downstream trial outcome prediction task over 240k trials. Software is available at https://github.com/RyanWangZf/Trial2Vec. | null | null | 10.18653/v1/2022.findings-emnlp.476 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,988 |
inproceedings | li-etal-2022-mimicking | From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.477/ | Li, Lei and Lin, Yankai and Ren, Xuancheng and Zhao, Guangxiang and Li, Peng and Zhou, Jie and Sun, Xu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6391--6402 | Investigating better ways to reuse the released pre-trained language models (PLMs) can significantly reduce the computational cost and the potential environmental side-effects. This paper explores a novel PLM reuse paradigm, Knowledge Integration (KI). Without human annotations available, KI aims to merge the knowledge from different teacher-PLMs, each of which specializes in a different classification problem, into a versatile student model. To achieve this, we first derive the correlation between virtual golden supervision and teacher predictions. We then design a Model Uncertainty{--}aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student. Specifically, MUKI adopts Monte-Carlo Dropout to estimate model uncertainty for the supervision integration. An instance-wise re-weighting mechanism based on the margin of uncertainty scores is further incorporated, to deal with the potential conflicting supervision from teachers.Experimental results demonstrate that MUKI achieves substantial improvements over baselines on benchmark datasets. Further analysis shows that MUKI can generalize well for merging teacher models with heterogeneous architectures, and even teachers major in cross-lingual datasets. | null | null | 10.18653/v1/2022.findings-emnlp.477 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,989 |
inproceedings | garcia-ferrero-etal-2022-model | Model and Data Transfer for Cross-Lingual Sequence Labelling in Zero-Resource Settings | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.478/ | Garc{\'i}a-Ferrero, Iker and Agerri, Rodrigo and Rigau, German | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6403--6416 | Zero-resource cross-lingual transfer approaches aim to apply supervised models from a source language to unlabelled target languages. In this paper we perform an in-depth study of the two main techniques employed so far for cross-lingual zero-resource sequence labelling, based either on data or model transfer. Although previous research has proposed translation and annotation projection (data-based cross-lingual transfer) as an effective technique for cross-lingual sequence labelling, in this paper we experimentally demonstrate that high-capacity multilingual language models applied in a zero-shot (model-based cross-lingual transfer) setting consistently outperform data-based cross-lingual transfer approaches. A detailed analysis of our results suggests that this might be due to important differences in language use. More specifically, machine translation often generates a textual signal which is different to what the models are exposed to when using gold standard data, which affects both the fine-tuning and evaluation processes. Our results also indicate that data-based cross-lingual transfer approaches remain a competitive option when high-capacity multilingual language models are not available. | null | null | 10.18653/v1/2022.findings-emnlp.478 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,990 |
inproceedings | kanjirangat-etal-2022-early | Early Guessing for Dialect Identification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.479/ | Kanjirangat, Vani and Samardzic, Tanja and Rinaldi, Fabio and Dolamic, Ljiljana | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6417--6426 | This paper deals with the problem of incremental dialect identification. Our goal is to reliably determine the dialect before the full utterance is given as input. The major part of the previous research on dialect identification has been model-centric, focusing on performance. We address a new question: How much input is needed to identify a dialect? Our approach is a data-centric analysis that results in general criteria for finding the shortest input needed to make a plausible guess. Working with three sets of language dialects (Swiss German, Indo-Aryan and Arabic languages), we show that it is possible to generalize across dialects and datasets with two input shortening criteria: model confidence and minimal input length (adjusted for the input type). The source code for experimental analysis can be found at Github. | null | null | 10.18653/v1/2022.findings-emnlp.479 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,991 |
inproceedings | ni-etal-2022-r | {R}-{AT}: Regularized Adversarial Training for Natural Language Understanding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.480/ | Ni, Shiwen and Li, Jiawen and Kao, Hung-Yu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6427--6440 | Currently, adversarial training has become a popular and powerful regularization method in the natural language domain. In this paper, we propose Regularized Adversarial Training (R-AT) via dropout, which forces the output probability distributions of different sub-models generated by dropout to be consistent under the same adversarial samples. Specifically, we generate adversarial samples by perturbing the word embeddings. For each adversarial sample fed to the model, R-AT minimizes both the adversarial risk and the bidirectional KL-divergence between the adversarial output distributions of two sub-models sampled by dropout. Through extensive experiments on 13 public natural language understanding datasets, we found that R-AT has improvements for many models (e.g., rnn-based, cnn-based, and transformer-based models). For the GLUE benchmark, when R-AT is only applied to the fine-tuning stage, it is able to improve the overall test score of the BERT-base model from 78.3 to 79.6 and the RoBERTa-large model from 88.1 to 88.6. Theoretical analysis reveals that R-AT has potential gradient regularization during the training process. Furthermore, R-AT can reduce the inconsistency between training and testing of models with dropout. | null | null | 10.18653/v1/2022.findings-emnlp.480 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,992 |
inproceedings | karisani-etal-2022-multi | Multi-View Active Learning for Short Text Classification in User-Generated Data | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.481/ | Karisani, Payam and Karisani, Negin and Xiong, Li | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6441--6453 | Mining user-generated data often suffers from the lack of enough labeled data, short document lengths, and the informal user language. In this paper, we propose a novel active learning model to overcome these obstacles in the tasks tailored for query phrases{--}e.g., detecting positive reports of natural disasters. Our model has three novelties: 1) It is the first approach to employ multi-view active learning in this domain. 2) It uses the Parzen-Rosenblatt window method to integrate the representativeness measure into multi-view active learning. 3) It employs a query-by-committee strategy, based on the agreement between predictors, to address the usually noisy language of the documents in this domain. We evaluate our model in four publicly available Twitter datasets with distinctly different applications. We also compare our model with a wide range of baselines including those with multiple classifiers. The experiments testify that our model is highly consistent and outperforms existing models. | null | null | 10.18653/v1/2022.findings-emnlp.481 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,993 |
inproceedings | wu-etal-2022-forging | Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.482/ | Wu, Hongqiu and Ding, Ruixue and Zhao, Hai and Chen, Boli and Xie, Pengjun and Huang, Fei and Zhang, Min | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6454--6466 | Multiple pre-training objectives fill the vacancy of the understanding capability of single-objective language modeling, which serves the ultimate purpose of pre-trained language models (PrLMs), generalizing well on a mass of scenarios. However, learning multiple training objectives in a single model is challenging due to the unknown relative significance as well as the potential contrariety between them. Empirical studies have shown that the current objective sampling in an ad-hoc manual setting makes the learned language representation barely converge to the desired optimum. Thus, we propose \textit{MOMETAS}, a novel adaptive sampler based on meta-learning, which learns the latent sampling pattern on arbitrary pre-training objectives. Such a design is lightweight with negligible additional training overhead. To validate our approach, we adopt five objectives and conduct continual pre-training with BERT-base and BERT-large models, where MOMETAS demonstrates universal performance gain over other rule-based sampling strategies on 14 natural language processing tasks. | null | null | 10.18653/v1/2022.findings-emnlp.482 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,994 |
inproceedings | limkonchotiwat-etal-2022-congen | {C}on{G}en: Unsupervised Control and Generalization Distillation For Sentence Representation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.483/ | Limkonchotiwat, Peerat and Ponwitayarat, Wuttikorn and Lowphansirikul, Lalita and Udomcharoenchaikit, Can and Chuangsuwanich, Ekapol and Nutanong, Sarana | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6467--6480 | Sentence representations are essential in many NLP tasks operating at the sentence level. Recently, research attention has shifted towards learning how to represent sentences without any annotations, i.e., unsupervised representation learning. Despite the benefit of training without supervised data, there is still a performance penalty compared to supervised methods. Furthermore, the supervised-unsupervised performance gap widens as we reduce the model size. In this paper, we propose an unsupervised sentence representation method to reduce the supervised-unsupervised performance gap, especially for smaller models. Utilizing the concept for knowledge distillation, we derive a distillation framework comprising two training objectives, control and generalize, called ConGen. Experiments on semantic textual similarity (STS), text classification (transfer), and natural language inference (NLI) tasks show that ConGen is on par with supervised training even on smaller models. Furthermore, our method consistently outperformed competitors on multilingual STS. The code and models are available at https://github.com/KornWtp/ConGen. | null | null | 10.18653/v1/2022.findings-emnlp.483 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,995 |
inproceedings | anil-etal-2022-large | Large-Scale Differentially Private {BERT} | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.484/ | Anil, Rohan and Ghazi, Badih and Gupta, Vineet and Kumar, Ravi and Manurangsi, Pasin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6481--6491 | In this work, we study the large-scale pretraining of BERT-Large (Devlin et al., 2019) with differentially private SGD (DP-SGD). We show that combined with a careful implementation, scaling up the batch size to millions (i.e., mega-batches) improves the utility of the DP-SGD step for BERT; we also enhance the training efficiency by using an increasing batch size schedule. Our implementation builds on the recent work of Subramani et al (2020), who demonstrated that the overhead of a DP-SGD step is minimized with effective use of JAX (Bradbury et al., 2018; Frostig et al., 2018) primitives in conjunction with the XLA compiler (XLA team and collaborators, 2017). Our implementation achieves a masked language model accuracy of 60.5{\%} at a batch size of 2M, for epsilon=5, which is a reasonable privacy setting. To put this number in perspective, non-private BERT models achieve an accuracy of {\ensuremath{\sim}}70{\%}. | null | null | 10.18653/v1/2022.findings-emnlp.484 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,996 |
inproceedings | gu-feng-2022-improving | Improving Zero-Shot Multilingual Translation with Universal Representations and Cross-Mapping | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.485/ | Gu, Shuhao and Feng, Yang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6492--6504 | The many-to-many multilingual neural machine translation can translate between language pairs unseen during training, i.e., zero-shot translation. Improving zero-shot translation requires the model to learn universal representations and cross-mapping relationships to transfer the knowledge learned on the supervised directions to the zero-shot directions. In this work, we propose the state mover`s distance based on the optimal theory to model the difference of the representations output by the encoder. Then, we bridge the gap between the semantic-equivalent representations of different languages at the token level by minimizing the proposed distance to learn universal representations. Besides, we propose an agreement-based training scheme, which can help the model make consistent predictions based on the semantic-equivalent sentences to learn universal cross-mapping relationships for all translation directions. The experimental results on diverse multilingual datasets show that our method can improve consistently compared with the baseline system and other contrast methods. The analysis proves that our method can better align the semantic space and improve the prediction consistency. | null | null | 10.18653/v1/2022.findings-emnlp.485 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,997 |
inproceedings | hu-etal-2022-controllable | Controllable Fake Document Infilling for Cyber Deception | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.486/ | Hu, Yibo and Lin, Yu and Skorupa Parolin, Erick and Khan, Latifur and Hamlen, Kevin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6505--6519 | Recent works in cyber deception study how to deter malicious intrusion by generating multiple fake versions of a critical document to impose costs on adversaries who need to identify the correct information. However, existing approaches are context-agnostic, resulting in sub-optimal and unvaried outputs. We propose a novel context-aware model, Fake Document Infilling (FDI), by converting the problem to a controllable mask-then-infill procedure. FDI masks important concepts of varied lengths in the document, then infills a realistic but fake alternative considering both the previous and future contexts. We conduct comprehensive evaluations on technical documents and news stories. Results show that FDI outperforms the baselines in generating highly believable fakes with moderate modification to protect critical information and deceive adversaries. | null | null | 10.18653/v1/2022.findings-emnlp.486 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,998 |
inproceedings | benton-etal-2022-weakly | Weakly Supervised Headline Dependency Parsing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.487/ | Benton, Adrian and Shi, Tianze and {\.I}rsoy, Ozan and Malioutov, Igor | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6520--6535 | English news headlines form a register with unique syntactic properties that have been documented in linguistics literature since the 1930s. However, headlines have received surprisingly little attention from the NLP syntactic parsing community. We aim to bridge this gap by providing the first news headline corpus of Universal Dependencies annotated syntactic dependency trees, which enables us to evaluate existing state-of-the-art dependency parsers on news headlines. To improve English news headline parsing accuracies, we develop a projection method to bootstrap silver training data from unlabeled news headline-article lead sentence pairs. Models trained on silver headline parses demonstrate significant improvements in performance over models trained solely on gold-annotated long-form texts. Ultimately, we find that, although projected silver training data improves parser performance across different news outlets, the improvement is moderated by constructions idiosyncratic to outlet. | null | null | 10.18653/v1/2022.findings-emnlp.487 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 26,999 |
inproceedings | kryscinski-etal-2022-booksum | {BOOKSUM}: A Collection of Datasets for Long-form Narrative Summarization | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.488/ | Kryscinski, Wojciech and Rajani, Nazneen and Agarwal, Divyansh and Xiong, Caiming and Radev, Dragomir | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6536--6558 | The majority of existing text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases. While relevant, such datasets will offer limited challenges for future text summarization systems. We address these issues by introducing BOOKSUM, a collection of datasets for long-form narrative summarization. Our dataset covers documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level. The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures. To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset. | null | null | 10.18653/v1/2022.findings-emnlp.488 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,000 |
inproceedings | xu-etal-2022-errors | Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.489/ | Xu, Wenda and Tuan, Yi-Lin and Lu, Yujie and Saxon, Michael and Li, Lei and Wang, William Yang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6559--6574 | Is it possible to build a general and automatic natural language generation (NLG) evaluation metric? Existing learned metrics either perform unsatisfactorily or are restricted to tasks where large human rating data is already available. We introduce SESCORE, a model-based metric that is highly correlated with human judgements without requiring human annotation, by utilizing a novel, iterative error synthesis and severity scoring pipeline. This pipeline applies a series of plausible errors to raw text and assigns severity labels by simulating human judgements with entailment. We evaluate SESCORE against existing metrics by comparing how their scores correlate with human ratings. SESCORE outperforms all prior unsupervised metrics on multiple diverse NLG tasks including machine translation, image captioning, and WebNLG text generation. For WMT 20/21 En-De and Zh-En, SESCORE improves the average Kendall correlation with human judgement from 0.154 to 0.195. SESCORE even achieves comparable performance to the best supervised metric COMET, despite receiving no human annotated training data. | null | null | 10.18653/v1/2022.findings-emnlp.489 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,001
inproceedings | lu-etal-2022-summarization | Summarization as Indirect Supervision for Relation Extraction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.490/ | Lu, Keming and Hsu, I-Hung and Zhou, Wenxuan and Ma, Mingyu Derek and Chen, Muhao | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6575--6594 | Relation extraction (RE) models have been challenged by their reliance on training data with expensive annotations. Considering that summarization tasks aim at acquiring concise expressions of synoptical information from the longer context, these tasks naturally align with the objective of RE, i.e., extracting a kind of synoptical information that describes the relation of entity mentions. We present SuRE, which converts RE into a summarization formulation. SuRE leads to more precise and resource-efficient RE based on indirect supervision from summarization tasks. To achieve this goal, we develop sentence and relation conversion techniques that essentially bridge the formulation of summarization and RE tasks. We also incorporate constraint decoding techniques with Trie scoring to further enhance summarization-based RE with robust inference. Experiments on three RE datasets demonstrate the effectiveness of SuRE in both full-dataset and low-resource settings, showing that summarization is a promising source of indirect supervision signals to improve RE models. | null | null | 10.18653/v1/2022.findings-emnlp.490 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,002 |
inproceedings | mao-etal-2022-digat | {DIGAT}: Modeling News Recommendation with Dual-Graph Interaction | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.491/ | Mao, Zhiming and Li, Jian and Wang, Hongru and Zeng, Xingshan and Wong, Kam-Fai | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6595--6607 | News recommendation (NR) is essential for online news services. Existing NR methods typically adopt a news-user representation learning framework, facing two potential limitations. First, in the news encoder, encoding a single candidate news article suffers from insufficient semantic information. Second, existing graph-based NR methods are promising but lack effective news-user feature interaction, rendering the graph-based recommendation suboptimal. To overcome these limitations, we propose dual-interactive graph attention networks (DIGAT) consisting of news- and user-graph channels. In the news-graph channel, we enrich the semantics of single candidate news by incorporating the semantically relevant news information with a semantic-augmented graph (SAG). In the user-graph channel, multi-level user interests are represented with a news-topic graph. Most notably, we design a dual-graph interaction process to perform effective feature interaction between the news and user graphs, which facilitates accurate news-user representation matching. Experiment results on the benchmark dataset MIND show that DIGAT outperforms existing news recommendation methods. Further ablation studies and analyses validate the effectiveness of (1) semantic-augmented news graph modeling and (2) dual-graph interaction. | null | null | 10.18653/v1/2022.findings-emnlp.491 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,003
inproceedings | wang-etal-2022-smash | {SMASH}: Improving {SMA}ll Language Models' Few-{SH}ot Ability with Prompt-Based Distillation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.492/ | Wang, Yueqian and Liu, Chang and Chen, Kai and Wang, Xi and Zhao, Dongyan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6608--6619 | Large-scale language models coupled with prompts have shown remarkable performance on few-shot learning. However, through systematic experiments, we find that the few-shot performance of small language models is poor, and using prompts on them brings fewer improvements than on larger ones. In this paper, we propose SMASH, an approach to improve SMAll language models' few-SHot ability by training on intermediate tasks before prompt-based fine-tuning on downstream tasks. We design intermediate tasks for sentence-pair tasks and sentiment classification tasks by creating training examples with prompt templates similar to downstream tasks using sentences sampled from a large-scale unsupervised corpus, and apply knowledge distillation to distill from outputs of larger pre-trained models as the training objective. We conduct extensive experiments and show that SMASH can make a 6-layer DistilRoBERTa-base achieve comparable performance on few-shot datasets with a 12-layer RoBERTa-base at a low cost. | null | null | 10.18653/v1/2022.findings-emnlp.492 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,004
inproceedings | li-etal-2022-consecutive | Consecutive Question Generation via Dynamic Multitask Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.493/ | Li, Yunji and Li, Sujian and Shi, Xing | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6620--6635 | In this paper, we propose the task of consecutive question generation (CQG), which generates a set of logically related question-answer pairs to understand a whole passage, with a comprehensive consideration of the aspects including accuracy, coverage, and informativeness. To achieve this, we first examine the four key elements of CQG, i.e., question, answer, rationale, and context history, and propose a novel dynamic multitask framework with one main task generating a question-answer pair, and four auxiliary tasks generating other elements. It directly helps the model generate good questions through both joint training and self-reranking. At the same time, to fully explore the worth-asking information in a given passage, we make use of the reranking losses to sample the rationales and search for the best question series globally. Finally, we measure our strategy by QA data augmentation and manual evaluation, as well as a novel application of generated question-answer pairs on DocNLI. We prove that our strategy can improve question generation significantly and benefit multiple related NLP tasks. | null | null | 10.18653/v1/2022.findings-emnlp.493 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,005
inproceedings | meyer-buys-2022-subword | Subword Segmental Language Modelling for Nguni Languages | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.494/ | Meyer, Francois and Buys, Jan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6636--6649 | Subwords have become the standard units of text in NLP, enabling efficient open-vocabulary models. With algorithms like byte-pair encoding (BPE), subword segmentation is viewed as a preprocessing step applied to the corpus before training. This can lead to sub-optimal segmentations for low-resource languages with complex morphologies. We propose a subword segmental language model (SSLM) that learns how to segment words while being trained for autoregressive language modelling. By unifying subword segmentation and language modelling, our model learns subwords that optimise LM performance. We train our model on the 4 Nguni languages of South Africa. These are low-resource agglutinative languages, so subword information is critical. As an LM, SSLM outperforms existing approaches such as BPE-based models on average across the 4 languages. Furthermore, it outperforms standard subword segmenters on unsupervised morphological segmentation. We also train our model as a word-level sequence model, resulting in an unsupervised morphological segmenter that outperforms existing methods by a large margin for all 4 languages. Our results show that learning subword segmentation is an effective alternative to existing subword segmenters, enabling the model to discover morpheme-like subwords that improve its LM capabilities. | null | null | 10.18653/v1/2022.findings-emnlp.494 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,006 |
inproceedings | si-etal-2022-towards | Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.495/ | Si, Qingyi and Liu, Yuanxin and Meng, Fandong and Lin, Zheng and Fu, Peng and Cao, Yanan and Wang, Weiping and Zhou, Jie | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6650--6662 | Models for Visual Question Answering (VQA) often rely on the spurious correlations, i.e., the language priors, that appear in the biased samples of the training set, which make them brittle against the out-of-distribution (OOD) test data. Recent methods have achieved promising progress in overcoming this problem by reducing the impact of biased samples on model training. However, these models reveal a trade-off in that the improvements on OOD data severely sacrifice the performance on the in-distribution (ID) data (which is dominated by the biased samples). Therefore, we propose a novel contrastive learning approach, MMBS, for building robust VQA models by Making the Most of Biased Samples. Specifically, we construct positive samples for contrastive learning by eliminating the information related to spurious correlation from the original training samples and explore several strategies to use the constructed positive samples for training. Instead of undermining the importance of biased samples in model training, our approach precisely exploits the biased samples for unbiased information that contributes to reasoning. The proposed method is compatible with various VQA backbones. We validate our contributions by achieving competitive performance on the OOD dataset VQA-CP v2 while preserving robust performance on the ID dataset VQA v2. | null | null | 10.18653/v1/2022.findings-emnlp.495 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,007
inproceedings | bao-etal-2022-p3lm | {P}3{LM}: Probabilistically Permuted Prophet Language Modeling for Generative Pre-Training | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.496/ | Bao, Junwei and Wang, Yifan and Jiangyong, Ying and Gong, Yeyun and Zhao, Jing and Wu, Youzheng and He, Xiaodong | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6663--6675 | Conventional autoregressive left-to-right (L2R) sequence generation faces two issues during decoding: it is limited to unidirectional target sequence modeling, and constrained by strong local dependencies. To address these issues, we propose P3LM, a probabilistically permuted prophet language model, which strengthens the modeling of bidirectional information and long token dependencies for sequence generation. Specifically, P3LM learns to generate tokens in permuted order upon an order-aware transformer decoder, as well as to generate the corresponding future N tokens with a multi-stream attention mechanism. Extensive experiments are conducted on the GLGE benchmark, which includes four datasets for summarization, two for question generation, one for conversational question answering, and one for dialog response generation, where P3LM achieves state-of-the-art results compared with strong publicly available generative pre-training methods. | null | null | 10.18653/v1/2022.findings-emnlp.496 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,008
inproceedings | chen-etal-2022-holistic | Holistic Sentence Embeddings for Better Out-of-Distribution Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.497/ | Chen, Sishuo and Bi, Xiaohan and Gao, Rundong and Sun, Xu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6676--6686 | Detecting out-of-distribution (OOD) instances is significant for the safe deployment of NLP models. Among recent textual OOD detection works based on pretrained language models (PLMs), distance-based methods have shown superior performance. However, they estimate sample distance scores in the last-layer CLS embedding space and thus do not make full use of the linguistic information underlying PLMs. To address the issue, we propose to boost OOD detection by deriving more holistic sentence embeddings. On the basis of the observations that token averaging and layer combination contribute to improving OOD detection, we propose a simple embedding approach named Avg-Avg, which averages all token representations from each intermediate layer as the sentence embedding and significantly surpasses the state-of-the-art on a comprehensive suite of benchmarks by a 9.33{\%} FAR95 margin. Furthermore, our analysis demonstrates that it indeed helps preserve general linguistic knowledge in fine-tuned PLMs and substantially benefits detecting background shifts. The simple yet effective embedding method can be applied to fine-tuned PLMs with negligible extra costs, providing a free gain in OOD detection. Our code is available at https://github.com/lancopku/Avg-Avg. | null | null | 10.18653/v1/2022.findings-emnlp.497 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,009
inproceedings | wang-etal-2022-muger2 | {M}u{GER}2: Multi-Granularity Evidence Retrieval and Reasoning for Hybrid Question Answering | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.498/ | Wang, Yingyao and Bao, Junwei and Duan, Chaoqun and Wu, Youzheng and He, Xiaodong and Zhao, Tiejun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6687--6697 | Hybrid question answering (HQA) aims to answer questions over heterogeneous data, including tables and passages linked to table cells. The heterogeneous data can provide different granularity evidence to HQA models, i.e., column, row, cell, and link. Conventional HQA models usually retrieve coarse- or fine-grained evidence to reason out the answer. Through comparison, we find that coarse-grained evidence is easier to retrieve but contributes less to the reasoner, while fine-grained evidence is the opposite. To preserve the advantage and eliminate the disadvantage of different granularity evidence, we propose MuGER2, a Multi-Granularity Evidence Retrieval and Reasoning approach. In evidence retrieval, a unified retriever is designed to learn the multi-granularity evidence from the heterogeneous data. In answer reasoning, an evidence selector is proposed to navigate the fine-grained evidence for the answer reader based on the learned multi-granularity evidence. Experiment results on the HybridQA dataset show that MuGER2 significantly boosts the HQA performance. Further ablation analysis verifies the effectiveness of both the retrieval and reasoning designs. | null | null | 10.18653/v1/2022.findings-emnlp.498 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,010
inproceedings | whitehouse-etal-2022-entitycs | {E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.499/ | Whitehouse, Chenxi and Christopoulou, Fenia and Iacobacci, Ignacio | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6698--6714 | Accurate alignment between languages is fundamental for improving cross-lingual pre-trained language models (XLMs). Motivated by the natural phenomenon of code-switching (CS) in multilingual speakers, CS has been used as an effective data augmentation method that offers language alignment at word- or phrase-level, in contrast to sentence-level via parallel instances. Existing approaches either use dictionaries or parallel sentences with word-alignment to generate CS data by randomly switching words in a sentence. However, such methods can be suboptimal as dictionaries disregard semantics, and syntax might become invalid after random word switching. In this work, we propose EntityCS, a method that focuses on Entity-level Code-Switching to capture fine-grained cross-lingual semantics without corrupting syntax. We use Wikidata and the English Wikipedia to construct an entity-centric CS corpus by switching entities to their counterparts in other languages. We further propose entity-oriented masking strategies during intermediate model training on the EntityCS corpus for improving entity prediction. Evaluation of the trained models on four entity-centric downstream tasks shows consistent improvements over the baseline with a notable increase of 10{\%} in Fact Retrieval. We release the corpus and models to assist research on code-switching and enriching XLMs with external knowledge. | null | null | 10.18653/v1/2022.findings-emnlp.499 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,011 |
inproceedings | sang-etal-2022-mbti | {MBTI} Personality Prediction for Fictional Characters Using Movie Scripts | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.500/ | Sang, Yisi and Mou, Xiangyang and Yu, Mo and Wang, Dakuo and Li, Jing and Stanton, Jeffrey | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6715--6724 | An NLP model that understands stories should be able to understand the characters in them. To support the development of neural models for this purpose, we construct a benchmark, Story2Personality. The task is to predict a movie character`s MBTI or Big 5 personality types based on the narratives of the character. Experiments show that our task is challenging for the existing text classification models, as none is able to largely outperform random guesses. We further propose a multi-view model for personality prediction using both verbal and non-verbal descriptions, which yields improvements over using only verbal descriptions. The uniqueness and challenges in our dataset call for the development of narrative comprehension techniques from the perspective of understanding characters. | null | null | 10.18653/v1/2022.findings-emnlp.500 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,012
inproceedings | kobayashi-etal-2022-simple | A Simple and Strong Baseline for End-to-End Neural {RST}-style Discourse Parsing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.501/ | Kobayashi, Naoki and Hirao, Tsutomu and Kamigaito, Hidetaka and Okumura, Manabu and Nagata, Masaaki | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6725--6737 | To promote and further develop RST-style discourse parsing models, we need a strong baseline that can be regarded as a reference for reporting reliable experimental results. This paper explores a strong baseline by integrating existing simple parsing strategies, top-down and bottom-up, with various transformer-based pre-trained language models. The experimental results obtained from two benchmark datasets demonstrate that the parsing performance strongly relies on the pre-trained language models rather than the parsing strategies. In particular, the bottom-up parser achieves large performance gains compared to the current best parser when employing DeBERTa. We further reveal that language models with a span-masking scheme especially boost the parsing performance through our analysis within intra- and multi-sentential parsing, and nuclearity prediction. | null | null | 10.18653/v1/2022.findings-emnlp.501 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,013
inproceedings | arps-etal-2022-probing | Probing for Constituency Structure in Neural Language Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.502/ | Arps, David and Samih, Younes and Kallmeyer, Laura and Sajjad, Hassan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6738--6757 | In this paper, we investigate to what extent contextual neural language models (LMs) implicitly learn syntactic structure. More concretely, we focus on constituent structure as represented in the Penn Treebank (PTB). Using standard probing techniques based on diagnostic classifiers, we assess the accuracy of representing constituents of different categories within the neuron activations of an LM such as RoBERTa. In order to make sure that our probe focuses on syntactic knowledge and not on implicit semantic generalizations, we also experiment on a PTB version that is obtained by randomly replacing constituents with each other while keeping syntactic structure, i.e., a semantically ill-formed but syntactically well-formed version of the PTB. We find that 4 pretrained transformer LMs obtain high performance on our probing tasks even on manipulated data, suggesting that semantic and syntactic knowledge in their representations can be separated and that constituency information is in fact learned by the LM. Moreover, we show that a complete constituency tree can be linearly separated from LM representations. | null | null | 10.18653/v1/2022.findings-emnlp.502 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,014
inproceedings | andrejczuk-etal-2022-table | Table-To-Text generation and pre-training with {T}ab{T}5 | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.503/ | Andrejczuk, Ewa and Eisenschlos, Julian and Piccinno, Francesco and Krichene, Syrine and Altun, Yasemin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6758--6766 | Encoder-only transformer models have been successfully applied to different table understanding tasks, as in TAPAS. A major limitation of these architectures is that they are constrained to classification-like tasks such as cell selection or entailment detection. We present TabT5, an encoder-decoder model that generates natural language text based on tables and textual inputs. TabT5 overcomes the encoder-only limitation by incorporating a decoder component and leverages the input structure with table specific embeddings and pre-training. TabT5 achieves new state-of-the-art results on several domains, including spreadsheet formula prediction with a 15{\%} increase in sequence accuracy, QA with a 2.5{\%} increase in sequence accuracy and data-to-text generation with a 2.5{\%} increase in BLEU. | null | null | 10.18653/v1/2022.findings-emnlp.503 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,015 |
inproceedings | zare-etal-2022-pomdp | A {POMDP} Dialogue Policy with 3-way Grounding and Adaptive {S}ensing for Learning through Communication | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.504/ | Zare, Maryam and Wagner, Alan and Passonneau, Rebecca | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6767--6780 | Agents to assist with rescue, surgery, and similar activities could collaborate better with humans if they could learn new strategic behaviors through communication. We introduce a novel POMDP dialogue policy for learning from people. The policy has 3-way grounding of language in the shared physical context, the dialogue context, and persistent knowledge. It can learn distinct but related games, and can continue learning across dialogues for complex games. A novel sensing component supports adaptation to information-sharing differences across people. The single policy performs better than oracle policies customized to specific games and information behavior. | null | null | 10.18653/v1/2022.findings-emnlp.504 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,016 |
inproceedings | qasemi-etal-2022-paco | {P}a{C}o: Preconditions Attributed to Commonsense Knowledge | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.505/ | Qasemi, Ehsan and Ilievski, Filip and Chen, Muhao and Szekely, Pedro | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6781--6796 | Humans can seamlessly reason with circumstantial preconditions of commonsense knowledge. We understand that a glass is used for drinking water, unless the glass is broken or the water is toxic. Despite state-of-the-art (SOTA) language models' (LMs) impressive performance on inferring commonsense knowledge, it is unclear whether they understand the circumstantial preconditions. To address this gap, we propose a novel challenge of reasoning with circumstantial preconditions. We collect a dataset, called PaCo, consisting of 12.4 thousand preconditions of commonsense statements expressed in natural language. Based on this dataset, we create three canonical evaluation tasks and use them to examine the capability of existing LMs to understand situational preconditions. Our results reveal a 10-30{\%} gap between machine and human performance on our tasks, which shows that reasoning with preconditions is an open challenge. | null | null | 10.18653/v1/2022.findings-emnlp.505 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,017 |
inproceedings | blair-bar-2022-improving | Improving Few-Shot Domain Transfer for Named Entity Disambiguation with Pattern Exploitation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.506/ | Blair, Philip and Bar, Kfir | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6797--6810 | Named entity disambiguation (NED) is a critical subtask of entity linking, which seeks to connect knowledge base entities with textual mentions of those entities. Naturally, the performance of a model depends on the domain it was trained on; thus, reducing the amount of data required to train models is advantageous. In this work, we leverage recent research on pattern exploitation for NED and explore whether it can reduce the amount of data required for domain adaptation by reformulating the disambiguation task as a masked language modeling problem. Using ADAPET (Tam et al., 2021), which implements a new approach for few-shot learning using fine-tuned transformer-based language models, we produce an NED model which yields, without any sacrifice of in-domain accuracy, a 7{\%} improvement in zero-shot cross-domain performance as evaluated on NEDMed, a new NED dataset of mental health news which we release with this work. | null | null | 10.18653/v1/2022.findings-emnlp.506 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,018 |
inproceedings | guo-etal-2022-capturing | Capturing Topic Framing via Masked Language Modeling | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.507/ | Guo, Xiaobo and Ma, Weicheng and Vosoughi, Soroush | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6811--6825 | Differential framing of issues can lead to divergent world views on important issues. This is especially true in domains where the information presented can reach a large audience, such as traditional and social media. Scalable and reliable measurement of such differential framing is an important first step in addressing them. In this work, based on the intuition that framing affects the tone and word choices in written language, we propose a framework for modeling the differential framing of issues through masked token prediction via large-scale fine-tuned language models (LMs). Specifically, we explore three key factors for our framework: 1) prompt generation methods for the masked token prediction; 2) methods for normalizing the output of fine-tuned LMs; 3) robustness to the choice of pre-trained LMs used for fine-tuning. Through experiments on a dataset of articles from traditional media outlets covering five diverse and politically polarized topics, we show that our framework can capture differential framing of these topics with high reliability. | null | null | 10.18653/v1/2022.findings-emnlp.507 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,019 |
inproceedings | liu-etal-2022-wanli | {WANLI}: Worker and {AI} Collaboration for Natural Language Inference Dataset Creation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.508/ | Liu, Alisa and Swayamdipta, Swabha and Smith, Noah A. and Choi, Yejin | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6826--6847 | A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI improves performance on eight out-of-domain test sets we consider, including by 11{\%} on HANS and 9{\%} on Adversarial NLI, compared to training on the 4x larger MultiNLI. Moreover, it continues to be more effective than MultiNLI augmented with other NLI datasets. Our results demonstrate the promise of leveraging natural language generation techniques and re-imagining the role of humans in the dataset creation process. | null | null | 10.18653/v1/2022.findings-emnlp.508 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,020 |
inproceedings | spangher-etal-2022-sequentially | Sequentially Controlled Text Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.509/ | Spangher, Alexander and Ming, Yao and Hua, Xinyu and Peng, Nanyun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6848--6866 | While GPT-2 generates sentences that are remarkably human-like, longer documents can ramble and do not follow human-like writing structure. We study the problem of imposing structure on long-range text. We propose a novel controlled text generation task, sequentially controlled text generation, and identify a dataset, NewsDiscourse, as a starting point for this task. We develop a sequential controlled text generation pipeline with generation and editing. We test different degrees of structural awareness and show that, in general, more structural awareness results in higher control accuracy, grammaticality, coherence, and topicality, approaching human-level writing performance. | null | null | 10.18653/v1/2022.findings-emnlp.509 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,021
inproceedings | gu-etal-2022-revisiting | Revisiting the Roles of {\textquotedblleft}Text{\textquotedblright} in Text Games | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.510/ | Gu, Yi and Yao, Shunyu and Gan, Chuang and Tenenbaum, Josh and Yu, Mo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6867--6876 | Text games present opportunities for natural language understanding (NLU) methods to tackle reinforcement learning (RL) challenges. However, recent work has questioned the necessity of NLU by showing random text hashes could perform decently. In this paper, we pursue a fine-grained investigation into the roles of text in the face of different RL challenges, and reconcile that semantic and non-semantic language representations could be complementary rather than contrasting. Concretely, we propose a simple scheme to extract relevant contextual information into an approximate state hash as extra input for an RNN-based text agent. Such a lightweight plug-in achieves competitive performance with state-of-the-art text agents using advanced NLU techniques such as knowledge graph and passage retrieval, suggesting non-NLU methods might suffice to tackle the challenge of partial observability. However, if we remove RNN encoders and use approximate or even ground-truth state hash alone, the model performs miserably, which confirms the importance of semantic function approximation to tackle the challenge of combinatorially large observation and action spaces. Our findings and analysis provide new insights for designing better text game task setups and agents. | null | null | 10.18653/v1/2022.findings-emnlp.510 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,022 |
inproceedings | huang-etal-2022-fpt | {FPT}: Improving Prompt Tuning Efficiency via Progressive Training | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.511/ | Huang, Yufei and Qin, Yujia and Wang, Huadong and Yin, Yichun and Sun, Maosong and Liu, Zhiyuan and Liu, Qun | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6877--6887 | Recently, prompt tuning (PT) has gained increasing attention as a parameter-efficient way of tuning pre-trained language models (PLMs). Despite extensively reducing the number of tunable parameters and achieving satisfying performance, PT is training-inefficient due to its slow convergence. To improve PT`s training efficiency, we first make some novel observations about the prompt transferability of {\textquotedblleft}partial PLMs{\textquotedblright}, which are defined by compressing a PLM in depth or width. We observe that the soft prompts learned by different partial PLMs of various sizes are similar in the parameter space, implying that these soft prompts could potentially be transferred among partial PLMs. Inspired by these observations, we propose Fast Prompt Tuning (FPT), which starts by conducting PT using a small-scale partial PLM, and then progressively expands its depth and width until the full-model size. After each expansion, we recycle the previously learned soft prompts as initialization for the enlarged partial PLM and then proceed with PT. We demonstrate the feasibility of FPT on 5 tasks and show that FPT could save over 30{\%} training computations while achieving comparable performance. The codes are publicly available at https://github.com/thunlp/FastPromptTuning. | null | null | 10.18653/v1/2022.findings-emnlp.511 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,023
inproceedings | ding-etal-2022-prompt | Prompt-learning for Fine-grained Entity Typing | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.512/ | Ding, Ning and Chen, Yulin and Han, Xu and Xu, Guangwei and Wang, Xiaobin and Xie, Pengjun and Zheng, Haitao and Liu, Zhiyuan and Li, Juanzi and Kim, Hong-Gee | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6888--6901 | As an effective approach to adapting pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot, and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on four fine-grained entity typing benchmarks under fully supervised, few-shot, and zero-shot settings show the effectiveness of the prompt-learning paradigm and establish it as a powerful alternative to vanilla fine-tuning. | null | null | 10.18653/v1/2022.findings-emnlp.512 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,024
inproceedings | sandhan-etal-2022-translist | {T}rans{LIST}: A Transformer-Based Linguistically Informed {S}anskrit Tokenizer | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.513/ | Sandhan, Jivnesh and Singha, Rathin and Rao, Narein and Samanta, Suvendu and Behera, Laxmidhar and Goyal, Pawan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6902--6912 | Sanskrit Word Segmentation (SWS) is essential in making digitized texts available and in deploying downstream tasks. It is, however, non-trivial because of the sandhi phenomenon that modifies the characters at the word boundaries, and needs special treatment. Existing lexicon-driven approaches for SWS make use of the Sanskrit Heritage Reader, a lexicon-driven shallow parser, to generate the complete candidate solution space, over which various methods are applied to produce the most valid solution. However, these approaches fail while encountering out-of-vocabulary tokens. On the other hand, purely engineering methods for SWS have made use of recent advances in deep learning, but cannot make use of the latent word information when it is available. To mitigate the shortcomings of both families of approaches, we propose Transformer based Linguistically Informed Sanskrit Tokenizer (TransLIST) consisting of (1) a module that encodes the character input along with latent-word information, which takes into account the sandhi phenomenon specific to SWS and is apt to work with partial or no candidate solutions, (2) a novel soft-masked attention to prioritize potential candidate words and (3) a novel path ranking algorithm to rectify the corrupted predictions. Experiments on the benchmark datasets for SWS show that TransLIST outperforms the current state-of-the-art system by an average 7.2 points absolute gain in terms of perfect match (PM) metric. | null | null | 10.18653/v1/2022.findings-emnlp.513 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,025
inproceedings | maheshwari-etal-2022-fair | Fair {NLP} Models with Differentially Private Text Encoders | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.514/ | Maheshwari, Gaurav and Denis, Pascal and Keller, Mikaela and Bellet, Aur{\'e}lien | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6913--6930 | Encoded text representations often capture sensitive attributes about individuals (e.g., race or gender), which raise privacy concerns and can make downstream models unfair to certain groups. In this work, we propose FEDERATE, an approach that combines ideas from differential privacy and adversarial training to learn private text representations which also induce fairer models. We empirically evaluate the trade-off between the privacy of the representations and the fairness and accuracy of the downstream model on four NLP datasets. Our results show that FEDERATE consistently improves upon previous methods, and thus suggest that privacy and fairness can positively reinforce each other. | null | null | 10.18653/v1/2022.findings-emnlp.514 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,026
inproceedings | wu-etal-2022-modeling | Modeling Context With Linear Attention for Scalable Document-Level Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.515/ | Wu, Zhaofeng and Peng, Hao and Pappas, Nikolaos and Smith, Noah A. | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6931--6939 | Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations. However, these models, predominantly based on transformers, are difficult to scale to long documents as their attention layers have quadratic complexity in the sequence length. Recent efforts on efficient attention improve scalability, but their effect on document translation remains unexplored. In this work, we investigate the efficacy of a recent linear attention model by Peng et al. (2021) on document translation and augment it with a sentential gate to promote a recency inductive bias. We evaluate the model on IWSLT 2015 and OpenSubtitles 2018 against the transformer, demonstrating substantially increased decoding speed on long sequences with similar or better BLEU scores. We show that sentential gating further improves translation quality on IWSLT. | null | null | 10.18653/v1/2022.findings-emnlp.515 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,027 |
inproceedings | madasu-srivastava-2022-large | What do Large Language Models Learn beyond Language? | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.516/ | Madasu, Avinash and Srivastava, Shashank | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6940--6953 | Large language models (LMs) have rapidly become a mainstay in Natural Language Processing. These models are known to acquire rich linguistic knowledge from training on large amounts of text. In this paper, we investigate if pre-training on text also confers these models with helpful {\textquoteleft}inductive biases' for non-linguistic reasoning. On a set of 19 diverse non-linguistic tasks involving quantitative computations, recognizing regular expressions, and reasoning over strings, we find that pretrained models significantly outperform comparable non-pretrained neural models. This remains true in experiments that train non-pretrained models with fewer parameters to account for model regularization effects. We further explore the effect of text domain on LMs by pretraining models on text from different domains and provenances. Our experiments surprisingly reveal that the positive effects of pre-training persist even when pretraining on multi-lingual text or computer code, and even for text generated from synthetic languages. Our findings suggest a hitherto unexplored deep connection between pre-training and the inductive learning abilities of language models. | null | null | 10.18653/v1/2022.findings-emnlp.516 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,028
inproceedings | chakrabarty-etal-2022-consistent | {CONSISTENT}: Open-Ended Question Generation From News Articles | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.517/ | Chakrabarty, Tuhin and Lewis, Justin and Muresan, Smaranda | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6954--6968 | Recent work on question generation has largely focused on factoid questions such as who, what, where, when about basic facts. Generating open-ended why, how, what, etc. questions that require long-form answers has proven more difficult. To facilitate the generation of open-ended questions, we propose CONSISTENT, a new end-to-end system for generating open-ended questions that are answerable from and faithful to the input text. Using news articles as a trustworthy foundation for experimentation, we demonstrate our model`s strength over several baselines using both automatic and human-based evaluations. We contribute an evaluation dataset of expert-generated open-ended questions. We discuss potential downstream applications for news media organizations. | null | null | 10.18653/v1/2022.findings-emnlp.517 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,029
inproceedings | guo-etal-2022-efficient | Efficient (Soft) {Q}-Learning for Text Generation with Limited Good Data | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.518/ | Guo, Han and Tan, Bowen and Liu, Zhengzhong and Xing, Eric and Hu, Zhiting | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6969--6991 | Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models. This paradigm relies on direct supervision examples, which is not applicable to many emerging applications, such as generating adversarial attacks or generating prompts to control language models. Reinforcement learning (RL) on the other hand offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward. Yet previous RL algorithms for text generation, such as policy gradient (on-policy RL) and Q-learning (off-policy RL), are often notoriously inefficient or unstable to train due to the large sequence space and the sparse reward received only at the end of sequences. In this paper, we introduce a new RL formulation for text generation from the soft Q-learning (SQL) perspective. It enables us to draw from the latest RL advances, such as path consistency learning, to combine the best of on-/off-policy updates, and learn effectively from sparse reward. We apply the approach to a wide range of novel text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation. Experiments show our approach consistently outperforms both task-specialized algorithms and the previous RL methods. | null | null | 10.18653/v1/2022.findings-emnlp.518 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,030 |
inproceedings | banerjee-etal-2022-lexi | {L}exi: Self-Supervised Learning of the {UI} Language | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.519/ | Banerjee, Pratyay and Mahajan, Shweti and Arora, Kushal and Baral, Chitta and Riva, Oriana | Findings of the Association for Computational Linguistics: EMNLP 2022 | 6992--7007 | Humans can learn to operate the user interface (UI) of an application by reading an instruction manual or how-to guide. Along with text, these resources include visual content such as UI screenshots and images of application icons referenced in the text. We explore how to leverage this data to learn generic visio-linguistic representations of UI screens and their components. These representations are useful in many real applications, such as accessibility, voice navigation, and task automation. Prior UI representation models rely on UI metadata (UI trees and accessibility labels), which is often missing, incompletely defined, or not accessible. We avoid such a dependency, and propose Lexi, a pre-trained vision and language model designed to handle the unique features of UI screens, including their text richness and context sensitivity. To train Lexi we curate the UICaption dataset consisting of 114k UI images paired with descriptions of their functionality. We evaluate Lexi on four tasks: UI action entailment, instruction-based UI image retrieval, grounding referring expressions, and UI entity recognition. | null | null | 10.18653/v1/2022.findings-emnlp.519 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,031 |
inproceedings | peng-etal-2022-inferring | Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.520/ | Peng, Xiangyu and Li, Siyan and Wiegreffe, Sarah and Riedl, Mark | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7008--7029 | Transformer-based language model approaches to automated story generation currently provide state-of-the-art results. However, they still suffer from plot incoherence when generating narratives over time, and critically lack basic commonsense reasoning. Furthermore, existing methods generally focus only on single-character stories, or fail to track characters at all. To improve the coherence of generated narratives and to expand the scope of character-centric narrative generation, we introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework for introducing commonsense reasoning into the generation process with the option to model the interaction between multiple characters. We find that our CAST method produces significantly more coherent, on-topic, enjoyable and fluent stories than existing models in both the single-character and two-character settings in three storytelling domains. | null | null | 10.18653/v1/2022.findings-emnlp.520 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,032
inproceedings | wang-xin-2022-stop | How to Stop an Avalanche? {J}o{D}e{M}: Joint Decision Making through Compare and Contrast for Dialog State Tracking | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.521/ | Wang, Haoming and Xin, Wang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7030--7041 | Dialog state tracking (DST) is a core component in task-oriented dialog systems. Existing state-of-the-art DST models incorporate insight and intuition from human experience into the design of supplementary labels, which greatly assists the training of turn-by-turn DST models. Though the turn-by-turn scheme and supplementary labels enable satisfactory performance on the task, most DST models of this fashion label or process the raw dialogue data on the premise that the last-turn dialogue state is always correct, which is usually not the case. In this paper, we refer to the negative impact resulting from this premise as the avalanche phenomenon. We then propose JoDeM, a state-of-the-art DST model which can tackle the avalanche phenomenon with two mechanisms. The first mechanism is a joint decision making method to extract key information from the dialogue. The second is a compare-and-contrast dialogue update technique to prevent error accumulation. An example study and graph analysis are presented to support our claim about the harmfulness of the avalanche phenomenon. We also conduct quantitative and qualitative experiments on the high-quality MultiWOZ2.3 corpus to demonstrate that the proposed model not only outperforms existing state-of-the-art methods but also validates our solution to the avalanche degradation problem. | null | null | 10.18653/v1/2022.findings-emnlp.521 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,033
inproceedings | zeng-etal-2022-contrastive | Contrastive Learning with Prompt-derived Virtual Semantic Prototypes for Unsupervised Sentence Embedding | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.522/ | Zeng, Jiali and Yin, Yongjing and Jiang, Yufan and Wu, Shuangzhi and Cao, Yunbo | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7042--7053 | Contrastive learning has become a new paradigm for unsupervised sentence embeddings. Previous studies focus on instance-wise contrastive learning, attempting to construct positive pairs with textual data augmentation. In this paper, we propose a novel Contrastive learning method with Prompt-derived Virtual semantic Prototypes (ConPVP). Specifically, with the help of prompts, we construct a virtual semantic prototype for each instance, and derive negative prototypes by using the negative form of the prompts. Using a prototypical contrastive loss, we enforce the anchor sentence embedding to be close to its corresponding semantic prototypes, and far apart from the negative prototypes as well as the prototypes of other sentences. Extensive experimental results on semantic textual similarity, transfer, and clustering tasks demonstrate the effectiveness of our proposed model compared to strong baselines. Code is available at https://github.com/lemon0830/promptCSE. | null | null | 10.18653/v1/2022.findings-emnlp.522 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,034
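The prototypical contrastive objective described in this abstract can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering of a ConPVP-style loss, assuming the encoder and prompt templates have already produced anchor, prototype, and negative-prototype embeddings; the function name, temperature, and batch handling are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a prototypical contrastive loss in the spirit of ConPVP.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(anchors, prototypes, neg_prototypes, temperature=0.05):
    """anchors, prototypes, neg_prototypes: (batch, dim) embeddings.

    Each anchor is pulled toward its own prompt-derived prototype and pushed
    away from its negative prototype and from the other sentences' prototypes.
    """
    anchors = F.normalize(anchors, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    negatives = F.normalize(neg_prototypes, dim=-1)

    sim_pos = anchors @ prototypes.t() / temperature                      # (B, B)
    sim_neg = (anchors * negatives).sum(-1, keepdim=True) / temperature   # (B, 1)

    logits = torch.cat([sim_pos, sim_neg], dim=1)                         # (B, B + 1)
    targets = torch.arange(anchors.size(0), device=anchors.device)        # diagonal is positive
    return F.cross_entropy(logits, targets)

# Toy usage with random 768-dimensional embeddings:
loss = prototypical_contrastive_loss(
    torch.randn(8, 768), torch.randn(8, 768), torch.randn(8, 768))
```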
inproceedings | xu-etal-2022-weight | Weight Perturbation as Defense against Adversarial Word Substitutions | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.523/ | Xu, Jianhan and Li, Linyang and Zhang, Jiping and Zheng, Xiaoqing and Chang, Kai-Wei and Hsieh, Cho-Jui and Huang, Xuanjing | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7054--7063 | The existence and pervasiveness of textual adversarial examples have raised serious concerns for security-critical applications. Many methods have been developed to defend neural natural language processing (NLP) models against adversarial attacks. Adversarial training is one of the most successful defense methods: it adds random or intentional perturbations to the original input texts and makes the models robust to the perturbed examples. In this study, we explore the feasibility of improving the adversarial robustness of NLP models by performing perturbations in the parameter space rather than the input feature space. The weight perturbation helps to find a better solution (i.e., the values of weights) that minimizes the adversarial loss among other feasible solutions. We found that weight perturbation can significantly improve the robustness of NLP models when combined with perturbation in the input embedding space, yielding the highest accuracy on both clean and adversarial examples across different datasets. | null | null | 10.18653/v1/2022.findings-emnlp.523 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,035
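A minimal sketch of the parameter-space perturbation idea, assuming a generic PyTorch classifier: take a norm-scaled ascent step on the weights, backpropagate through the perturbed weights, then restore. The scaling rule and step size gamma are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of adversarial weight perturbation for a PyTorch model.
# The norm-scaled step and gamma are illustrative assumptions.
import torch
import torch.nn as nn

def perturb_weights(model, loss, gamma=1e-3):
    """Take one gradient-ascent step on the weights; return a backup."""
    params = [(n, p) for n, p in model.named_parameters() if p.requires_grad]
    backup = {n: p.detach().clone() for n, p in params}
    grads = torch.autograd.grad(loss, [p for _, p in params], allow_unused=True)
    with torch.no_grad():
        for (_, p), g in zip(params, grads):
            if g is not None:
                p.add_(gamma * p.norm() * g / (g.norm() + 1e-12))
    return backup

def restore_weights(model, backup):
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in backup:
                p.copy_(backup[n])

# Toy usage: perturb, backpropagate at the perturbed point, restore.
model = nn.Linear(10, 2)
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
backup = perturb_weights(model, nn.functional.cross_entropy(model(x), y))
nn.functional.cross_entropy(model(x), y).backward()  # grads at perturbed weights
restore_weights(model, backup)                       # optimizer.step() would follow
```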
inproceedings | wang-etal-2022-cort | {CORT}: A New Baseline for Comparative Opinion Classification by Dual Prompts | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.524/ | Wang, Yequan and Zhang, Hengran and Sun, Aixin and Meng, Xuying | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7064--7075 | Comparative opinion is a common linguistic phenomenon. The opinion is expressed by comparing multiple targets on a shared aspect, e.g., {\textquotedblleft}camera A is better than camera B in picture quality{\textquotedblright}. Among the various subtasks in opinion mining, comparative opinion classification is relatively less studied. Current solutions use rules or classifiers to identify opinions, i.e., better, worse, or same, through feature engineering. Because the features are directly derived from the input sentence, these solutions are sensitive to the order of the targets mentioned in the sentence. For example, {\textquotedblleft}camera A is better than camera B{\textquotedblright} means the same as {\textquotedblleft}camera B is worse than camera A{\textquotedblright}; but the features of these two sentences are completely different. In this paper, we approach comparative opinion classification through prompt learning, taking advantage of the knowledge embedded in pre-trained language models. We design a twin framework with dual prompts, named CORT. This extremely simple model delivers state-of-the-art and robust performance on all benchmark datasets for comparative opinion classification. We believe CORT serves well as a new baseline for comparative opinion classification. | null | null | 10.18653/v1/2022.findings-emnlp.524 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,036
inproceedings | yang-etal-2022-apeach | {APEACH}: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.525/ | Yang, Kichang and Jang, Wonjun and Cho, Won Ik | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7076--7086 | In hate speech detection, developing training and evaluation datasets across various domains is a critical issue. Most approaches crawl social media texts and hire crowd-workers to annotate the data. Following this convention often restricts the scope of pejorative expressions to a single domain, which limits generalization. Moreover, domain overlap between the training corpus and the evaluation set can overestimate prediction performance, especially when pretraining language models on a low-data language. To alleviate these problems in Korean, we propose APEACH, which asks unspecified users to generate hate speech examples, followed by minimal post-labeling. We find that APEACH can collect useful datasets that are less sensitive to lexical overlap between the pretraining corpus and the evaluation set, thereby properly measuring model performance. | null | null | 10.18653/v1/2022.findings-emnlp.525 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,037
inproceedings | peng-etal-2022-guiding | Guiding Neural Story Generation with Reader Models | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.526/ | Peng, Xiangyu and Xie, Kaige and Alabdulkarim, Amal and Kayam, Harshith and Dani, Samihan and Riedl, Mark | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7087--7111 | Automated storytelling has long captured the attention of researchers due to the ubiquity of narratives in everyday life. However, it is challenging to maintain coherence and stay on-topic toward a specific ending when generating narratives with neural language models. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress. A reader model infers what a human reader believes about the concepts, entities, and relations in the fictional story world. We show how an explicit reader model represented as a knowledge graph affords story coherence and provides controllability in the form of achieving a given story world state goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines in dimensions including plot plausibility and staying on topic. | null | null | 10.18653/v1/2022.findings-emnlp.526 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,038
inproceedings | adolphs-etal-2022-reason | Reason first, then respond: Modular Generation for Knowledge-infused Dialogue | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.527/ | Adolphs, Leonard and Shuster, Kurt and Urbanek, Jack and Szlam, Arthur and Weston, Jason | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7112--7132 | Large language models can produce fluent dialogue but often hallucinate factual inaccuracies. While retrieval-augmented models help alleviate this issue, they still face a difficult challenge of both reasoning to provide correct knowledge and generating conversation simultaneously. In this work, we propose a modular model, Knowledge to Response (K2R), for incorporating knowledge into conversational agents, which breaks down this problem into two easier steps. K2R first generates a knowledge sequence, given a dialogue context, as an intermediate step. After this {\textquotedblleft}reasoning step{\textquotedblright}, the model then attends to its own generated knowledge sequence, as well as the dialogue context, to produce a final response. In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks, and has advantages in terms of interpretability and modularity. In particular, it can be used to fuse QA and dialogue systems together to enable dialogue agents to give knowledgeable answers, or QA models to give conversational responses in a zero-shot setting. | null | null | 10.18653/v1/2022.findings-emnlp.527 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,039 |
inproceedings | vavre-etal-2022-adapting | Adapting Multilingual Models for Code-Mixed Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.528/ | Vavre, Aditya and Gupta, Abhirut and Sarawagi, Sunita | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7133--7141 | The scarcity of gold standard code-mixed to pure language parallel data makes it difficult to train translation models reliably. Prior work has addressed the paucity of parallel data with data augmentation techniques. Such methods rely heavily on external resources, making systems difficult to train and scale effectively for multiple languages. We present a simple yet highly effective two-stage back-translation based training scheme for adapting multilingual models to the task of code-mixed translation, which eliminates dependence on external resources. We show a substantial improvement in translation quality (measured through BLEU), beating existing prior work by up to +3.8 BLEU on code-mixed Hi$\rightarrow$En, Mr$\rightarrow$En, and Bn$\rightarrow$En tasks. On the LinCE Machine Translation leader board, we achieve the highest score for code-mixed Es$\rightarrow$En, beating the existing best baseline by +6.5 BLEU, and our own stronger baseline by +1.1 BLEU. | null | null | 10.18653/v1/2022.findings-emnlp.528 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,040
inproceedings | li-etal-2022-lpc | {LPC}: A Logits and Parameter Calibration Framework for Continual Learning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.529/ | Li, Xiaodi and Wang, Zhuoyi and Li, Dingcheng and Khan, Latifur and Thuraisingham, Bhavani | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7142--7155 | When the typical fine-tuning paradigm is executed on a continuous sequence of tasks, the model suffers from the catastrophic forgetting problem (i.e., the model tends to adjust old parameters according to the new knowledge, which leads to the loss of previously acquired concepts). Replay-based methods address this by accessing old data from extra storage and maintaining the parameters of old concepts, but they raise privacy issues and larger memory requirements. In this work, we aim to achieve sequential/continual learning of knowledge without accessing the old data. The core idea is to calibrate the parameters and logits (outputs) so that preserving old parameters and generalized learning on new concepts can be achieved simultaneously. Our proposed framework includes two major components, Logits Calibration (LC) and Parameter Calibration (PC). LC focuses on calibrating the learning of novel models with old models, and PC aims to preserve the parameters of old models. These two operations can maintain the old knowledge while learning new tasks without storing previous data. We conduct experiments on various scenarios of the GLUE (General Language Understanding Evaluation) benchmark. The experimental results show that our model achieves state-of-the-art performance in all scenarios. | null | null | 10.18653/v1/2022.findings-emnlp.529 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,041
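One common way to realize a logits-calibration objective of the kind the LC component describes is to distill the frozen old model's outputs while fitting the new task. The loss below is a sketch under that assumption; the temperature tau and mixing weight alpha are invented, and the paper's actual LC/PC formulation may differ.

```python
# Sketch of calibrating new-task logits against a frozen old model's logits.
# tau and alpha are illustrative; this is not the paper's exact objective.
import torch.nn.functional as F

def calibrated_loss(new_logits, old_logits, labels, alpha=0.5, tau=2.0):
    ce = F.cross_entropy(new_logits, labels)                # learn the new task
    kd = F.kl_div(F.log_softmax(new_logits / tau, dim=-1),  # stay close to old outputs
                  F.softmax(old_logits / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    return (1 - alpha) * ce + alpha * kd
```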
inproceedings | pikuliak-etal-2022-slovakbert | {S}lovak{BERT}: {S}lovak Masked Language Model | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.530/ | Pikuliak, Mat{\'u}{\v{s}} and Grivalsk{\'y}, {\v{S}}tefan and Kon{\^o}pka, Martin and Bl{\v{s}}t{\'a}k, Miroslav and Tamajka, Martin and Bachrat{\'y}, Viktor and Simko, Marian and Bal{\'a}{\v{z}}ik, Pavol and Trnka, Michal and Uhl{\'a}rik, Filip | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7156--7168 | We introduce a new Slovak masked language model called \textit{SlovakBERT}. To the best of our knowledge, this is the first paper discussing Slovak transformer-based language models. We evaluate our model on several NLP tasks and achieve state-of-the-art results. This evaluation is likewise the first attempt to establish a benchmark for Slovak language models. We publish the masked language model, as well as the fine-tuned models for part-of-speech tagging, sentiment analysis and semantic textual similarity. | null | null | 10.18653/v1/2022.findings-emnlp.530 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,042
inproceedings | zhang-etal-2022-efficient-zero | Efficient Zero-shot Event Extraction with Context-Definition Alignment | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.531/ | Zhang, Hongming and Yao, Wenlin and Yu, Dong | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7169--7179 | Event extraction (EE) is the task of identifying event mentions of interest from text. Conventional efforts mainly focus on the supervised setting. However, these supervised models cannot generalize to event types outside the pre-defined ontology. To fill this gap, many efforts have been devoted to the zero-shot EE problem. This paper follows the trend of modeling event-type semantics but moves one step further. We argue that using the static embedding of the event type name might not be enough because a single word can be ambiguous, and we need a sentence to define the type semantics accurately. To model the definition semantics, we use two separate transformer models to project the contextualized event mentions and corresponding definitions into the same embedding space and then minimize their embedding distance via contrastive learning. On top of that, we also propose a warming phase to help the model learn the minor differences between similar definitions. We name our approach Zero-shot Event extraction with Definition (ZED). Experiments on the MAVEN dataset show that our model significantly outperforms all previous zero-shot EE methods with fast inference speed due to the disjoint design. Further experiments also show that ZED can be easily applied to the few-shot setting when annotation is available and consistently outperforms baseline supervised methods. | null | null | 10.18653/v1/2022.findings-emnlp.531 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,043
inproceedings | jin-etal-2022-logical | Logical Fallacy Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.532/ | Jin, Zhijing and Lalwani, Abhinav and Vaidhya, Tejas and Shen, Xiaoyu and Ding, Yiwen and Lyu, Zhiheng and Sachan, Mrinmaya and Mihalcea, Rada and Schoelkopf, Bernhard | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7180--7198 | Reasoning is central to human intelligence. However, fallacious arguments are common, and some exacerbate problems such as spreading misinformation about climate change. In this paper, we propose the task of logical fallacy detection, and provide a new dataset (Logic) of logical fallacies generally found in text, together with an additional challenge set for detecting logical fallacies in climate change claims (LogicClimate). Detecting logical fallacies is a hard problem as the model must understand the underlying logical structure of the argument. We find that existing pretrained large language models perform poorly on this task. In contrast, we show that a simple structure-aware classifier outperforms the best language model by 5.46{\%} F1 scores on Logic and 4.51{\%} on LogicClimate. We encourage future work to explore this task since (a) it can serve as a new reasoning challenge for language models, and (b) it can have potential applications in tackling the spread of misinformation. Our dataset and code are available at https://github.com/causalNLP/logical-fallacy | null | null | 10.18653/v1/2022.findings-emnlp.532 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,044 |
inproceedings | feng-etal-2022-topic | Topic-Aware Response Generation in Task-Oriented Dialogue with Unstructured Knowledge Access | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.533/ | Feng, Yue and Lampouras, Gerasimos and Iacobacci, Ignacio | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7199--7211 | To alleviate the problem of structured databases' limited coverage, recent task-oriented dialogue systems incorporate external unstructured knowledge to guide the generation of system responses. However, these usually use word- or sentence-level similarities to detect the relevant knowledge context, which only partially capture topic-level relevance. In this paper, we examine how to better integrate topical information in knowledge-grounded task-oriented dialogue and propose {\textquotedblleft}Topic-Aware Response Generation{\textquotedblright} (TARG), an end-to-end response generation model. TARG incorporates multiple topic-aware attention mechanisms to derive the importance weighting scheme over dialogue utterances and external knowledge sources towards a better understanding of the dialogue history. Experimental results indicate that TARG achieves state-of-the-art performance in knowledge selection and response generation, outperforming previous state-of-the-art by 3.2, 3.6, and 4.2 points in EM, F1 and BLEU-4 respectively on Doc2Dial, and performing comparably with previous work on DSTC9; both being knowledge-grounded task-oriented dialogue datasets. | null | null | 10.18653/v1/2022.findings-emnlp.533 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,045
inproceedings | dai-etal-2022-revisiting | Revisiting Transformer-based Models for Long Document Classification | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.534/ | Dai, Xiang and Chalkidis, Ilias and Darkner, Sune and Elliott, Desmond | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7212--7230 | The recent literature in text classification is biased towards short text sequences (e.g., sentences or paragraphs). In real-world applications, multi-page multi-paragraph documents are common and they cannot be efficiently encoded by vanilla Transformer-based models. We compare different Transformer-based Long Document Classification (TrLDC) approaches that aim to mitigate the computational overhead of vanilla transformers to encode much longer text, namely sparse attention and hierarchical encoding methods. We examine several aspects of sparse attention (e.g., size of local attention window, use of global attention) and hierarchical (e.g., document splitting strategy) transformers on four document classification datasets covering different domains. We observe a clear benefit from being able to process longer text, and, based on our results, we derive practical advice on applying Transformer-based models to long document classification tasks. | null | null | 10.18653/v1/2022.findings-emnlp.534 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,046
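The hierarchical encoding strategy compared in this paper can be sketched as: split the token sequence into overlapping chunks, encode each chunk independently, and pool the chunk representations. The module below is a minimal illustration under that assumption; chunk length, stride, and max-pooling are hypothetical choices, and the stand-in encoder is any module mapping (batch, length) token ids to (batch, length, hidden) states.

```python
# Minimal sketch of hierarchical encoding for long document classification.
import torch
import torch.nn as nn

class HierarchicalClassifier(nn.Module):
    def __init__(self, encoder, hidden=768, num_labels=2, chunk_len=512, stride=384):
        super().__init__()
        self.encoder = encoder  # maps (B, L) ids -> (B, L, hidden)
        self.chunk_len, self.stride = chunk_len, stride
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids):
        # Split the document into overlapping chunks along the sequence axis.
        chunks = [input_ids[:, i:i + self.chunk_len]
                  for i in range(0, input_ids.size(1), self.stride)]
        # Encode each chunk and keep its first-token ([CLS]-style) vector.
        chunk_vecs = [self.encoder(c)[:, 0] for c in chunks]        # (B, hidden) each
        doc_vec = torch.stack(chunk_vecs, dim=1).max(dim=1).values  # pool over chunks
        return self.head(doc_vec)

# Toy usage with an embedding layer standing in for a transformer encoder:
model = HierarchicalClassifier(nn.Embedding(30522, 768))
logits = model(torch.randint(0, 30522, (2, 1200)))  # a 1200-token "document"
```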
inproceedings | cao-wang-2022-time | Time-aware Prompting for Text Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.535/ | Cao, Shuyang and Wang, Lu | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7231--7246 | In this paper, we study the effects of incorporating timestamps, such as document creation dates, into generation systems. Two types of time-aware prompts are investigated: (1) textual prompts that encode document timestamps in natural language sentences; and (2) linear prompts that convert timestamps into continuous vectors. To explore extrapolation to future data points, we further introduce a new data-to-text generation dataset, TempWikiBio, containing more than 4 million chronologically ordered revisions of biographical articles from English Wikipedia, each paired with structured personal profiles. Through data-to-text generation on TempWikiBio, text-to-text generation on the content transfer dataset, and summarization on XSum, we show that linear prompts on the encoder and textual prompts improve generation quality on all datasets. Despite having a smaller performance drop when testing on data drawn from a later time, linear prompts focus more on non-temporal information and are less sensitive to the given timestamps, according to human evaluations and sensitivity analyses. Meanwhile, textual prompts establish the association between the given timestamps and the output dates, yielding more factual temporal information in the output. | null | null | 10.18653/v1/2022.findings-emnlp.535 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,047
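A linear prompt of the kind investigated here can be sketched as a learned projection from a scalar timestamp to a few continuous vectors prepended to the token embeddings. The prompt count, hidden size, and year normalization below are illustrative assumptions.

```python
# Minimal sketch of a "linear prompt" that turns a timestamp into prompt vectors.
import torch
import torch.nn as nn

class LinearTimePrompt(nn.Module):
    def __init__(self, hidden=768, num_prompts=4):
        super().__init__()
        self.proj = nn.Linear(1, hidden * num_prompts)
        self.num_prompts, self.hidden = num_prompts, hidden

    def forward(self, token_embeds, years):
        # years: (B,) document creation years, rescaled to a small range.
        t = (years.float().unsqueeze(-1) - 2000.0) / 100.0
        prompts = self.proj(t).view(-1, self.num_prompts, self.hidden)
        return torch.cat([prompts, token_embeds], dim=1)  # prepend along sequence

# Toy usage: prepend 4 time-derived vectors to a 10-token embedded input.
out = LinearTimePrompt()(torch.randn(2, 10, 768), torch.tensor([1995, 2020]))
```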
inproceedings | korakakis-vlachos-2022-improving | Improving Scheduled Sampling with Elastic Weight Consolidation for Neural Machine Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.536/ | Korakakis, Michalis and Vlachos, Andreas | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7247--7258 | Despite strong performance in many sequence-to-sequence tasks, autoregressive models trained with maximum likelihood estimation suffer from exposure bias, i.e. the discrepancy between the ground-truth prefixes used during training and the model-generated prefixes used at inference time. Scheduled sampling is a simple and empirically successful approach which addresses this issue by incorporating model-generated prefixes into training. However, it has been argued that it is an inconsistent training objective leading to models ignoring the prefixes altogether. In this paper, we conduct systematic experiments and find that scheduled sampling, while it ameliorates exposure bias by increasing model reliance on the input sequence, worsens performance when the prefix at inference time is correct, a form of catastrophic forgetting. We propose to use Elastic Weight Consolidation to better balance mitigating exposure bias with retaining performance. Experiments on four IWSLT`14 and WMT`14 translation datasets demonstrate that our approach alleviates catastrophic forgetting and significantly outperforms maximum likelihood estimation and scheduled sampling baselines. | null | null | 10.18653/v1/2022.findings-emnlp.536 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,048 |
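The Elastic Weight Consolidation term used here to balance scheduled sampling against forgetting can be written compactly: a quadratic penalty anchoring parameters to their earlier values, weighted by a diagonal Fisher estimate. A minimal sketch, assuming the anchor parameters and Fisher diagonal have already been computed:

```python
# Minimal sketch of an Elastic Weight Consolidation penalty.
# anchor_params and fisher_diag are assumed precomputed dicts keyed by name.
def ewc_penalty(model, anchor_params, fisher_diag, lam=1.0):
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in anchor_params:
            penalty = penalty + (fisher_diag[name] * (p - anchor_params[name]) ** 2).sum()
    return lam / 2.0 * penalty  # added to the task loss before backward()
```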
inproceedings | matsubara-etal-2022-ensemble | Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.537/ | Matsubara, Yoshitomo and Soldaini, Luca and Lind, Eric and Moschitti, Alessandro | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7259--7272 | Large transformer models can highly improve Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: How can we make the AS2 models more accurate without significantly increasing their model complexity? To address the question, we propose a Multiple Heads Student architecture (named CERBERUS), an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. CERBERUS consists of two components: a stack of transformer layers that is used to encode inputs, and a set of ranking heads; unlike traditional distillation techniques, each of them is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of heterogeneous transformer models by using just a few extra parameters. We show the effectiveness of CERBERUS on three English datasets for AS2; our proposed approach outperforms all single-model distillations we consider, rivaling the state-of-the-art large AS2 models that have 2.7{\texttimes} more parameters and run 2.5{\texttimes} slower. Code for our model is available at https://github.com/amazon-research/wqa-cerberus. | null | null | 10.18653/v1/2022.findings-emnlp.537 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,049
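The multiple-heads student can be sketched as one shared encoder feeding several ranking heads, each regressed onto a different teacher's scores. The module and MSE distillation objective below are illustrative assumptions, not the paper's exact training recipe.

```python
# Minimal sketch of a multiple-heads student distilled from K teachers.
import torch
import torch.nn as nn

class MultiHeadStudent(nn.Module):
    def __init__(self, encoder, hidden=768, num_heads=3):
        super().__init__()
        self.encoder = encoder  # maps (B, L) ids -> (B, L, hidden)
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(num_heads)])

    def forward(self, input_ids):
        h = self.encoder(input_ids)[:, 0]                           # shared encoding
        return torch.cat([head(h) for head in self.heads], dim=-1)  # (B, K) scores

def distillation_loss(student_scores, teacher_scores):
    # teacher_scores: (B, K), one column per teacher; head k matches teacher k.
    return nn.functional.mse_loss(student_scores, teacher_scores)
```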
inproceedings | xiao-etal-2022-uncertainty | Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.538/ | Xiao, Yuxin and Liang, Paul Pu and Bhatt, Umang and Neiswanger, Willie and Salakhutdinov, Ruslan and Morency, Louis-Philippe | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7273--7284 | Pre-trained language models (PLMs) have gained increasing popularity due to their compelling prediction performance in diverse natural language processing (NLP) tasks. When formulating a PLM-based prediction pipeline for NLP tasks, it is also crucial for the pipeline to minimize the calibration error, especially in safety-critical applications. That is, the pipeline should reliably indicate when we can trust its predictions. In particular, there are various considerations behind the pipeline: (1) the choice and (2) the size of PLM, (3) the choice of uncertainty quantifier, (4) the choice of fine-tuning loss, and many more. Although prior work has looked into some of these considerations, they usually draw conclusions based on a limited scope of empirical studies. There still lacks a holistic analysis on how to compose a well-calibrated PLM-based prediction pipeline. To fill this void, we compare a wide range of popular options for each consideration based on three prevalent NLP classification tasks and the setting of domain shift. In response, we recommend the following: (1) use ELECTRA for PLM encoding, (2) use larger PLMs if possible, (3) use Temp Scaling as the uncertainty quantifier, and (4) use Focal Loss for fine-tuning. | null | null | 10.18653/v1/2022.findings-emnlp.538 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,050 |
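Temperature scaling, the recommended uncertainty quantifier, is simple enough to sketch in full: learn one scalar T on held-out logits by minimizing negative log-likelihood. The exp parameterization below is just a convenience to keep T positive.

```python
# Minimal sketch of post-hoc temperature scaling on held-out logits.
import torch

def fit_temperature(logits, labels, max_iter=50):
    """Learn a scalar T > 0 minimizing NLL of logits / T."""
    log_t = torch.zeros(1, requires_grad=True)  # T = exp(log_t) stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Toy usage: calibrate random 3-class logits.
T = fit_temperature(torch.randn(100, 3), torch.randint(0, 3, (100,)))
```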
inproceedings | feng-etal-2022-represent | How to Represent Context Better? An Empirical Study on Context Modeling for Multi-turn Response Selection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.539/ | Feng, Jiazhan and Tao, Chongyang and Liu, Chang and Yan, Rui and Zhao, Dongyan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7285--7298 | Building retrieval-based dialogue models that can predict appropriate responses based on the understanding of multi-turn context messages is a challenging problem. Early models usually concatenate all utterances or independently encode each dialogue turn, which may lead to an inadequate understanding of dialogue status. Although a few researchers have noticed the importance of context modeling in multi-turn response prediction, there is no systematic comparison to analyze how to model context effectively and no framework to unify those methods. In this paper, instead of configuring new architectures, we investigate how to improve existing models with a better context modeling method. Specifically, we heuristically summarize three categories of turn-aware context modeling strategies which model the context messages from the perspective of sequential relationship, local relationship, and query-aware manner respectively. A Turn-Aware Context Modeling (TACM) layer is explored to flexibly adapt and unify these context modeling strategies to several advanced response selection models. Evaluation results on three public data sets indicate that employing each individual context modeling strategy or multiple strategies can consistently improve the performance of existing models. | null | null | 10.18653/v1/2022.findings-emnlp.539 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,051 |
inproceedings | bhatnagar-etal-2022-chia | {CHIA}: {CH}oosing Instances to Annotate for Machine Translation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.540/ | Bhatnagar, Rajat and Ganesh, Ananya and Kann, Katharina | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7299--7315 | Neural machine translation (MT) systems have been shown to perform poorly on low-resource language pairs, for which large-scale parallel data is unavailable. Making the data annotation process faster and cheaper is therefore important to ensure equitable access to MT systems. To make optimal use of a limited annotation budget, we present CHIA (choosing instances to annotate), a method for selecting instances to annotate for machine translation. Using an existing multi-way parallel dataset of high-resource languages, we first identify instances, based on model training dynamics, that are most informative for training MT models for high-resource languages. We find that there are cross-lingual commonalities in instances that are useful for MT model training, which we use to identify instances that will be useful to train models on a new target language. Evaluating on 20 languages from two corpora, we show that training on instances selected using our method provides an average performance improvement of 1.59 BLEU over training on randomly selected instances of the same size. | null | null | 10.18653/v1/2022.findings-emnlp.540 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,052 |
inproceedings | guo-etal-2022-guiding | Guiding Neural Machine Translation with Semantic Kernels | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.541/ | Guo, Ping and Hu, Yue and Wei, Xiangpeng and Ren, Yubing and Li, Yunpeng and Xing, Luxi and Xie, Yuqiang | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7316--7327 | Machine translation has made great progress with the help of the auto-regressive decoding paradigm and the Transformer architecture. In this paradigm, though the encoder can obtain global source representations, the decoder can only use the translation history to determine the current word. Previous promising works attempted to address this issue by applying a draft or a fixed-length semantic embedding as target-side global information. However, these methods either degrade model efficiency or show limitations in expressing semantics. Motivated by Functional Equivalence Theory, we extract several semantic kernels from a source sentence, each of which can express one semantic segment of the original sentence. Together, these semantic kernels can capture global semantic information, and we project them into the target embedding space to guide target sentence generation. We further force our model to use semantic kernels at each decoding step through an adaptive mask algorithm. Empirical studies on various machine translation benchmarks show that our approach gains an improvement of approximately 1 BLEU point over the Transformer baseline on most benchmarks and runs about 1.7 times faster than previous works on average at inference time. | null | null | 10.18653/v1/2022.findings-emnlp.541 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,053
inproceedings | li-etal-2022-hismatch | {H}i{SM}atch: Historical Structure Matching based Temporal Knowledge Graph Reasoning | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.542/ | Li, Zixuan and Hou, Zhongni and Guan, Saiping and Jin, Xiaolong and Peng, Weihua and Bai, Long and Lyu, Yajuan and Li, Wei and Guo, Jiafeng and Cheng, Xueqi | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7328--7338 | A Temporal Knowledge Graph (TKG) is a sequence of KGs with respective timestamps, which adopts quadruples in the form of (\textit{subject}, \textit{relation}, \textit{object}, \textit{timestamp}) to describe dynamic facts. TKG reasoning has facilitated many real-world applications via answering such queries as (\textit{query entity}, \textit{query relation}, \textit{?}, \textit{future timestamp}) about future. This is actually a matching task between a query and candidate entities based on their historical structures, which reflect behavioral trends of the entities at different timestamps. In addition, recent KGs provide background knowledge of all the entities, which is also helpful for the matching. Thus, in this paper, we propose the \textbf{Hi}storical \textbf{S}tructure \textbf{Match}ing (\textbf{HiSMatch}) model. It applies two structure encoders to capture the semantic information contained in the historical structures of the query and candidate entities. Besides, it adopts another encoder to integrate the background knowledge into the model. TKG reasoning experiments on six benchmark datasets demonstrate the significant improvement of the proposed HiSMatch model, with up to 5.6{\%} performance improvement in MRR, compared to the state-of-the-art baselines. | null | null | 10.18653/v1/2022.findings-emnlp.542 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,054 |
inproceedings | lin-etal-2022-dependency | Dependency Parsing via Sequence Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.543/ | Lin, Boda and Yao, Zijun and Shi, Jiaxin and Cao, Shulin and Tang, Binghao and Li, Si and Luo, Yong and Li, Juanzi and Hou, Lei | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7339--7353 | Dependency parsing aims to extract the syntactic or semantic dependency structure of sentences. Existing methods for dependency parsing include transition-based, graph-based and sequence-to-sequence methods. These methods obtain excellent performance, and we note that they all belong to the labeling paradigm. Therefore, it may be valuable and interesting to explore the possibility of using a generative method to implement dependency parsing. In this paper, we propose to achieve Dependency Parsing (DP) via Sequence Generation (SG) by utilizing only a pre-trained language model without any auxiliary structures. We first explore different serialization strategies for converting parsing structures into sequences. Then we design dependency units and concatenate these units into the sequence for DPSG. We verify that DPSG is capable of parsing on widely used DP benchmarks, i.e., PTB, UD2.2, SDP15 and SemEval16. In addition, we also investigate the astonishing low-resource applicability of DPSG, which includes unsupervised cross-domain parsing on CODT and few-shot cross-task parsing on SDP15. Our research demonstrates that sequence generation is an effective method for dependency parsing. Our code is available. | null | null | 10.18653/v1/2022.findings-emnlp.543 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,055
inproceedings | ivgi-etal-2022-scaling | Scaling Laws Under the Microscope: Predicting Transformer Performance from Small Scale Experiments | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.544/ | Ivgi, Maor and Carmon, Yair and Berant, Jonathan | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7354--7371 | Neural scaling laws define a predictable relationship between a model`s parameter count and its performance after training in the form of a power law. However, most research to date has not explicitly investigated whether scaling laws can be used to accelerate model development. In this work, we perform such an empirical investigation, starting from models with as few as 10K parameters, and evaluate downstream performance across 9 language understanding tasks. We find that scaling laws emerge at finetuning time in some NLP tasks, and that they can also be exploited for debugging convergence when training large models. Moreover, for tasks where scaling laws exist, they can be used to predict the performance of larger models, which enables effective model selection. However, revealing scaling laws requires careful hyperparameter tuning and multiple runs for the purpose of uncertainty estimation, which incurs additional overhead, partially offsetting the computational benefits. | null | null | 10.18653/v1/2022.findings-emnlp.544 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,056
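Using a scaling law for model selection amounts to fitting a saturating power law to (parameter count, loss) pairs from small models and extrapolating to larger ones. A minimal sketch with invented data points:

```python
# Minimal sketch of fitting loss(N) = a * N^(-alpha) + c and extrapolating.
# The data points here are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    return a * n ** (-alpha) + c

n_params = np.array([1e4, 1e5, 1e6, 1e7])   # small-scale models
val_loss = np.array([4.2, 3.1, 2.4, 2.0])   # their validation losses

(a, alpha, c), _ = curve_fit(power_law, n_params, val_loss,
                             p0=[10.0, 0.1, 1.0], maxfev=10000)
print(f"alpha={alpha:.3f}, predicted loss at 1e8 params: {power_law(1e8, a, alpha, c):.2f}")
```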
inproceedings | bauer-etal-2022-analyzing | Analyzing the Limits of Self-Supervision in Handling Bias in Language | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.545/ | Bauer, Lisa and Gopalakrishnan, Karthik and Gella, Spandana and Liu, Yang and Bansal, Mohit and Hakkani-Tur, Dilek | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7372--7386 | Prompting inputs with natural language task descriptions has emerged as a popular mechanism to elicit reasonably accurate outputs from large-scale generative language models with little to no in-context supervision. This also helps gain insight into how well language models capture the semantics of a wide range of downstream tasks purely from self-supervised pre-training on massive corpora of unlabeled text. Such models have naturally also been exposed to a lot of undesirable content like racist and sexist language and there is only some work on awareness of models along these dimensions. In this paper, we define and comprehensively evaluate how well such language models capture the semantics of four tasks for bias: diagnosis, identification, extraction and rephrasing. We define three broad classes of task descriptions for these tasks: statement, question, and completion, with numerous lexical variants within each class. We study the efficacy of prompting for each task using these classes and the null task description across several decoding methods and few-shot examples. Our analyses indicate that language models are capable of performing these tasks to widely varying degrees across different bias dimensions, such as gender and political affiliation. We believe our work is an important step towards unbiased language models by quantifying the limits of current self-supervision objectives at accomplishing such sociologically challenging tasks. | null | null | 10.18653/v1/2022.findings-emnlp.545 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,057 |
inproceedings | liu-etal-2022-multiple | Multiple Instance Learning for Offensive Language Detection | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.546/ | Liu, Jiexi and Kong, Dehan and Huang, Longtao and Mao, Dinghui and Xue, Hui | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7387--7396 | Automatic offensive language detection has become a crucial issue in recent years. Existing research on this topic is usually based on a large amount of data annotated at the sentence level to train a robust model. However, sentence-level annotations are expensive in practice as the scenario expands, while there exists a large amount of natural labels from historical information on online platforms, such as reports and punishments. Notably, these natural labels are usually at the bag level, corresponding to whole documents (articles, user profiles, conversations, etc.). Therefore, in this study we aim to propose an approach capable of utilizing bag-level labeled data for offensive language detection. For this purpose, we formalize this task as a multiple instance learning (MIL) problem. We break down the design of existing MIL methods and propose a hybrid fusion MIL model with a mutual-attention mechanism. In order to verify the validity of the proposed method, we present two new bag-level labeled datasets for offensive language detection: OLID-bags and MINOR. Experimental results based on the proposed datasets demonstrate the effectiveness of the mutual-attention method at both the sentence level and the bag level. | null | null | 10.18653/v1/2022.findings-emnlp.546 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,058
inproceedings | brahman-etal-2022-grounded | Grounded Keys-to-Text Generation: Towards Factual Open-Ended Generation | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.547/ | Brahman, Faeze and Peng, Baolin and Galley, Michel and Rao, Sudha and Dolan, Bill and Chaturvedi, Snigdha and Gao, Jianfeng | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7397--7413 | Large pre-trained language models have recently enabled open-ended generation frameworks (e.g., prompt-to-text NLG) to tackle a variety of tasks going beyond the traditional data-to-text generation. While this framework is more general, it is under-specified and often leads to a lack of controllability, restricting its real-world usage. We propose a new grounded keys-to-text generation task: the task is to generate a factual description about an entity given a set of guiding keys and grounding passages. To address this task, we introduce a new dataset, called EntDeGen. Inspired by recent QA-based evaluation measures, we propose an automatic metric, MAFE, for the factual correctness of generated descriptions. Our EntDescriptor model is equipped with strong rankers to fetch helpful passages and generate entity descriptions. Experimental results show a good correlation (60.14) between our proposed metric and human judgments of factuality. Our rankers significantly improved the factual correctness of generated descriptions (15.95{\%} and 34.51{\%} relative gains in recall and precision). Finally, our ablation study highlights the benefit of combining keys and groundings. | null | null | 10.18653/v1/2022.findings-emnlp.547 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,059
inproceedings | kotarcic-etal-2022-human | Human-in-the-Loop Hate Speech Classification in a Multilingual Context | Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue | dec | 2022 | Abu Dhabi, United Arab Emirates | Association for Computational Linguistics | https://aclanthology.org/2022.findings-emnlp.548/ | Kotarcic, Ana and Hangartner, Dominik and Gilardi, Fabrizio and Kurer, Selina and Donnay, Karsten | Findings of the Association for Computational Linguistics: EMNLP 2022 | 7414--7442 | The shift of public debate to the digital sphere has been accompanied by a rise in online hate speech. While many promising approaches for hate speech classification have been proposed, studies often focus only on a single language, usually English, and do not address three key concerns: post-deployment performance, classifier maintenance and infrastructural limitations. In this paper, we introduce a new human-in-the-loop BERT-based hate speech classification pipeline and trace its development from initial data collection and annotation all the way to post-deployment. Our classifier, trained using data from our original corpus of over 422k examples, is specifically developed for the inherently multilingual setting of Switzerland; it achieves an F1 score of 80.5 and outperforms the currently best-performing BERT-based multilingual classifier by 5.8 F1 points in German and 3.6 F1 points in French. Our systematic evaluations over a 12-month period further highlight the vital importance of continuous, human-in-the-loop classifier maintenance to ensure robust hate speech classification post-deployment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,060
inproceedings | lane-bird-2022-finite | A Finite State Approach to Interactive Transcription | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.1/ | Lane, William and Bird, Steven | Proceedings of the first workshop on NLP applications to field linguistics | 1--10 | We describe a novel approach to transcribing morphologically complex, local, oral languages. The approach connects with local motivations for participating in language work which center on language learning, accessing the content of audio collections, and applying this knowledge in language revitalization and maintenance. We develop a constraint-based approach to interactive word completion, expressed using Optimality Theoretic constraints, implemented in a finite state transducer, and applied to an Indigenous language. We show that this approach suggests correct full word predictions on 57.9{\%} of the test utterances, and correct partial word predictions on 67.5{\%} of the test utterances. In total, 87{\%} of the test utterances receive full or partial word suggestions which serve to guide the interactive transcription process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,062
inproceedings | masis-etal-2022-corpus | Corpus-Guided Contrast Sets for Morphosyntactic Feature Detection in Low-Resource {E}nglish Varieties | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.2/ | Masis, Tessa and Neal, Anissa and Green, Lisa and O{'}Connor, Brendan | Proceedings of the first workshop on NLP applications to field linguistics | 11--25 | The study of language variation examines how language varies between and within different groups of speakers, shedding light on how we use language to construct identities and how social contexts affect language use. A common method is to identify instances of a certain linguistic feature - say, the zero copula construction - in a corpus, and analyze the feature`s distribution across speakers, topics, and other variables, to either gain a qualitative understanding of the feature`s function or systematically measure variation. In this paper, we explore the challenging task of automatic morphosyntactic feature detection in low-resource English varieties. We present a human-in-the-loop approach to generate and filter effective contrast sets via corpus-guided edits. We show that our approach improves feature detection for both Indian English and African American English, demonstrate how it can assist linguistic research, and release our fine-tuned models for use by other researchers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,063 |
inproceedings | kann-etal-2022-machine | Machine Translation Between High-resource Languages in a Language Documentation Setting | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.3/ | Kann, Katharina and Ebrahimi, Abteen and Stenzel, Kristine and Palmer, Alexis | Proceedings of the first workshop on NLP applications to field linguistics | 26--33 | Language documentation encompasses translation, typically into the dominant high-resource language in the region where the target language is spoken. To make data accessible to a broader audience, additional translation into other high-resource languages might be needed. Working within a project documenting Kotiria, we explore the extent to which state-of-the-art machine translation (MT) systems can support this second translation {--} in our case from Portuguese to English. This translation task is challenging for multiple reasons: (1) the data is out-of-domain with respect to the MT system`s training data, (2) much of the data is conversational, (3) existing translations include non-standard and uncommon expressions, often reflecting properties of the documented language, and (4) the data includes borrowings from other regional languages. Despite these challenges, existing MT systems perform at a usable level, though there is still room for improvement. We then conduct a qualitative analysis and suggest ways to improve MT between high-resource languages in a language documentation setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,064 |
inproceedings | zaitsev-minchenko-2022-automatic | Automatic Detection of Borrowings in Low-Resource Languages of the {C}aucasus: {A}ndic branch | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.4/ | Zaitsev, Konstantin and Minchenko, Anzhelika | Proceedings of the first workshop on NLP applications to field linguistics | 34--41 | Linguistic borrowings occur in all languages. Andic languages of the Caucasus have borrowings from different donor languages such as Russian, Arabic, and Persian. To automatically detect these borrowings, we propose a logistic regression model. The model was trained on a dataset which contains words in IPA from dictionaries of Andic languages. To improve the model`s quality, we compared TfIdf and Count vectorizers and chose the latter. Besides, we added new features to the model, extracted through analysis of the vectorizer features and with a language model. The model was evaluated by classification quality metrics (precision, recall and F1-score). The best average F1-score across all languages for words in IPA was about 0.78. Experiments showed that our model achieves good results not only with words in IPA but also with words in Cyrillic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,065
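The classifier this abstract describes (count-based character features fed to logistic regression) is straightforward to sketch with scikit-learn; the toy IPA-like strings and labels below are invented for illustration.

```python
# Minimal sketch of a borrowing detector: char n-gram counts + logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

words = ["kitab", "daftar", "t͡ʃaj", "bert͡ʃin", "miqʼi"]  # toy IPA strings
labels = [1, 1, 1, 0, 0]                                    # 1 = borrowing, 0 = native

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),   # character 1-3 grams
    LogisticRegression(max_iter=1000),
)
clf.fit(words, labels)
print(clf.predict(["kalam"]))  # classify an unseen word
```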
inproceedings | brochhagen-boleda-2022-interaction-cognitive | The interaction between cognitive ease and informativeness shapes the lexicons of natural languages | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.5/ | Brochhagen, Thomas and Boleda, Gemma | Proceedings of the first workshop on NLP applications to field linguistics | 42--44 | It is common for languages to express multiple meanings with the same word, a phenomenon known as colexification. For instance, the meanings FINGER and TOE colexify in the word {\textquotedblleft}dedo{\textquotedblright} in Spanish, while they do not colexify in English. Colexification has been suggested to follow universal constraints. In particular, previous work has shown that related meanings are more prone to colexify. This tendency has been explained in terms of the cognitive pressure for ease, since expressing related meanings with the same word makes lexicons easier to learn and use. The present study examines the interplay between this pressure and a competing universal constraint, the functional pressure for languages to maximize informativeness. We hypothesize that meanings are more likely to colexify if they are related (fostering ease), but not so related as to become confusable and cause misunderstandings (fostering informativeness). We find support for this principle in data from over 1200 languages and 1400 meanings. Our results thus suggest that universal principles shape the lexicons of natural languages. More broadly, they contribute to the growing body of evidence suggesting that languages evolve to strike a balance between competing functional and cognitive pressures. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,066 |
inproceedings | dale-2022-first | The first neural machine translation system for the {E}rzya language | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.6/ | Dale, David | Proceedings of the first workshop on NLP applications to field linguistics | 45--53 | We present the first neural machine translation system for translation between the endangered Erzya language and Russian and the dataset collected by us to train and evaluate it. The BLEU scores are 17 and 19 for translation to Erzya and Russian respectively, and more than half of the translations are rated as acceptable by native speakers. We also adapt our model to translate between Erzya and 10 other languages, but without additional parallel data, the quality on these directions remains low. We release the translation models along with the collected text corpus, a new language identification model, and a multilingual sentence encoder adapted for the Erzya language. These resources will be available at \url{https://github.com/slone-nlp/myv-nmt}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,067 |
inproceedings | kratochvil-morgado-da-costa-2022-abui | {A}bui {W}ordnet: Using a Toolbox Dictionary to develop a wordnet for a low-resource language | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.7/ | Kratochvil, Frantisek and Morgado da Costa, Lu{\'i}s | Proceedings of the first workshop on NLP applications to field linguistics | 54--63 | This paper describes a procedure to link a Toolbox dictionary of a low-resource language to correct synsets, generating a new wordnet. We introduce a bootstrapping technique utilising the information in the gloss fields (English, national, and regional) to generate sense candidates using a naive algorithm based on multilingual sense intersection. We show that this technique is quite effective when glosses are available in more than one language. Our technique complements the previous work by Rosman et al. (2014) which linked the SIL Semantic Domains to wordnet senses. Through this work we have created a small, fully hand-checked wordnet for Abui, containing over 1,400 concepts and 3,600 senses. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,068 |
inproceedings | schwartz-etal-2022-encode | How to encode arbitrarily complex morphology in word embeddings, no corpus needed | Serikov, Oleg and Voloshina, Ekaterina and Postnikova, Anna and Klyachko, Elena and Neminova, Ekaterina and Vylomova, Ekaterina and Shavrina, Tatiana and Ferrand, Eric Le and Malykh, Valentin and Tyers, Francis and Arkhangelskiy, Timofey and Mikhailov, Vladislav and Fenogenova, Alena | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.fieldmatters-1.8/ | Schwartz, Lane and Haley, Coleman and Tyers, Francis | Proceedings of the first workshop on NLP applications to field linguistics | 64--76 | In this paper, we present a straightforward technique for constructing interpretable word embeddings from morphologically analyzed examples (such as interlinear glosses) for all of the world`s languages. Currently, fewer than 300-400 languages out of approximately 7000 have more than a trivial amount of digitized texts; of those, between 100-200 languages (most in the Indo-European language family) have enough text data for BERT embeddings of reasonable quality to be trained. The word embeddings in this paper are explicitly designed to be both linguistically interpretable and fully capable of handling the broad variety found in the world`s diverse set of 7000 languages, regardless of corpus size or morphological characteristics. We demonstrate the applicability of our representation through examples drawn from a typologically diverse set of languages whose morphology includes prefixes, suffixes, infixes, circumfixes, templatic morphemes, derivational morphemes, inflectional morphemes, and reduplication. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 27,069
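The zaitsev-minchenko-2022-automatic record above describes a character-level logistic regression over IPA word forms for borrowing detection. The sketch below is a minimal, hypothetical illustration of that kind of classifier using scikit-learn's CountVectorizer and LogisticRegression; the IPA words, labels, and n-gram range are invented placeholders, and the paper's additional language-model-derived features are not reproduced.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: IPA word forms labelled 1 (borrowing) or 0 (native).
# These toy examples stand in for the Andic dictionary data used in the paper.
words = ["kitab", "traktor", "ʃkola", "t͡ʃanta", "ʁwani", "beʁa", "mikir", "ʕali"]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Character n-gram counts feed a logistic regression, mirroring the
# Count-vectorizer setup described in the abstract (the n-gram range is a guess).
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(words, labels)

# Probability that an unseen (also hypothetical) word form is a borrowing.
for word in ["kalam", "t͡sa"]:
    print(word, model.predict_proba([word])[0][1])

predict_proba here returns the probability of the borrowing class; the paper itself reports precision, recall, and F1 rather than raw probabilities.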