entry_type (stringclasses, 4 values) | citation_key (stringlengths, 10-110) | title (stringlengths, 6-276 ⌀) | editor (stringclasses, 723 values) | month (stringclasses, 69 values) | year (stringdate, 1963-01-01 00:00:00 to 2022-01-01 00:00:00) | address (stringclasses, 202 values) | publisher (stringclasses, 41 values) | url (stringlengths, 34-62) | author (stringlengths, 6-2.07k ⌀) | booktitle (stringclasses, 861 values) | pages (stringlengths, 1-12 ⌀) | abstract (stringlengths, 302-2.4k) | journal (stringclasses, 5 values) | volume (stringclasses, 24 values) | doi (stringlengths, 20-39 ⌀) | n (stringclasses, 3 values) | wer (stringclasses, 1 value) | uas (null) | language (stringclasses, 3 values) | isbn (stringclasses, 34 values) | recall (null) | number (stringclasses, 8 values) | a (null) | b (null) | c (null) | k (null) | f1 (stringclasses, 4 values) | r (stringclasses, 2 values) | mci (stringclasses, 1 value) | p (stringclasses, 2 values) | sd (stringclasses, 1 value) | female (stringclasses, 0 values) | m (stringclasses, 0 values) | food (stringclasses, 1 value) | f (stringclasses, 1 value) | note (stringclasses, 20 values) | __index_level_0__ (int64, 22k-106k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | lee-etal-2022-good | Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource {NER} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.192/ | Lee, Dong-Ho and Kadakia, Akshen and Tan, Kangmin and Agarwal, Mahak and Feng, Xinyu and Shibuya, Takashi and Mitani, Ryosuke and Sekiya, Toshiyuki and Pujara, Jay and Ren, Xiang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2687--2700 | Recent advances in prompt-based learning have shown strong results on few-shot text classification by using cloze-style templates. Similar attempts have been made on named entity recognition (NER) which manually design templates to predict entity types for every text span in a sentence. However, such methods may suffer from error propagation induced by entity span detection, high cost due to enumeration of all possible text spans, and omission of inter-dependencies among token labels in a sentence. Here we present a simple demonstration-based learning method for NER, which lets the input be prefaced by task demonstrations for in-context learning. We perform a systematic study on demonstration strategy regarding what to include (entity examples, with or without surrounding context), how to select the examples, and what templates to use. Results on in-domain learning and domain adaptation show that the model`s performance in low-resource settings can be largely improved with a suitable demonstration strategy (e.g., a 4-17{\%} improvement on 25 train instances). We also find that good demonstration can save many labeled examples and consistency in demonstration contributes to better performance. | null | null | 10.18653/v1/2022.acl-long.192 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,858 |
inproceedings | fu-etal-2022-contextual | Contextual Representation Learning beyond Masked Language Modeling | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.193/ | Fu, Zhiyi and Zhou, Wangchunshu and Xu, Jingjing and Zhou, Hao and Li, Lei | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2701--2714 | Currently, masked language modeling (e.g., BERT) is the prime choice to learn contextualized representations. Due to the pervasiveness, it naturally raises an interesting question: how do masked language models (MLMs) learn contextual representations? In this work, we analyze the learning dynamics of MLMs and find that it adopts sampled embeddings as anchors to estimate and inject contextual semantics to representations, which limits the efficiency and effectiveness of MLMs. To address these problems, we propose TACO, a simple yet effective representation learning approach to directly model global semantics. To be specific, TACO extracts and aligns contextual semantics hidden in contextualized representations to encourage models to attend global semantics when generating contextualized representations. Experiments on the GLUE benchmark show that TACO achieves up to 5x speedup and up to 1.2 points average improvement over MLM. | null | null | 10.18653/v1/2022.acl-long.193 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,859 |
inproceedings | zhang-etal-2022-efficient | Efficient Hyper-parameter Search for Knowledge Graph Embedding | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.194/ | Zhang, Yongqi and Zhou, Zhanke and Yao, Quanming and Li, Yong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2715--2735 | While hyper-parameters (HPs) are important for knowledge graph (KG) learning, existing methods fail to search them efficiently. To solve this problem, we first analyze the properties of different HPs and measure the transfer ability from small subgraph to the full graph. Based on the analysis, we propose an efficient two-stage search algorithm KGTuner, which efficiently explores HP configurations on small subgraph at the first stage and transfers the top-performed configurations for fine-tuning on the large full graph at the second stage. Experiments show that our method can consistently find better HPs than the baseline algorithms within the same time budget, which achieves 9.1{\%} average relative improvement for four embedding models on the large-scale KGs in open graph benchmark. Our code is released in \url{https://github.com/AutoML-Research/KGTuner}. | null | null | 10.18653/v1/2022.acl-long.194 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,860 |
inproceedings | ning-etal-2022-meta | A Meta-framework for Spatiotemporal Quantity Extraction from Text | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.195/ | Ning, Qiang and Zhou, Ben and Wu, Hao and Peng, Haoruo and Fan, Chuchu and Gardner, Matt | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2736--2749 | News events are often associated with quantities (e.g., the number of COVID-19 patients or the number of arrests in a protest), and it is often important to extract their type, time, and location from unstructured text in order to analyze these quantity events. This paper thus formulates the NLP problem of spatiotemporal quantity extraction, and proposes the first meta-framework for solving it. This meta-framework contains a formalism that decomposes the problem into several information extraction tasks, a shareable crowdsourcing pipeline, and transformer-based baseline models. We demonstrate the meta-framework in three domains{---}the COVID-19 pandemic, Black Lives Matter protests, and 2020 California wildfires{---}to show that the formalism is general and extensible, the crowdsourcing pipeline facilitates fast and high-quality data annotation, and the baseline system can handle spatiotemporal quantity extraction well enough to be practically useful. We release all resources for future research on this topic at \url{https://github.com/steqe}. | null | null | 10.18653/v1/2022.acl-long.195 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,861 |
inproceedings | jin-etal-2022-leveraging | Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-Modal Knowledge Transfer | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.196/ | Jin, Woojeong and Lee, Dong-Ho and Zhu, Chenguang and Pujara, Jay and Ren, Xiang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2750--2762 | Pre-trained language models are still far from human performance in tasks that need understanding of properties (e.g. appearance, measurable quantity) and affordances of everyday objects in the real world since the text lacks such information due to reporting bias. In this work, we study whether integrating visual knowledge into a language model can fill the gap. We investigate two types of knowledge transfer: (1) \textit{text knowledge transfer using image captions that may contain enriched visual knowledge and (2) \textit{cross-modal knowledge transfer} using both images and captions with vision-language training objectives.On 5 downstream tasks that may need visual knowledge to solve the problem, we perform extensive empirical comparisons over the presented objectives.Our experiments show that visual knowledge transfer can improve performance in both low-resource and fully supervised settings.} | null | null | 10.18653/v1/2022.acl-long.196 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,862 |
inproceedings | jin-etal-2022-good | A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.197/ | Jin, Woojeong and Cheng, Yu and Shen, Yelong and Chen, Weizhu and Ren, Xiang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2763--2775 | Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed. To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FewVLM, relatively smaller than recent few-shot learners. For FewVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM).Furthermore, we analyze the effect of diverse prompts for few-shot tasks. Experimental results on VQA show that FewVLM with prompt-based learning outperforms Frozen which is 31x larger than FewVLM by 18.2{\%} point and achieves comparable results to a 246x larger model, PICa.In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at \url{https://github.com/woojeongjin/FewVLM} | null | null | 10.18653/v1/2022.acl-long.197 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,863 |
inproceedings | qin-joty-2022-continual | Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.198/ | Qin, Chengwei and Joty, Shafiq | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2776--2789 | Existing continual relation learning (CRL) methods rely on plenty of labeled training data for learning a new task, which can be hard to acquire in real scenario as getting large and representative labeled data is often expensive and time-consuming. It is therefore necessary for the model to learn novel relational patterns with very few labeled data while avoiding catastrophic forgetting of previous task knowledge. In this paper, we formulate this challenging yet practical problem as continual few-shot relation learning (CFRL). Based on the finding that learning for new emerging few-shot tasks often results in feature distributions that are incompatible with previous tasks' learned distributions, we propose a novel method based on embedding space regularization and data augmentation. Our method generalizes to new few-shot tasks and avoids catastrophic forgetting of previous tasks by enforcing extra constraints on the relational embeddings and by adding extra relevant data in a self-supervised manner. With extensive experiments we demonstrate that our method can significantly outperform previous state-of-the-art methods in CFRL task settings. | null | null | 10.18653/v1/2022.acl-long.198 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,864 |
inproceedings | li-etal-2022-variational | Variational Graph Autoencoding as Cheap Supervision for {AMR} Coreference Resolution | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.199/ | Li, Irene and Song, Linfeng and Xu, Kun and Yu, Dong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2790--2800 | Coreference resolution over semantic graphs like AMRs aims to group the graph nodes that represent the same entity. This is a crucial step for making document-level formal semantic representations. With annotated data on AMR coreference resolution, deep learning approaches have recently shown great potential for this task, yet they are usually data hunger and annotations are costly. We propose a general pretraining method using variational graph autoencoder (VGAE) for AMR coreference resolution, which can leverage any general AMR corpus and even automatically parsed AMR data. Experiments on benchmarks show that the pretraining approach achieves performance gains of up to 6{\%} absolute F1 points. Moreover, our model significantly improves on the previous state-of-the-art model by up to 11{\%} F1. | null | null | 10.18653/v1/2022.acl-long.199 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,865 |
inproceedings | zhang-etal-2022-identifying | Identifying {C}hinese Opinion Expressions with Extremely-Noisy Crowdsourcing Annotations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.200/ | Zhang, Xin and Xu, Guangwei and Sun, Yueheng and Zhang, Meishan and Wang, Xiaobin and Zhang, Min | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2801--2813 | Recent works of opinion expression identification (OEI) rely heavily on the quality and scale of the manually-constructed training corpus, which could be extremely difficult to satisfy. Crowdsourcing is one practical solution for this problem, aiming to create a large-scale but quality-unguaranteed corpus. In this work, we investigate Chinese OEI with extremely-noisy crowdsourcing annotations, constructing a dataset at a very low cost. Following Zhang el al. (2021), we train the annotator-adapter model by regarding all annotations as gold-standard in terms of crowd annotators, and test the model by using a synthetic expert, which is a mixture of all annotators. As this annotator-mixture for testing is never modeled explicitly in the training phase, we propose to generate synthetic training samples by a pertinent mixup strategy to make the training and testing highly consistent. The simulation experiments on our constructed dataset show that crowdsourcing is highly promising for OEI, and our proposed annotator-mixup can further enhance the crowdsourcing modeling. | null | null | 10.18653/v1/2022.acl-long.200 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,866 |
inproceedings | saxena-etal-2022-sequence | Sequence-to-Sequence Knowledge Graph Completion and Question Answering | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.201/ | Saxena, Apoorv and Kochsiek, Adrian and Gemulla, Rainer | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2814--2828 | Knowledge graph embedding (KGE) models represent each entity and relation of a knowledge graph (KG) with low-dimensional embedding vectors. These methods have recently been applied to KG link prediction and question answering over incomplete KGs (KGQA). KGEs typically create an embedding for each entity in the graph, which results in large model sizes on real-world graphs with millions of entities. For downstream tasks these atomic entity representations often need to be integrated into a multi stage pipeline, limiting their utility. We show that an off-the-shelf encoder-decoder Transformer model can serve as a scalable and versatile KGE model obtaining state-of-the-art results for KG link prediction and incomplete KG question answering. We achieve this by posing KG link prediction as a sequence-to-sequence task and exchange the triple scoring approach taken by prior KGE methods with autoregressive decoding. Such a simple but powerful method reduces the model size up to 98{\%} compared to conventional KGE models while keeping inference time tractable. After finetuning this model on the task of KGQA over incomplete KGs, our approach outperforms baselines on multiple large-scale datasets without extensive hyperparameter tuning. | null | null | 10.18653/v1/2022.acl-long.201 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,867 |
inproceedings | bao-etal-2022-learning | Learning to Mediate Disparities Towards Pragmatic Communication | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.202/ | Bao, Yuwei and Ghosh, Sayan and Chai, Joyce | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2829--2842 | Human communication is a collaborative process. Speakers, on top of conveying their own intent, adjust the content and language expressions by taking the listeners into account, including their knowledge background, personalities, and physical capabilities. Towards building AI agents with similar abilities in language communication, we propose a novel rational reasoning framework, Pragmatic Rational Speaker (PRS), where the speaker attempts to learn the speaker-listener disparity and adjust the speech accordingly, by adding a light-weighted disparity adjustment layer into working memory on top of speaker`s long-term memory system. By fixing the long-term memory, the PRS only needs to update its working memory to learn and adapt to different types of listeners. To validate our framework, we create a dataset that simulates different types of speaker-listener disparities in the context of referential games. Our empirical results demonstrate that the PRS is able to shift its output towards the language that listeners are able to understand, significantly improve the collaborative task outcome, and learn the disparity more efficiently than joint training. | null | null | 10.18653/v1/2022.acl-long.202 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,868 |
inproceedings | gao-callan-2022-unsupervised | Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.203/ | Gao, Luyu and Callan, Jamie | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2843--2853 | Recent research demonstrates the effectiveness of using fine-tuned language models (LM) for dense retrieval. However, dense retrievers are hard to train, typically requiring heavily engineered fine-tuning pipelines to realize their full potential. In this paper, we identify and address two underlying problems of dense retrievers: i) fragility to training data noise and ii) requiring large batches to robustly learn the embedding space. We use the recently proposed Condenser pre-training architecture, which learns to condense information into the dense vector through LM pre-training. On top of it, we propose coCondenser, which adds an unsupervised corpus-level contrastive loss to warm up the passage embedding space. Experiments on MS-MARCO, Natural Question, and Trivia QA datasets show that coCondenser removes the need for heavy data engineering such as augmentation, synthesis, or filtering, and the need for large batch training. It shows comparable performance to RocketQA, a state-of-the-art, heavily engineered system, using simple small batch fine-tuning. | null | null | 10.18653/v1/2022.acl-long.203 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,869 |
inproceedings | sun-etal-2022-multimodal | Multimodal Dialogue Response Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.204/ | Sun, Qingfeng and Wang, Yujing and Xu, Can and Zheng, Kai and Yang, Yaming and Hu, Huang and Xu, Fei and Zhang, Jessica and Geng, Xiubo and Jiang, Daxin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2854--2866 | Responsing with image has been recognized as an important capability for an intelligent conversational agent. Yet existing works only focus on exploring the multimodal dialogue models which depend on retrieval-based methods, but neglecting generation methods. To fill in the gaps, we first present a new task: multimodal dialogue response generation (MDRG) - given the dialogue history, one model needs to generate a text sequence or an image as response. Learning such a MDRG model often requires multimodal dialogues containing both texts and images which are difficult to obtain. Motivated by the challenge in practice, we consider MDRG under a natural assumption that only limited training examples are available. In such a low-resource setting, we devise a novel conversational agent, Divter, in order to isolate parameters that depend on multimodal dialogues from the entire generation model. By this means, the major part of the model can be learned from a large number of text-only dialogues and text-image pairs respectively, then the whole parameters can be well fitted using the limited training examples. Extensive experiments demonstrate our method achieves state-of-the-art results in both automatic and human evaluation, and can generate informative text and high-resolution image responses. | null | null | 10.18653/v1/2022.acl-long.204 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,870 |
inproceedings | niu-etal-2022-cake | {CAKE}: A Scalable Commonsense-Aware Framework For Multi-View Knowledge Graph Completion | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.205/ | Niu, Guanglin and Li, Bo and Zhang, Yongfei and Pu, Shiliang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2867--2877 | Knowledge graphs store a large number of factual triples while they are still incomplete, inevitably. The previous knowledge graph completion (KGC) models predict missing links between entities merely relying on fact-view data, ignoring the valuable commonsense knowledge. The previous knowledge graph embedding (KGE) techniques suffer from invalid negative sampling and the uncertainty of fact-view link prediction, limiting KGC`s performance. To address the above challenges, we propose a novel and scalable Commonsense-Aware Knowledge Embedding (CAKE) framework to automatically extract commonsense from factual triples with entity concepts. The generated commonsense augments effective self-supervision to facilitate both high-quality negative sampling (NS) and joint commonsense and fact-view link prediction. Experimental results on the KGC task demonstrate that assembling our framework could enhance the performance of the original KGE models, and the proposed commonsense-aware NS module is superior to other NS techniques. Besides, our proposed framework could be easily adaptive to various KGE models and explain the predicted results. | null | null | 10.18653/v1/2022.acl-long.205 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,871 |
inproceedings | zhou-etal-2022-confidence | Confidence Based Bidirectional Global Context Aware Training Framework for Neural Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.206/ | Zhou, Chulun and Meng, Fandong and Zhou, Jie and Zhang, Min and Wang, Hongji and Su, Jinsong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2878--2889 | Most dominant neural machine translation (NMT) models are restricted to make predictions only according to the local context of preceding words in a left-to-right manner. Although many previous studies try to incorporate global information into NMT models, there still exist limitations on how to effectively exploit bidirectional global context. In this paper, we propose a Confidence Based Bidirectional Global Context Aware (CBBGCA) training framework for NMT, where the NMT model is jointly trained with an auxiliary conditional masked language model (CMLM). The training consists of two stages: (1) multi-task joint training; (2) confidence based knowledge distillation. At the first stage, by sharing encoder parameters, the NMT model is additionally supervised by the signal from the CMLM decoder that contains bidirectional global contexts. Moreover, at the second stage, using the CMLM as teacher, we further pertinently incorporate bidirectional global context to the NMT model on its unconfidently-predicted target words via knowledge distillation. Experimental results show that our proposed CBBGCA training framework significantly improves the NMT model by +1.02, +1.30 and +0.57 BLEU scores on three large-scale translation datasets, namely WMT`14 English-to-German, WMT`19 Chinese-to-English and WMT`14 English-to-French, respectively. | null | null | 10.18653/v1/2022.acl-long.206 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,872 |
inproceedings | liu-etal-2022-brio | {BRIO}: Bringing Order to Abstractive Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.207/ | Liu, Yixin and Liu, Pengfei and Radev, Dragomir and Neubig, Graham | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2890--2903 | Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary. This assumption may lead to performance degradation during inference, where the model needs to compare several system-generated (candidate) summaries that have deviated from the reference summary. To address this problem, we propose a novel training paradigm which assumes a non-deterministic distribution so that different candidate summaries are assigned probability mass according to their quality. Our method achieves a new state-of-the-art result on the CNN/DailyMail (47.78 ROUGE-1) and XSum (49.07 ROUGE-1) datasets. Further analysis also shows that our model can estimate probabilities of candidate summaries that are more correlated with their level of quality. | null | null | 10.18653/v1/2022.acl-long.207 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,873 |
inproceedings | ai-fang-2022-leveraging | Leveraging Relaxed Equilibrium by Lazy Transition for Sequence Modeling | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.208/ | Ai, Xi and Fang, Bin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2904--2924 | In sequence modeling, certain tokens are usually less ambiguous than others, and representations of these tokens require fewer refinements for disambiguation. However, given the nature of attention-based models like Transformer and UT (universal transformer), all tokens are equally processed towards depth. Inspired by the equilibrium phenomenon, we present a lazy transition, a mechanism to adjust the significance of iterative refinements for each token representation. Our lazy transition is deployed on top of UT to build LT (lazy transformer), where all tokens are processed unequally towards depth. Eventually, LT is encouraged to oscillate around a \textit{relaxed} equilibrium. Our experiments show that LT outperforms baseline models on several tasks of machine translation, pre-training, Learning to Execute, and LAMBADA. | null | null | 10.18653/v1/2022.acl-long.208 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,874 |
inproceedings | castro-etal-2022-fiber | {FIBER}: Fill-in-the-Blanks as a Challenging Video Understanding Evaluation Framework | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.209/ | Castro, Santiago and Wang, Ruoyao and Huang, Pingxuan and Stewart, Ian and Ignat, Oana and Liu, Nan and Stroud, Jonathan and Mihalcea, Rada | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2925--2940 | We propose fill-in-the-blanks as a video understanding evaluation framework and introduce FIBER {--} a novel dataset consisting of 28,000 videos and descriptions in support of this evaluation framework. The fill-in-the-blanks setting tests a model`s understanding of a video by requiring it to predict a masked noun phrase in the caption of the video, given the video and the surrounding text. The FIBER benchmark does not share the weaknesses of the current state-of-the-art language-informed video understanding tasks, namely: (1) video question answering using multiple-choice questions, where models perform relatively well because they exploit linguistic biases in the task formulation, thus making our framework challenging for the current state-of-the-art systems to solve; and (2) video captioning, which relies on an open-ended evaluation framework that is often inaccurate because system answers may be perceived as incorrect if they differ in form from the ground truth. The FIBER dataset and our code are available at \url{https://lit.eecs.umich.edu/fiber/}. | null | null | 10.18653/v1/2022.acl-long.209 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,875 |
inproceedings | wang-etal-2022-kenmesh | {K}en{M}e{SH}: Knowledge-enhanced End-to-end Biomedical Text Labelling | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.210/ | Wang, Xindi and Mercer, Robert and Rudzicz, Frank | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2941--2951 | Currently, Medical Subject Headings (MeSH) are manually assigned to every biomedical article published and subsequently recorded in the PubMed database to facilitate retrieving relevant information. With the rapid growth of the PubMed database, large-scale biomedical document indexing becomes increasingly important. MeSH indexing is a challenging task for machine learning, as it needs to assign multiple labels to each article from an extremely large hierachically organized collection. To address this challenge, we propose KenMeSH, an end-to-end model that combines new text features and a dynamic knowledge-enhanced mask attention that integrates document features with MeSH label hierarchy and journal correlation features to index MeSH terms. Experimental results show the proposed method achieves state-of-the-art performance on a number of measures. | null | null | 10.18653/v1/2022.acl-long.210 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,876 |
inproceedings | svikhnushina-etal-2022-taxonomy | A Taxonomy of Empathetic Questions in Social Dialogs | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.211/ | Svikhnushina, Ekaterina and Voinea, Iuliana and Welivita, Anuradha and Pu, Pearl | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2952--2973 | Effective question-asking is a crucial component of a successful conversational chatbot. It could help the bots manifest empathy and render the interaction more engaging by demonstrating attention to the speaker`s emotions. However, current dialog generation approaches do not model this subtle emotion regulation technique due to the lack of a taxonomy of questions and their purpose in social chitchat. To address this gap, we have developed an empathetic question taxonomy (EQT), with special attention paid to questions' ability to capture communicative acts and their emotion-regulation intents. We further design a crowd-sourcing task to annotate a large subset of the EmpatheticDialogues dataset with the established labels. We use the crowd-annotated data to develop automatic labeling tools and produce labels for the whole dataset. Finally, we employ information visualization techniques to summarize co-occurrences of question acts and intents and their role in regulating interlocutor`s emotion. These results reveal important question-asking strategies in social dialogs. The EQT classification scheme can facilitate computational analysis of questions in datasets. More importantly, it can inform future efforts in empathetic question generation using neural or hybrid methods. | null | null | 10.18653/v1/2022.acl-long.211 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,877 |
inproceedings | chen-etal-2022-enhanced | Enhanced Multi-Channel Graph Convolutional Network for Aspect Sentiment Triplet Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.212/ | Chen, Hao and Zhai, Zepeng and Feng, Fangxiang and Li, Ruifan and Wang, Xiaojie | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2974--2985 | Aspect Sentiment Triplet Extraction (ASTE) is an emerging sentiment analysis task. Most of the existing studies focus on devising a new tagging scheme that enables the model to extract the sentiment triplets in an end-to-end fashion. However, these methods ignore the relations between words for ASTE task. In this paper, we propose an Enhanced Multi-Channel Graph Convolutional Network model (EMC-GCN) to fully utilize the relations between words. Specifically, we first define ten types of relations for ASTE task, and then adopt a biaffine attention module to embed these relations as an adjacent tensor between words in a sentence. After that, our EMC-GCN transforms the sentence into a multi-channel graph by treating words and the relation adjacent tensor as nodes and edges, respectively. Thus, relation-aware node representations can be learnt. Furthermore, we consider diverse linguistic features to enhance our EMC-GCN model. Finally, we design an effective refining strategy on EMC-GCN for word-pair representation refinement, which considers the implicit results of aspect and opinion extraction when determining whether word pairs match or not. Extensive experimental results on the benchmark datasets demonstrate that the effectiveness and robustness of our proposed model, which outperforms state-of-the-art methods significantly. | null | null | 10.18653/v1/2022.acl-long.212 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,878 |
inproceedings | das-etal-2022-prototex | {P}roto{TE}x: Explaining Model Decisions with Prototype Tensors | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.213/ | Das, Anubrata and Gupta, Chitrank and Kovatchev, Venelin and Lease, Matthew and Li, Junyi Jessy | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2986--2997 | We present ProtoTEx, a novel white-box NLP classification architecture based on prototype networks (Li et al., 2018). ProtoTEx faithfully explains model decisions based on prototype tensors that encode latent clusters of training examples. At inference time, classification decisions are based on the distances between the input text and the prototype tensors, explained via the training examples most similar to the most influential prototypes. We also describe a novel interleaved training algorithm that effectively handles classes characterized by ProtoTEx indicative features. On a propaganda detection task, ProtoTEx accuracy matches BART-large and exceeds BERTlarge with the added benefit of providing faithful explanations. A user study also shows that prototype-based explanations help non-experts to better recognize propaganda in online news. | null | null | 10.18653/v1/2022.acl-long.213 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,879 |
inproceedings | zhou-etal-2022-show | Show Me More Details: Discovering Hierarchies of Procedures from Semi-structured Web Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.214/ | Zhou, Shuyan and Zhang, Li and Yang, Yue and Lyu, Qing and Yin, Pengcheng and Callison-Burch, Chris and Neubig, Graham | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 2998--3012 | Procedures are inherently hierarchical. To {\textquotedblleft}make videos{\textquotedblright}, one may need to {\textquotedblleft}purchase a camera{\textquotedblright}, which in turn may require one to {\textquotedblleft}set a budget{\textquotedblright}. While such hierarchical knowledge is critical for reasoning about complex procedures, most existing work has treated procedures as shallow structures without modeling the parent-child relation. In this work, we attempt to construct an open-domain hierarchical knowledge-base (KB) of procedures based on wikiHow, a website containing more than 110k instructional articles, each documenting the steps to carry out a complex procedure. To this end, we develop a simple and efficient method that links steps (e.g., {\textquotedblleft}purchase a camera{\textquotedblright}) in an article to other articles with similar goals (e.g., {\textquotedblleft}how to choose a camera{\textquotedblright}), recursively constructing the KB. Our method significantly outperforms several strong baselines according to automatic evaluation, human judgment, and application to downstream tasks such as instructional video retrieval. | null | null | 10.18653/v1/2022.acl-long.214 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,880 |
inproceedings | liu-etal-2022-cross | Cross-Modal Discrete Representation Learning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.215/ | Liu, Alexander and Jin, SouYoung and Lai, Cheng-I and Rouditchenko, Andrew and Oliva, Aude and Glass, James | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3013--3035 | In contrast to recent advances focusing on high-level representation learning across modalities, in this work we present a self-supervised learning framework that is able to learn a representation that captures finer levels of granularity across different modalities such as concepts or events represented by visual objects or spoken words. Our framework relies on a discretized embedding space created via vector quantization that is shared across different modalities. Beyond the shared embedding space, we propose a Cross-Modal Code Matching objective that forces the representations from different views (modalities) to have a similar distribution over the discrete embedding space such that cross-modal objects/actions localization can be performed without direct supervision. We show that the proposed discretized multi-modal fine-grained representation (e.g., pixel/word/frame) can complement high-level summary representations (e.g., video/sentence/waveform) for improved performance on cross-modal retrieval tasks. We also observe that the discretized representation uses individual clusters to represent the same semantic concept across modalities. | null | null | 10.18653/v1/2022.acl-long.215 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,881 |
inproceedings | gao-etal-2022-improving | Improving Event Representation via Simultaneous Weakly Supervised Contrastive Learning and Clustering | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.216/ | Gao, Jun and Wang, Wei and Yu, Changlong and Zhao, Huan and Ng, Wilfred and Xu, Ruifeng | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3036--3049 | Representations of events described in text are important for various tasks. In this work, we present SWCC: a Simultaneous Weakly supervised Contrastive learning and Clustering framework for event representation learning. SWCC learns event representations by making better use of co-occurrence information of events. Specifically, we introduce a weakly supervised contrastive learning method that allows us to consider multiple positives and multiple negatives, and a prototype-based clustering method that avoids semantically related events being pulled apart. For model training, SWCC learns representations by simultaneously performing weakly supervised contrastive learning and prototype-based clustering. Experimental results show that SWCC outperforms other baselines on Hard Similarity and Transitive Sentence Similarity tasks. In addition, a thorough analysis of the prototype-based clustering method demonstrates that the learned prototype vectors are able to implicitly capture various relations between events. | null | null | 10.18653/v1/2022.acl-long.216 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,882 |
inproceedings | wolfe-caliskan-2022-contrastive | Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.217/ | Wolfe, Robert and Caliskan, Aylin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3050--3061 | We examine the effects of contrastive visual semantic pretraining by comparing the geometry and semantic properties of contextualized English language representations formed by GPT-2 and CLIP, a zero-shot multimodal image classifier which adapts the GPT-2 architecture to encode image captions. We find that contrastive visual semantic pretraining significantly mitigates the anisotropy found in contextualized word embeddings from GPT-2, such that the intra-layer self-similarity (mean pairwise cosine similarity) of CLIP word embeddings is under .25 in all layers, compared to greater than .95 in the top layer of GPT-2. CLIP word embeddings outperform GPT-2 on word-level semantic intrinsic evaluation tasks, and achieve a new corpus-based state of the art for the RG65 evaluation, at .88. CLIP also forms fine-grained semantic representations of sentences, and obtains Spearman`s $\rho = .73$ on the SemEval-2017 Semantic Textual Similarity Benchmark with no fine-tuning, compared to no greater than $\rho = .45$ in any layer of GPT-2. Finally, intra-layer self-similarity of CLIP sentence embeddings decreases as the layer index increases, finishing at .25 in the top layer, while the self-similarity of GPT-2 sentence embeddings formed using the EOS token increases layer-over-layer and never falls below .97. Our results indicate that high anisotropy is not an inevitable consequence of contextualization, and that visual semantic pretraining is beneficial not only for ordering visual representations, but also for encoding useful semantic representations of language, both on the word level and the sentence level. | null | null | 10.18653/v1/2022.acl-long.217 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,883 |
inproceedings | yin-etal-2022-contintin | {C}on{T}in{T}in: Continual Learning from Task Instructions | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.218/ | Yin, Wenpeng and Li, Jia and Xiong, Caiming | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3062--3072 | The mainstream machine learning paradigms for NLP often work with two underlying presumptions. First, the target task is predefined and static; a system merely needs to learn to solve it exclusively. Second, the supervision of a task mainly comes from a set of labeled examples. A question arises: how to build a system that can keep learning new tasks from their instructions?This work defines a new learning paradigm ConTinTin (Continual Learning from Task Instructions), in which a system should learn a sequence of new tasks one by one, each task is explained by a piece of textual instruction. The system is required to (i) generate the expected outputs of a new task by learning from its instruction, (ii) transfer the knowledge acquired from upstream tasks to help solve downstream tasks (i.e., forward-transfer), and (iii) retain or even improve the performance on earlier tasks after learning new tasks (i.e., backward-transfer). This new problem is studied on a stream of more than 60 tasks, each equipped with an instruction. Technically, our method InstructionSpeak contains two strategies that make full use of task instructions to improve forward-transfer and backward-transfer: one is to learn from negative outputs, the other is to re-visit instructions of previous tasks. To our knowledge, this is the first time to study ConTinTin in NLP. In addition to the problem formulation and our promising approach, this work also contributes to providing rich analyses for the community to better understand this novel learning problem. | null | null | 10.18653/v1/2022.acl-long.218 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,884 |
inproceedings | wallace-etal-2022-automated | Automated Crossword Solving | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.219/ | Wallace, Eric and Tomlin, Nicholas and Xu, Albert and Yang, Kevin and Pathak, Eshaan and Ginsberg, Matthew and Klein, Dan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3073--3085 | We present the Berkeley Crossword Solver, a state-of-the-art approach for automatically solving crossword puzzles. Our system works by generating answer candidates for each crossword clue using neural question answering models and then combines loopy belief propagation with local search to find full puzzle solutions. Compared to existing approaches, our system improves exact puzzle accuracy from 57{\%} to 82{\%} on crosswords from The New York Times and obtains 99.9{\%} letter accuracy on themeless puzzles. Our system also won first place at the top human crossword tournament, which marks the first time that a computer program has surpassed human performance at this event. To facilitate research on question answering and crossword solving, we analyze our system`s remaining errors and release a dataset of over six million question-answer pairs. | null | null | 10.18653/v1/2022.acl-long.219 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,885 |
inproceedings | kitaev-etal-2022-learned | Learned Incremental Representations for Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.220/ | Kitaev, Nikita and Lu, Thomas and Klein, Dan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3086--3095 | We present an incremental syntactic representation that consists of assigning a single discrete label to each word in a sentence, where the label is predicted using strictly incremental processing of a prefix of the sentence, and the sequence of labels for a sentence fully determines a parse tree. Our goal is to induce a syntactic representation that commits to syntactic choices only as they are incrementally revealed by the input, in contrast with standard representations that must make output choices such as attachments speculatively and later throw out conflicting analyses. Our learned representations achieve 93.72 F1 on the Penn Treebank with as few as 5 bits per word, and at 8 bits per word they achieve 94.97 F1, which is comparable with other state of the art parsing models when using the same pre-trained embeddings. We also provide an analysis of the representations learned by our system, investigating properties such as the interpretable syntactic features captured by the system and mechanisms for deferred resolution of syntactic ambiguities. | null | null | 10.18653/v1/2022.acl-long.220 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,886 |
inproceedings | shen-etal-2022-knowledge | Knowledge Enhanced Reflection Generation for Counseling Dialogues | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.221/ | Shen, Siqi and Perez-Rosas, Veronica and Welch, Charles and Poria, Soujanya and Mihalcea, Rada | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3096--3107 | In this paper, we study the effect of commonsense and domain knowledge while generating responses in counseling conversations using retrieval and generative methods for knowledge integration. We propose a pipeline that collects domain knowledge through web mining, and show that retrieval from both domain-specific and commonsense knowledge bases improves the quality of generated responses. We also present a model that incorporates knowledge generated by COMET using soft positional encoding and masked self-attention. We show that both retrieved and COMET-generated knowledge improve the system`s performance as measured by automatic metrics and also by human evaluation. Lastly, we present a comparative study on the types of knowledge encoded by our system showing that \textit{causal} and \textit{intentional} relationships benefit the generation task more than other types of commonsense relations. | null | null | 10.18653/v1/2022.acl-long.221 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,887 |
inproceedings | gabriel-etal-2022-misinfo | Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.222/ | Gabriel, Saadia and Hallinan, Skyler and Sap, Maarten and Nguyen, Pemi and Roesner, Franziska and Choi, Eunsol and Choi, Yejin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3108--3127 | Even to a simple and short news headline, readers react in a multitude of ways: cognitively (e.g. inferring the writer`s intent), emotionally (e.g. feeling distrust), and behaviorally (e.g. sharing the news with their friends). Such reactions are instantaneous and yet complex, as they rely on factors that go beyond interpreting factual content of news. We propose Misinfo Reaction Frames (MRF), a pragmatic formalism for modeling how readers might react to a news headline. In contrast to categorical schema, our free-text dimensions provide a more nuanced way of understanding intent beyond being benign or malicious. We also introduce a Misinfo Reaction Frames corpus, a crowdsourced dataset of reactions to over 25k news headlines focusing on global crises: the Covid-19 pandemic, climate change, and cancer. Empirical results confirm that it is indeed possible for neural models to predict the prominent patterns of readers' reactions to previously unseen news headlines. Additionally, our user study shows that displaying machine-generated MRF implications alongside news headlines to readers can increase their trust in real news while decreasing their trust in misinformation. Our work demonstrates the feasibility and importance of pragmatic inferences on news headlines to help enhance AI-guided misinformation detection and mitigation. | null | null | 10.18653/v1/2022.acl-long.222 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,888 |
inproceedings | lin-etal-2022-continual | On Continual Model Refinement in Out-of-Distribution Data Streams | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.223/ | Lin, Bill Yuchen and Wang, Sida and Lin, Xi and Jia, Robin and Xiao, Lin and Ren, Xiang and Yih, Scott | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3128--3139 | Real-world natural language processing (NLP) models need to be continually updated to fix the prediction errors in out-of-distribution (OOD) data streams while overcoming catastrophic forgetting. However, existing continual learning (CL) problem setups cannot cover such a realistic and complex scenario. In response to this, we propose a new CL problem formulation dubbed continual model refinement (CMR). Compared to prior CL settings, CMR is more practical and introduces unique challenges (boundary-agnostic and non-stationary distribution shift, diverse mixtures of multiple OOD data clusters, error-centric streams, etc.). We extend several existing CL approaches to the CMR setting and evaluate them extensively. For benchmarking and analysis, we propose a general sampling algorithm to obtain dynamic OOD data streams with controllable non-stationarity, as well as a suite of metrics measuring various aspects of online performance. Our experiments and detailed analysis reveal the promise and challenges of the CMR problem, supporting that studying CMR in dynamic OOD streams can benefit the longevity of deployed NLP models in production. | null | null | 10.18653/v1/2022.acl-long.223 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,889 |
inproceedings | majumder-etal-2022-achieving | Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.224/ | Majumder, Bodhisattwa Prasad and Jhamtani, Harsh and Berg-Kirkpatrick, Taylor and McAuley, Julian | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3140--3153 | A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge. One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response. In this paper, we propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model. We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step. Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems. We further show that knowledge-augmentation promotes success in achieving conversational goals in both experimental settings. | null | null | 10.18653/v1/2022.acl-long.224 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,890 |
inproceedings | liu-etal-2022-generated | Generated Knowledge Prompting for Commonsense Reasoning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.225/ | Liu, Jiacheng and Liu, Alisa and Lu, Ximing and Welleck, Sean and West, Peter and Le Bras, Ronan and Choi, Yejin and Hajishirzi, Hannaneh | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3154--3169 | It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models. To investigate this question, we develop generated knowledge prompting, which consists of generating knowledge from a language model, then providing the knowledge as additional input when answering a question. Our method does not require task-specific supervision for knowledge integration, or access to a structured knowledge base, yet it improves performance of large-scale, state-of-the-art models on four commonsense reasoning tasks, achieving state-of-the-art results on numerical commonsense (NumerSense), general commonsense (CommonsenseQA 2.0), and scientific commonsense (QASC) benchmarks. Generated knowledge prompting highlights large-scale language models as flexible sources of external knowledge for improving commonsense reasoning. Our code is available at \url{github.com/liujch1998/GKP} | null | null | 10.18653/v1/2022.acl-long.225 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,891 |
inproceedings | wang-etal-2022-training | Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.226/ | Wang, Shuohang and Xu, Yichong and Fang, Yuwei and Liu, Yang and Sun, Siqi and Xu, Ruochen and Zhu, Chenguang and Zeng, Michael | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3170--3179 | Retrieval-based methods have been shown to be effective in NLP tasks via introducing external knowledge. However, the indexing and retrieving of large-scale corpora bring considerable computational cost. Surprisingly, we found that REtrieving from the traINing datA (REINA) only can lead to significant gains on multiple NLG and NLU tasks. We retrieve the labeled training instances most similar to the input text and then concatenate them with the input to feed into the model to generate the output. Experimental results show that this simple method can achieve significantly better performance on a variety of NLU and NLG tasks, including summarization, machine translation, language modeling, and question answering tasks. For instance, our proposed method achieved state-of-the-art results on XSum, BigPatent, and CommonsenseQA. Our code is released, \url{https://github.com/microsoft/REINA} . | null | null | 10.18653/v1/2022.acl-long.226 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,892 |
inproceedings | lialin-etal-2022-life | Life after {BERT}: What do Other Muppets Understand about Language? | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.227/ | Lialin, Vladislav and Zhao, Kevin and Shivagunde, Namrata and Rumshisky, Anna | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3180--3193 | Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT. Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes. Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives. Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model`s linguistic capabilities. | null | null | 10.18653/v1/2022.acl-long.227 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,893
inproceedings | ross-etal-2022-tailor | Tailor: Generating and Perturbing Text with Semantic Controls | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.228/ | Ross, Alexis and Wu, Tongshuang and Peng, Hao and Peters, Matthew and Gardner, Matt | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3194--3213 | Controlled text perturbation is useful for evaluating and improving model generalizability. However, current techniques rely on training a model for every target perturbation, which is expensive and hard to generalize. We present Tailor, a semantically-controlled text generation system. Tailor builds on a pretrained seq2seq model and produces textual outputs conditioned on control codes derived from semantic representations. We craft a set of operations to modify the control codes, which in turn steer generation towards targeted attributes. These operations can be further composed into higher-level ones, allowing for flexible perturbation strategies. We demonstrate the effectiveness of these perturbations in multiple applications. First, we use Tailor to automatically create high-quality contrast sets for four distinct natural language processing (NLP) tasks. These contrast sets contain fewer spurious artifacts and are complementary to manually annotated ones in their lexical diversity. Second, we show that Tailor perturbations can improve model generalization through data augmentation. Perturbing just {\ensuremath{\sim}}2{\%} of training data leads to a 5.8-point gain on an NLI challenge set measuring reliance on syntactic heuristics. | null | null | 10.18653/v1/2022.acl-long.228 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,894 |
inproceedings | lin-etal-2022-truthfulqa | {T}ruthful{QA}: Measuring How Models Mimic Human Falsehoods | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.229/ | Lin, Stephanie and Hilton, Jacob and Evans, Owain | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3214--3252 | We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58{\%} of questions, while human performance was 94{\%}. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web. | null | null | 10.18653/v1/2022.acl-long.229 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,895 |
inproceedings | ribeiro-lundberg-2022-adaptive | Adaptive Testing and Debugging of {NLP} Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.230/ | Ribeiro, Marco Tulio and Lundberg, Scott | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3253--3267 | Current approaches to testing and debugging NLP models rely on highly variable human creativity and extensive labor, or only work for a very restrictive class of bugs. We present AdaTest, a process which uses large scale language models (LMs) in partnership with human feedback to automatically write unit tests highlighting bugs in a target model. Such bugs are then addressed through an iterative text-fix-retest loop, inspired by traditional software development. In experiments with expert and non-expert users and commercial / research models for 8 different tasks, AdaTest makes users 5-10x more effective at finding bugs than current approaches, and helps users effectively fix bugs without adding new bugs. | null | null | 10.18653/v1/2022.acl-long.230 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,896 |
inproceedings | gupta-etal-2022-right | Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.231/ | Gupta, Vivek and Zhang, Shuo and Vempala, Alakananda and He, Yujie and Choji, Temma and Srikumar, Vivek | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3268--3283 | When pre-trained contextualized embedding-based models developed for unstructured data are adapted for structured tabular data, they perform admirably. However, recent probing studies show that these models use spurious correlations, and often predict inference labels by focusing on false evidence or ignoring it altogether. To study this issue, we introduce the task of Trustworthy Tabular Reasoning, where a model needs to extract evidence to be used for reasoning, in addition to predicting the label. As a case study, we propose a two-stage sequential prediction approach, which includes an evidence extraction and an inference stage. First, we crowdsource evidence row labels and develop several unsupervised and supervised evidence extraction strategies for InfoTabS, a tabular NLI benchmark. Our evidence extraction strategy outperforms earlier baselines. On the downstream tabular inference task, using only the automatically extracted evidence as the premise, our approach outperforms prior benchmarks. | null | null | 10.18653/v1/2022.acl-long.231 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,897 |
inproceedings | lane-etal-2022-interactive | Interactive Word Completion for {P}lains {C}ree | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.232/ | Lane, William and Harrigan, Atticus and Arppe, Antti | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3284--3294 | The composition of richly-inflected words in morphologically complex languages can be a challenge for language learners developing literacy. Accordingly, Lane and Bird (2020) proposed a finite state approach which maps prefixes in a language to a set of possible completions up to the next morpheme boundary, for the incremental building of complex words. In this work, we develop an approach to morph-based auto-completion based on a finite state morphological analyzer of Plains Cree (n{\^e}hiyaw{\^e}win), showing the portability of the concept to a much larger, more complete morphological transducer. Additionally, we propose and compare various novel ranking strategies on the morph auto-complete output. The best weighting scheme ranks the target completion in the top 10 results in 64.9{\%} of queries, and in the top 50 in 73.9{\%} of queries. | null | null | 10.18653/v1/2022.acl-long.232 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,898 |
inproceedings | jambor-bahdanau-2022-lagr | {LAG}r: Label Aligned Graphs for Better Systematic Generalization in Semantic Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.233/ | Jambor, Dora and Bahdanau, Dzmitry | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3295--3308 | Semantic parsing is the task of producing structured meaning representations for natural language sentences. Recent research has pointed out that the commonly-used sequence-to-sequence (seq2seq) semantic parsers struggle to generalize systematically, i.e. to handle examples that require recombining known knowledge in novel settings. In this work, we show that better systematic generalization can be achieved by producing the meaning representation directly as a graph and not as a sequence. To this end we propose LAGr (Label Aligned Graphs), a general framework to produce semantic parses by independently predicting node and edge labels for a complete multi-layer input-aligned graph. The strongly-supervised LAGr algorithm requires aligned graphs as inputs, whereas weakly-supervised LAGr infers alignments for originally unaligned target graphs using approximate maximum-a-posteriori inference. Experiments demonstrate that LAGr achieves significant improvements in systematic generalization upon the baseline seq2seq parsers in both strongly- and weakly-supervised settings. | null | null | 10.18653/v1/2022.acl-long.233 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,899 |
inproceedings | hartvigsen-etal-2022-toxigen | {T}oxi{G}en: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.234/ | Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3309--3326 | Toxic language detection systems often falsely flag text that contains minority group mentions as toxic, as those groups are often the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create ToxiGen, a new large-scale and machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model. Controlling machine generation in this way allows ToxiGen to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text. We conduct a human evaluation on a challenging subset of ToxiGen and find that annotators struggle to distinguish machine-generated text from human-written language. We also find that 94.5{\%} of toxic examples are labeled as hate speech by human annotators. Using three publicly-available datasets, we show that finetuning a toxicity classifier on our data improves its performance on human-written data substantially. We also demonstrate that ToxiGen can be used to fight machine-generated toxicity as finetuning improves the classifier significantly on our evaluation subset. | null | null | 10.18653/v1/2022.acl-long.234 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,900 |
inproceedings | lee-etal-2022-direct | Direct Speech-to-Speech Translation With Discrete Units | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.235/ | Lee, Ann and Chen, Peng-Jen and Wang, Changhan and Gu, Jiatao and Popuri, Sravya and Ma, Xutai and Polyak, Adam and Adi, Yossi and He, Qing and Tang, Yun and Pino, Juan and Hsu, Wei-Ning | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3327--3339 | We present a direct speech-to-speech translation (S2ST) model that translates speech from one language to speech in another language without relying on intermediate text generation. We tackle the problem by first applying a self-supervised discrete speech encoder on the target speech and then training a sequence-to-sequence speech-to-unit translation (S2UT) model to predict the discrete representations of the target speech. When target text transcripts are available, we design a joint speech and text training framework that enables the model to generate dual modality output (speech and text) simultaneously in the same inference pass. Experiments on the Fisher Spanish-English dataset show that the proposed framework yields improvement of 6.7 BLEU compared with a baseline direct S2ST model that predicts spectrogram features. When trained without any text transcripts, our model performance is comparable to models that predict spectrograms and are trained with text supervision, showing the potential of our system for translation between unwritten languages. | null | null | 10.18653/v1/2022.acl-long.235 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,901 |
inproceedings | cao-etal-2022-hallucinated | Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.236/ | Cao, Meng and Dong, Yue and Cheung, Jackie | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3340--3354 | State-of-the-art abstractive summarization systems often generate hallucinations; i.e., content that is not directly inferable from the source text. Despite being assumed to be incorrect, we find that much hallucinated content is actually consistent with world knowledge, which we call factual hallucinations. Including these factual hallucinations in a summary can be beneficial because they provide useful background information. In this work, we propose a novel detection approach that separates factual from non-factual hallucinations of entities. Our method is based on an entity`s prior and posterior probabilities according to pre-trained and finetuned masked language models, respectively. Empirical results suggest that our method vastly outperforms two baselines in both accuracy and F1 scores and has a strong correlation with human judgments on factuality classification tasks. Furthermore, we use our method as a reward signal to train a summarization system using an off-line reinforcement learning (RL) algorithm that can significantly improve the factuality of generated summaries while maintaining the level of abstractiveness. | null | null | 10.18653/v1/2022.acl-long.236 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,902 |
inproceedings | maddela-etal-2022-entsum | {E}nt{SUM}: A Data Set for Entity-Centric Extractive Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.237/ | Maddela, Mounica and Kulkarni, Mayank and Preotiuc-Pietro, Daniel | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3355--3366 | Controllable summarization aims to provide summaries that take into account user-specified aspects and preferences to better assist them with their information need, as opposed to the standard summarization setup which builds a single generic summary of a document. We introduce a human-annotated data set EntSUM for controllable summarization with a focus on named entities as the aspects to control. We conduct an extensive quantitative analysis to motivate the task of entity-centric summarization and show that existing methods for controllable summarization fail to generate entity-centric summaries. We propose extensions to state-of-the-art summarization approaches that achieve substantially better results on our data set. Our analysis and results show the challenging nature of this task and of the proposed data set. | null | null | 10.18653/v1/2022.acl-long.237 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,903
inproceedings | meehan-etal-2022-sentence | Sentence-level Privacy for Document Embeddings | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.238/ | Meehan, Casey and Mrini, Khalil and Chaudhuri, Kamalika | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3367--3380 | User language data can contain highly sensitive personal content. As such, it is imperative to offer users a strong and interpretable privacy guarantee when learning from their data. In this work we propose SentDP, pure local differential privacy at the sentence level for a single user document. We propose a novel technique, DeepCandidate, that combines concepts from robust statistics and language modeling to produce high (768) dimensional, general $\epsilon$-SentDP document embeddings. This guarantees that any single sentence in a document can be substituted with any other sentence while keeping the embedding $\epsilon$-indistinguishable. Our experiments indicate that these private document embeddings are useful for downstream tasks like sentiment analysis and topic classification and even outperform baseline methods with weaker guarantees like word-level Metric DP. | null | null | 10.18653/v1/2022.acl-long.238 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,904 |
inproceedings | faisal-etal-2022-dataset | Dataset Geography: Mapping Language Data to Language Users | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.239/ | Faisal, Fahim and Wang, Yinkai and Anastasopoulos, Antonios | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3381--3411 | As language technologies become more ubiquitous, there are increasing efforts towards expanding the language diversity and coverage of natural language processing (NLP) systems. Arguably, the most important factor influencing the quality of modern NLP systems is data availability. In this work, we study the geographical representativeness of NLP datasets, aiming to quantify if and by how much do NLP datasets match the expected needs of the language speakers. In doing so, we use entity recognition and linking systems, also making important observations about their cross-lingual consistency and giving suggestions for more robust evaluation. Last, we explore some geographical and economic factors that may explain the observed dataset distributions. | null | null | 10.18653/v1/2022.acl-long.239 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,905 |
inproceedings | varshney-etal-2022-ildae | {ILDAE}: Instance-Level Difficulty Analysis of Evaluation Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.240/ | Varshney, Neeraj and Mishra, Swaroop and Baral, Chitta | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3412--3425 | Knowledge of difficulty level of questions helps a teacher in several ways, such as estimating students' potential quickly by asking carefully selected questions and improving quality of examination by modifying trivial and hard questions. Can we extract such benefits of instance difficulty in Natural Language Processing? To this end, we conduct Instance-Level Difficulty Analysis of Evaluation data (ILDAE) in a large-scale setup of 23 datasets and demonstrate its five novel applications: 1) conducting efficient-yet-accurate evaluations with fewer instances saving computational cost and time, 2) improving quality of existing evaluation datasets by repairing erroneous and trivial instances, 3) selecting the best model based on application requirements, 4) analyzing dataset characteristics for guiding future data creation, 5) estimating Out-of-Domain performance reliably. Comprehensive experiments for these applications lead to several interesting results, such as evaluation using just 5{\%} instances (selected via ILDAE) achieves as high as 0.93 Kendall correlation with evaluation using complete dataset and computing weighted accuracy using difficulty scores leads to 5.2{\%} higher correlation with Out-of-Domain performance. We release the difficulty scores and hope our work will encourage research in this important yet understudied field of leveraging instance difficulty in evaluations. | null | null | 10.18653/v1/2022.acl-long.240 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,906 |
inproceedings | krojer-etal-2022-image | Image Retrieval from Contextual Descriptions | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.241/ | Krojer, Benno and Adlakha, Vaibhav and Vineet, Vibhav and Goyal, Yash and Ponti, Edoardo and Reddy, Siva | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3426--3440 | The ability to integrate context, including perceptual and temporal cues, plays a pivotal role in grounding the meaning of a linguistic utterance. In order to measure to what extent current vision-and-language models master this ability, we devise a new multimodal challenge, Image Retrieval from Contextual Descriptions (ImageCoDe). In particular, models are tasked with retrieving the correct image from a set of 10 minimally contrastive candidates based on a contextual description. As such, each description contains only the details that help distinguish between images. Because of this, descriptions tend to be complex in terms of syntax and discourse and require drawing pragmatic inferences. Images are sourced from both static pictures and video frames. We benchmark several state-of-the-art models, including both cross-encoders such as ViLBERT and bi-encoders such as CLIP, on ImageCoDe. Our results reveal that these models dramatically lag behind human performance: the best variant achieves an accuracy of 20.9 on video frames and 59.4 on static pictures, compared with 90.8 in humans. Furthermore, we experiment with new model variants that are better equipped to incorporate visual and temporal context into their representations, which achieve modest gains. Our hope is that ImageCoDe will foster progress in grounded language understanding by encouraging models to focus on fine-grained visual differences. | null | null | 10.18653/v1/2022.acl-long.241 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,907
inproceedings | guo-etal-2022-multilingual | Multilingual Molecular Representation Learning via Contrastive Pre-training | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.242/ | Guo, Zhihui and Sharma, Pramod and Martinez, Andy and Du, Liang and Abraham, Robin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3441--3453 | Molecular representation learning plays an essential role in cheminformatics. Recently, language model-based approaches have gained popularity as an alternative to traditional expert-designed features to encode molecules. However, these approaches only utilize a single molecular language for representation learning. Motivated by the fact that a given molecule can be described using different languages such as Simplified Molecular Line Entry System (SMILES), The International Union of Pure and Applied Chemistry (IUPAC), and The IUPAC International Chemical Identifier (InChI), we propose a multilingual molecular embedding generation approach called MM-Deacon (multilingual molecular domain embedding analysis via contrastive learning). MM-Deacon is pre-trained using SMILES and IUPAC as two different languages on large-scale molecules. We evaluated the robustness of our method on seven molecular property prediction tasks from MoleculeNet benchmark, zero-shot cross-lingual retrieval, and a drug-drug interaction prediction task. | null | null | 10.18653/v1/2022.acl-long.242 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,908 |
inproceedings | renduchintala-williams-2022-investigating | Investigating Failures of Automatic Translation in the Case of Unambiguous Gender | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.243/ | Renduchintala, Adithya and Williams, Adina | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3454--3469 | Transformer-based models are the modern work horses for neural machine translation (NMT), reaching state of the art across several benchmarks. Despite their impressive accuracy, we observe a systemic and rudimentary class of errors made by current state-of-the-art NMT models with regards to translating from a language that doesn`t mark gender on nouns into others that do. We find that even when the surrounding context provides unambiguous evidence of the appropriate grammatical gender marking, no tested model was able to accurately gender occupation nouns systematically. We release an evaluation scheme and dataset for measuring the ability of NMT models to translate gender morphology correctly in unambiguous contexts across syntactically diverse sentences. Our dataset translates from an English source into 20 languages from several different language families. With the availability of this dataset, our hope is that the NMT community can iterate on solutions for this class of especially egregious errors. | null | null | 10.18653/v1/2022.acl-long.243 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,909
inproceedings | mishra-etal-2022-cross | Cross-Task Generalization via Natural Language Crowdsourcing Instructions | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.244/ | Mishra, Swaroop and Khashabi, Daniel and Baral, Chitta and Hajishirzi, Hannaneh | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3470--3487 | Humans (e.g., crowdworkers) have a remarkable ability in solving different tasks, by simply reading textual instructions that define them and looking at a few examples. Despite the success of the conventional supervised learning on individual datasets, such models often struggle with generalization across tasks (e.g., a question-answering system cannot solve classification tasks). A long-standing challenge in AI is to build a model that learns a new task by understanding the human-readable instructions that define it. To study this, we introduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their human-authored instructions, and 193k task instances (input-output pairs). The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema. Using this meta-dataset, we measure cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones. We adopt generative pre-trained language models to encode task-specific instructions along with input and generate task output. Our results indicate that models benefit from instructions when evaluated in terms of generalization to unseen tasks (19{\%} better for models utilizing instructions). These models, however, are far behind an estimated performance upperbound indicating significant room for more progress in this direction. | null | null | 10.18653/v1/2022.acl-long.244 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,910 |
inproceedings | chen-etal-2022-imputing | Imputing Out-of-Vocabulary Embeddings with {LOVE} Makes {L}anguage {M}odels Robust with Little Cost | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.245/ | Chen, Lihu and Varoquaux, Gael and Suchanek, Fabian | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3488--3504 | State-of-the-art NLP systems represent inputs with word embeddings, but these are brittle when faced with Out-of-Vocabulary (OOV) words. To address this issue, we follow the principle of mimick-like models to generate vectors for unseen words, by learning the behavior of pre-trained embeddings using only the surface form of words. We present a simple contrastive learning framework, LOVE, which extends the word representation of an existing pre-trained language model (such as BERT) and makes it robust to OOV with few additional parameters. Extensive evaluations demonstrate that our lightweight model achieves similar or even better performances than prior competitors, both on original datasets and on corrupted variants. Moreover, it can be used in a plug-and-play fashion with FastText and BERT, where it significantly improves their robustness. | null | null | 10.18653/v1/2022.acl-long.245 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,911
inproceedings | mishra-etal-2022-numglue | {N}um{GLUE}: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.246/ | Mishra, Swaroop and Mitra, Arindam and Varshney, Neeraj and Sachdeva, Bhavdeep and Clark, Peter and Baral, Chitta and Kalyan, Ashwin | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3505--3523 | Given the ubiquitous nature of numbers in text, reasoning with numbers to perform simple calculations is an important skill of AI systems. While many datasets and models have been developed to this end, state-of-the-art AI systems are brittle; failing to perform the underlying mathematical reasoning when they appear in a slightly different scenario. Drawing inspiration from GLUE that was proposed in the context of natural language understanding, we propose NumGLUE, a multi-task benchmark that evaluates the performance of AI systems on eight different tasks, that at their core require simple arithmetic understanding. We show that this benchmark is far from being solved with neural models including state-of-the-art large-scale language models performing significantly worse than humans (lower by 46.4 {\%}). Further, NumGLUE promotes sharing knowledge across tasks, especially those with limited training data as evidenced by the superior performance (average gain of 3.4 {\%} on each task) when a model is jointly trained on all the tasks as opposed to task-specific modeling. Finally, we hope that NumGLUE will encourage systems that perform robust and general arithmetic reasoning within language, a first step towards being able to perform more complex mathematical reasoning. | null | null | 10.18653/v1/2022.acl-long.246 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,912 |
inproceedings | steed-etal-2022-upstream | {U}pstream {M}itigation {I}s \textit{{N}ot} {A}ll {Y}ou {N}eed: {T}esting the {B}ias {T}ransfer {H}ypothesis in {P}re-{T}rained {L}anguage {M}odels | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.247/ | Steed, Ryan and Panda, Swetasudha and Kobren, Ari and Wick, Michael | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3524--3542 | A few large, homogenous, pre-trained models undergird many machine learning systems {---} and often, these models contain harmful stereotypes learned from the internet. We investigate the \textit{bias transfer hypothesis}: the theory that social biases (such as stereotypes) internalized by large language models during pre-training transfer into harmful task-specific behavior after fine-tuning. For two classification tasks, we find that reducing intrinsic bias with controlled interventions \textit{before} fine-tuning does little to mitigate the classifier`s discriminatory behavior \textit{after} fine-tuning. Regression analysis suggests that downstream disparities are better explained by biases in the fine-tuning dataset. Still, pre-training plays a role: simple alterations to co-occurrence rates in the fine-tuning dataset are ineffective when the model has been pre-trained. Our results encourage practitioners to focus more on dataset quality and context-specific harms. | null | null | 10.18653/v1/2022.acl-long.247 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,913
inproceedings | zhang-etal-2022-improving-multi | Improving Multi-label Malevolence Detection in Dialogues through Multi-faceted Label Correlation Enhancement | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.248/ | Zhang, Yangjun and Ren, Pengjie and Deng, Wentao and Chen, Zhumin and de Rijke, Maarten | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3543--3555 | A dialogue response is malevolent if it is grounded in negative emotions, inappropriate behavior, or an unethical value basis in terms of content and dialogue acts. The detection of malevolent dialogue responses is attracting growing interest. Current research on detecting dialogue malevolence has limitations in terms of datasets and methods. First, available dialogue datasets related to malevolence are labeled with a single category, but in practice assigning a single category to each utterance may not be appropriate as some malevolent utterances belong to multiple labels. Second, current methods for detecting dialogue malevolence neglect label correlation. Therefore, we propose the task of multi-label dialogue malevolence detection and crowdsource a multi-label dataset, multi-label dialogue malevolence detection (MDMD) for evaluation. We also propose a multi-label malevolence detection model, multi-faceted label correlation enhanced CRF (MCRF), with two label correlation mechanisms, label correlation in taxonomy (LCT) and label correlation in context (LCC). Experiments on MDMD show that our method outperforms the best performing baseline by a large margin, i.e., 16.1{\%}, 11.9{\%}, 12.0{\%}, and 6.1{\%} on precision, recall, F1, and Jaccard score, respectively. | null | null | 10.18653/v1/2022.acl-long.248 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,914 |
inproceedings | xu-etal-2022-answer | How Do We Answer Complex Questions: Discourse Structure of Long-form Answers | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.249/ | Xu, Fangyuan and Li, Junyi Jessy and Choi, Eunsol | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3556--3572 | Long-form answers, consisting of multiple sentences, can provide nuanced and comprehensive answers to a broader set of questions. To better understand this complex and understudied task, we study the functional structure of long-form answers collected from three datasets, ELI5, WebGPT and Natural Questions. Our main goal is to understand how humans organize information to craft complex answers. We develop an ontology of six sentence-level functional roles for long-form answers, and annotate 3.9k sentences in 640 answer paragraphs. Different answer collection methods manifest in different discourse structures. We further analyze model-generated answers {--} finding that annotators agree less with each other when annotating model-generated answers compared to annotating human-written answers. Our annotated data enables training a strong classifier that can be used for automatic analysis. We hope our work can inspire future research on discourse-level modeling and evaluation of long-form QA systems. | null | null | 10.18653/v1/2022.acl-long.249 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,915 |
inproceedings | du-etal-2022-understanding-iterative | Understanding Iterative Revision from Human-Written Text | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.250/ | Du, Wanyu and Raheja, Vipul and Kumar, Dhruv and Kim, Zae Myung and Lopez, Melissa and Kang, Dongyeop | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3573--3590 | Writing is, by nature, a strategic, adaptive, and, more importantly, an iterative process. A crucial part of writing is editing and revising the text. Previous works on text revision have focused on defining edit intention taxonomies within a single domain or developing computational models with a single level of edit granularity, such as sentence-level edits, which differ from human`s revision cycles. This work describes IteraTeR: the first large-scale, multi-domain, edit-intention annotated corpus of iteratively revised text. In particular, IteraTeR is collected based on a new framework to comprehensively model the iterative text revisions that generalizes to a variety of domains, edit intentions, revision depths, and granularities. When we incorporate our annotated edit intentions, both generative and action-based text revision models significantly improve automatic evaluations. Through our work, we better understand the text revision process, making vital connections between edit intentions and writing quality, enabling the creation of diverse corpora to support computational modeling of iterative text revisions. | null | null | 10.18653/v1/2022.acl-long.250 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,916 |
inproceedings | ontanon-etal-2022-making | Making Transformers Solve Compositional Tasks | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.251/ | Ontanon, Santiago and Ainslie, Joshua and Fisher, Zachary and Cvicek, Vaclav | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3591--3607 | Several studies have reported the inability of Transformer models to generalize compositionally, a key type of generalization in many NLP tasks such as semantic parsing. In this paper we explore the design space of Transformer models showing that the inductive biases given to the model by several design decisions significantly impact compositional generalization. We identified Transformer configurations that generalize compositionally significantly better than previously reported in the literature in many compositional tasks. We achieve state-of-the-art results in a semantic parsing compositional generalization benchmark (COGS), and a string edit operation composition benchmark (PCFG). | null | null | 10.18653/v1/2022.acl-long.251 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,917 |
inproceedings | dankers-etal-2022-transformer | Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.252/ | Dankers, Verna and Lucas, Christopher and Titov, Ivan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3608--3626 | Unlike literal expressions, idioms' meanings do not directly follow from their parts, posing a challenge for neural machine translation (NMT). NMT models are often unable to translate idioms accurately and over-generate compositional, literal translations. In this work, we investigate whether the non-compositionality of idioms is reflected in the mechanics of the dominant NMT model, Transformer, by analysing the hidden states and attention patterns for models with English as source language and one of seven European languages as target language. When Transformer emits a non-literal translation - i.e. identifies the expression as idiomatic - the encoder processes idioms more strongly as single lexical units compared to literal expressions. This manifests in idioms' parts being grouped through attention and in reduced interaction between idioms and their context. In the decoder`s cross-attention, figurative inputs result in reduced attention on source-side tokens. These results suggest that Transformer`s tendency to process idioms as compositional expressions contributes to literal translations of idioms. | null | null | 10.18653/v1/2022.acl-long.252 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,918 |
inproceedings | sun-etal-2022-conditionalqa | {C}onditional{QA}: A Complex Reading Comprehension Dataset with Conditional Answers | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.253/ | Sun, Haitian and Cohen, William and Salakhutdinov, Ruslan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3627--3637 | We describe a Question Answering (QA) dataset that contains complex questions with conditional answers, i.e. the answers are only applicable when certain conditions apply. We call this dataset ConditionalQA. In addition to conditional answers, the dataset also features: (1) long context documents with information that is related in logically complex ways; (2) multi-hop questions that require compositional logical reasoning; (3) a combination of extractive questions, yes/no questions, questions with multiple answers, and not-answerable questions; (4) questions asked without knowing the answers. We show that ConditionalQA is challenging for many of the existing QA models, especially in selecting answer conditions. We believe that this dataset will motivate further research in answering complex questions over long documents. | null | null | 10.18653/v1/2022.acl-long.253 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,919
inproceedings | karimi-mahabadi-etal-2022-prompt | Prompt-free and Efficient Few-shot Learning with Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.254/ | Karimi Mahabadi, Rabeeh and Zettlemoyer, Luke and Henderson, James and Mathias, Lambert and Saeidi, Marzieh and Stoyanov, Veselin and Yazdani, Majid | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3638--3652 | Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze-format that the PLM can score. In this work, we propose Perfect, a simple and efficient method for few-shot fine-tuning of PLMs without relying on any such handcrafting, which is highly effective given as few as 32 data points. Perfect makes two key design choices: First, we show that manually engineered task prompts can be replaced with task-specific adapters that enable sample-efficient fine-tuning and reduce memory and storage costs by roughly factors of 5 and 100, respectively. Second, instead of using handcrafted verbalizers, we learn new multi-token label embeddings during fine-tuning, which are not tied to the model vocabulary and which allow us to avoid complex auto-regressive decoding. These embeddings are not only learnable from limited data but also enable nearly 100x faster training and inference. Experiments on a wide range of few shot NLP tasks demonstrate that Perfect, while being simple and efficient, also outperforms existing state-of-the-art few-shot learning methods. Our code is publicly available at \url{https://github.com/rabeehk/perfect}. | null | null | 10.18653/v1/2022.acl-long.254 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,920 |
inproceedings | zhang-etal-2022-continual | Continual Sequence Generation with Adaptive Compositional Modules | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.255/ | Zhang, Yanzhe and Wang, Xuezhi and Yang, Diyi | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3653--3667 | Continual learning is essential for real-world deployment when there is a need to quickly adapt the model to new tasks without forgetting knowledge of old tasks. Existing work on continual sequence generation either always reuses existing parameters to learn new tasks, which is vulnerable to catastrophic forgetting on dissimilar tasks, or blindly adds new parameters for every new task, which could prevent knowledge sharing between similar tasks. To get the best of both worlds, in this work, we propose continual sequence generation with adaptive compositional modules to adaptively add modules in transformer architectures and compose both old and new modules for new tasks. We also incorporate pseudo experience replay to facilitate knowledge transfer in those shared modules. Experiment results on various sequences of generation tasks show that our framework can adaptively add modules or reuse modules based on task similarity, outperforming state-of-the-art baselines in terms of both performance and parameter efficiency. We make our code public at \url{https://github.com/GT-SALT/Adaptive-Compositional-Modules}. | null | null | 10.18653/v1/2022.acl-long.255 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,921 |
inproceedings | joshi-he-2022-investigation | An Investigation of the (In)effectiveness of Counterfactually Augmented Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.256/ | Joshi, Nitish and He, He | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3668--3681 | While pretrained language models achieve excellent performance on natural language understanding benchmarks, they tend to rely on spurious correlations and generalize poorly to out-of-distribution (OOD) data. Recent work has explored using counterfactually-augmented data (CAD){---}data generated by minimally perturbing examples to flip the ground-truth label{---}to identify robust features that are invariant under distribution shift. However, empirical results using CAD during training for OOD generalization have been mixed. To explain this discrepancy, through a toy theoretical example and empirical analysis on two crowdsourced CAD datasets, we show that: (a) while features perturbed in CAD are indeed robust features, it may prevent the model from learning unperturbed robust features; and (b) CAD may exacerbate existing spurious correlations in the data. Our results thus show that the lack of perturbation diversity limits CAD`s effectiveness on OOD generalization, calling for innovative crowdsourcing procedures to elicit diverse perturbation of examples. | null | null | 10.18653/v1/2022.acl-long.256 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,922 |
inproceedings | ziems-etal-2022-inducing | Inducing Positive Perspectives with Text Reframing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.257/ | Ziems, Caleb and Li, Minzhi and Zhang, Anthony and Yang, Diyi | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3682--3700 | Sentiment transfer is one popular example of a text style transfer task, where the goal is to reverse the sentiment polarity of a text. With a sentiment reversal comes also a reversal in meaning. We introduce a different but related task called positive reframing in which we neutralize a negative point of view and generate a more positive perspective for the author without contradicting the original meaning. Our insistence on meaning preservation makes positive reframing a challenging and semantically rich task. To facilitate rapid progress, we introduce a large-scale benchmark, Positive Psychology Frames, with 8,349 sentence pairs and 12,755 structured annotations to explain positive reframing in terms of six theoretically-motivated reframing strategies. Then we evaluate a set of state-of-the-art text style transfer models, and conclude by discussing key challenges and directions for future work. | null | null | 10.18653/v1/2022.acl-long.257 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,923 |
inproceedings | ziems-etal-2022-value | {VALUE}: {U}nderstanding Dialect Disparity in {NLU} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.258/ | Ziems, Caleb and Chen, Jiaao and Harris, Camille and Anderson, Jessica and Yang, Diyi | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3701--3720 | English Natural Language Understanding (NLU) systems have achieved great performances and even outperformed humans on benchmarks like GLUE and SuperGLUE. However, these benchmarks contain only textbook Standard American English (SAE). Other dialects have been largely overlooked in the NLP community. This leads to biased and inequitable NLU systems that serve only a sub-population of speakers. To understand disparities in current models and to facilitate more dialect-competent NLU systems, we introduce the VernAcular Language Understanding Evaluation (VALUE) benchmark, a challenging variant of GLUE that we created with a set of lexical and morphosyntactic transformation rules. In this initial release (V.1), we construct rules for 11 features of African American Vernacular English (AAVE), and we recruit fluent AAVE speakers to validate each feature transformation via linguistic acceptability judgments in a participatory design manner. Experiments show that these new dialectal features can lead to a drop in model performance. | null | null | 10.18653/v1/2022.acl-long.258 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,924 |
inproceedings | pavlopoulos-etal-2022-detection | From the Detection of Toxic Spans in Online Discussions to the Analysis of Toxic-to-Civil Transfer | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.259/ | Pavlopoulos, John and Laugier, Leo and Xenos, Alexandros and Sorensen, Jeffrey and Androutsopoulos, Ion | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3721--3734 | We study the task of toxic spans detection, which concerns the detection of the spans that make a text toxic, when detecting such spans is possible. We introduce a dataset for this task, ToxicSpans, which we release publicly. By experimenting with several methods, we show that sequence labeling models perform best, but methods that add generic rationale extraction mechanisms on top of classifiers trained to predict if a post is toxic or not are also surprisingly promising. Finally, we use ToxicSpans and systems trained on it, to provide further analysis of state-of-the-art toxic to non-toxic transfer systems, as well as of human performance on that latter task. Our work highlights challenges in finer toxicity detection and mitigation. | null | null | 10.18653/v1/2022.acl-long.259 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,925 |
inproceedings | lee-etal-2022-formnet | {F}orm{N}et: Structural Encoding beyond Sequential Modeling in Form Document Information Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.260/ | Lee, Chen-Yu and Li, Chun-Liang and Dozat, Timothy and Perot, Vincent and Su, Guolong and Hua, Nan and Ainslie, Joshua and Wang, Renshen and Fujii, Yasuhisa and Pfister, Tomas | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3735--3754 | Sequence modeling has demonstrated state-of-the-art performance on natural language and document understanding tasks. However, it is challenging to correctly serialize tokens in form-like documents in practice due to their variety of layout patterns. We propose FormNet, a structure-aware sequence model to mitigate the suboptimal serialization of forms. First, we design Rich Attention that leverages the spatial relationship between tokens in a form for more precise attention score calculation. Second, we construct Super-Tokens for each word by embedding representations from their neighboring tokens through graph convolutions. FormNet therefore explicitly recovers local syntactic information that may have been lost during serialization. In experiments, FormNet outperforms existing methods with a more compact model size and less pre-training data, establishing new state-of-the-art performance on CORD, FUNSD and Payment benchmarks. | null | null | 10.18653/v1/2022.acl-long.260 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,926 |
inproceedings | ziems-etal-2022-moral | The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.261/ | Ziems, Caleb and Yu, Jane and Wang, Yi-Chia and Halevy, Alon and Yang, Diyi | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3755--3773 | Conversational agents have come increasingly closer to human competence in open-domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely incoherent viewpoints that erode a user`s trust in the moral integrity of the system. Moral deviations are difficult to mitigate because moral judgments are not universal, and there may be multiple competing judgments that apply to a situation simultaneously. In this work, we introduce a new resource, not to authoritatively resolve moral ambiguities, but instead to facilitate systematic understanding of the intuitions, values and moral judgments reflected in the utterances of dialogue systems. The Moral Integrity Corpus, MIC, is such a resource, which captures the moral assumptions of 38k prompt-reply pairs, using 99k distinct Rules of Thumb (RoTs). Each RoT reflects a particular moral conviction that can explain why a chatbot`s reply may appear acceptable or problematic. We further organize RoTs with a set of 9 moral and social attributes and benchmark performance for attribute classification. Most importantly, we show that current neural language models can automatically generate new RoTs that reasonably describe previously unseen interactions, but they still struggle with certain scenarios. Our findings suggest that MIC will be a useful resource for understanding language models' implicit moral assumptions and flexibly benchmarking the integrity of conversational agents. To download the data, see \url{https://github.com/GT-SALT/mic} | null | null | 10.18653/v1/2022.acl-long.261 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,927
inproceedings | hou-etal-2022-token | Token Dropping for Efficient {BERT} Pretraining | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.262/ | Hou, Le and Pang, Richard Yuanzhe and Zhou, Tianyi and Wu, Yuexin and Song, Xinying and Song, Xiaodan and Zhou, Denny | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3774--3784 | Transformer-based models generally allocate the same amount of computation for each token in a given sequence. We develop a simple but effective {\textquotedblleft}token dropping{\textquotedblright} method to accelerate the pretraining of transformer models, such as BERT, without degrading its performance on downstream tasks. In particular, we drop unimportant tokens starting from an intermediate layer in the model to make the model focus on important tokens more efficiently if with limited computational resource. The dropped tokens are later picked up by the last layer of the model so that the model still produces full-length sequences. We leverage the already built-in masked language modeling (MLM) loss to identify unimportant tokens with practically no computational overhead. In our experiments, this simple approach reduces the pretraining cost of BERT by 25{\%} while achieving similar overall fine-tuning performance on standard downstream tasks. | null | null | 10.18653/v1/2022.acl-long.262 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,928 |
inproceedings | gupta-etal-2022-dialfact | {D}ial{F}act: A Benchmark for Fact-Checking in Dialogue | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.263/ | Gupta, Prakhar and Wu, Chien-Sheng and Liu, Wenhao and Xiong, Caiming | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3785--3801 | Fact-checking is an essential tool to mitigate the spread of misinformation and disinformation. We introduce the task of fact-checking in dialogue, which is a relatively unexplored area. We construct DialFact, a testing benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia. There are three sub-tasks in DialFact: 1) Verifiable claim detection task distinguishes whether a response carries verifiable factual information; 2) Evidence retrieval task retrieves the most relevant Wikipedia snippets as evidence; 3) Claim verification task predicts a dialogue response to be supported, refuted, or not enough information. We found that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task, and thus, we propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue. We point out unique challenges in DialFact such as handling the colloquialisms, coreferences, and retrieval ambiguities in the error analysis to shed light on future research in this direction. | null | null | 10.18653/v1/2022.acl-long.263 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,929 |
inproceedings | grangier-iter-2022-trade | The Trade-offs of Domain Adaptation for Neural Language Models | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.264/ | Grangier, David and Iter, Dan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3802--3813 | This work connects language model adaptation with concepts of machine learning theory. We consider a training setup with a large out-of-domain set and a small in-domain set. We derive how the benefit of training a model on either set depends on the size of the sets and the distance between their underlying distributions. We analyze how out-of-domain pre-training before in-domain fine-tuning achieves better generalization than either solution independently. Finally, we present how adaptation techniques based on data selection, such as importance sampling, intelligent data selection and influence functions, can be presented in a common framework which highlights their similarity and also their subtle differences. | null | null | 10.18653/v1/2022.acl-long.264 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,930 |
inproceedings | adebara-abdul-mageed-2022-towards | Towards Afrocentric {NLP} for {A}frican Languages: Where We Are and Where We Can Go | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.265/ | Adebara, Ife and Abdul-Mageed, Muhammad | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3814--3841 | Aligning with ACL 2022 special Theme on {\textquotedblleft}Language Diversity: from Low Resource to Endangered Languages{\textquotedblright}, we discuss the major linguistic and sociopolitical challenges facing development of NLP technologies for African languages. Situating African languages in a typological framework, we discuss how the particulars of these languages can be harnessed. To facilitate future research, we also highlight current efforts, communities, venues, datasets, and tools. Our main objective is to motivate and advocate for an Afrocentric approach to technology development. With this in mind, we recommend \textit{what} technologies to build and \textit{how} to build, evaluate, and deploy them based on the needs of local African communities. | null | null | 10.18653/v1/2022.acl-long.265 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,931 |
inproceedings | tarnavskyi-etal-2022-ensembling | Ensembling and Knowledge Distilling of Large Sequence Taggers for Grammatical Error Correction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.266/ | Tarnavskyi, Maksym and Chernodub, Artem and Omelianchuk, Kostiantyn | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3842--3852 | In this paper, we investigate improvements to the GEC sequence tagging architecture with a focus on ensembling of recent cutting-edge Transformer-based encoders in Large configurations. We encourage ensembling models by majority votes on span-level edits because this approach is tolerant to the model architecture and vocabulary size. Our best ensemble achieves a new SOTA result with an $F_{0.5}$ score of 76.05 on BEA-2019 (test), even without pre-training on synthetic datasets. In addition, we perform knowledge distillation with a trained ensemble to generate new synthetic training datasets, {\textquotedblleft}Troy-Blogs{\textquotedblright} and {\textquotedblleft}Troy-1BW{\textquotedblright}. Our best single sequence tagging model that is pretrained on the generated Troy- datasets in combination with the publicly available synthetic PIE dataset achieves a near-SOTA result with an $F_{0.5}$ score of 73.21 on BEA-2019 (test). The code, datasets, and trained models are publicly available. | null | null | 10.18653/v1/2022.acl-long.266 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,932 |
inproceedings | ostapenko-etal-2022-speaker | Speaker Information Can Guide Models to Better Inductive Biases: A Case Study On Predicting Code-Switching | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.267/ | Ostapenko, Alissa and Wintner, Shuly and Fricke, Melinda and Tsvetkov, Yulia | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3853--3867 | Natural language processing (NLP) models trained on people-generated data can be unreliable because, without any constraints, they can learn from spurious correlations that are not relevant to the task. We hypothesize that enriching models with speaker information in a controlled, educated way can guide them to pick up on relevant inductive biases. For the speaker-driven task of predicting code-switching points in English{--}Spanish bilingual dialogues, we show that adding sociolinguistically-grounded speaker features as prepended prompts significantly improves accuracy. We find that by adding influential phrases to the input, speaker-informed models learn useful and explainable linguistic information. To our knowledge, we are the first to incorporate speaker characteristics in a neural model for code-switching, and more generally, take a step towards developing transparent, personalized models that use speaker information in a controlled way. | null | null | 10.18653/v1/2022.acl-long.267 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,933 |
inproceedings | alvarez-mellado-lignos-2022-detecting | Detecting Unassimilated Borrowings in {S}panish: {A}n Annotated Corpus and Approaches to Modeling | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.268/ | {\'A}lvarez-Mellado, Elena and Lignos, Constantine | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3868--3888 | This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings{---}words from one language that are introduced into another without orthographic adaptation{---}and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model. | null | null | 10.18653/v1/2022.acl-long.268 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,934 |
inproceedings | bibal-etal-2022-attention | Is Attention Explanation? An Introduction to the Debate | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.269/ | Bibal, Adrien and Cardon, R{\'e}mi and Alfter, David and Wilkens, Rodrigo and Wang, Xiaoou and Fran{\c{c}}ois, Thomas and Watrin, Patrick | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3889--3900 | The performance of deep learning models in NLP and other fields of machine learning has led to a rise in their popularity, and so the need for explanations of these models becomes paramount. Attention has been seen as a solution to increase performance, while providing some explanations. However, a debate has started to cast doubt on the explanatory power of attention in neural networks. Although the debate has created a vast literature thanks to contributions from various areas, the lack of communication is becoming more and more tangible. In this paper, we provide a clear overview of the insights on the debate by critically confronting works from these different areas. This holistic vision can be of great interest for future works in all the communities concerned by this debate. We sum up the main challenges spotted in these areas, and we conclude by discussing the most promising future avenues on attention as an explanation. | null | null | 10.18653/v1/2022.acl-long.269 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,935 |
inproceedings | fu-etal-2022-thousand | There Are a Thousand Hamlets in a Thousand People`s Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.270/ | Fu, Tingchen and Zhao, Xueliang and Tao, Chongyang and Wen, Ji-Rong and Yan, Rui | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3901--3913 | Knowledge-grounded conversation (KGC) shows great potential in building an engaging and knowledgeable chatbot, and knowledge selection is a key ingredient in it. However, previous methods for knowledge selection only concentrate on the relevance between knowledge and dialogue context, ignoring the fact that age, hobby, education and life experience of an interlocutor have a major effect on his or her personal preference over external knowledge. Without taking the personalization issue into account, it is difficult for existing dialogue systems to select the proper knowledge and generate persona-consistent responses. In this work, we introduce personal memory into knowledge selection in KGC to address the personalization issue. We propose a variational method to model the underlying relationship between one`s personal memory and his or her selection of knowledge, and devise a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping is included in a closed loop so that they could teach each other. Experiment results show that our methods outperform existing KGC methods significantly on both automatic evaluation and human evaluation. | null | null | 10.18653/v1/2022.acl-long.270 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,936 |
inproceedings | kasner-dusek-2022-neural | Neural Pipeline for Zero-Shot Data-to-Text Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.271/ | Kasner, Zden{\v{e}}k and Dusek, Ondrej | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3914--3932 | In data-to-text (D2T) generation, training on in-domain data leads to overfitting to the data representation and repeating training data noise. We examine how to avoid finetuning pretrained language models (PLMs) on D2T generation datasets while still taking advantage of surface realization capabilities of PLMs. Inspired by pipeline approaches, we propose to generate text by transforming single-item descriptions with a sequence of modules trained on general-domain text-based operations: ordering, aggregation, and paragraph compression. We train PLMs for performing these operations on a synthetic corpus WikiFluent which we build from English Wikipedia. Our experiments on two major triple-to-text datasets{---}WebNLG and E2E{---}show that our approach enables D2T generation from RDF triples in zero-shot settings. | null | null | 10.18653/v1/2022.acl-long.271 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,937 |
inproceedings | liu-etal-2022-always | Not always about you: Prioritizing community needs when developing endangered language technology | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.272/ | Liu, Zoey and Richardson, Crystal and Hatcher, Richard and Prud{'}hommeaux, Emily | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3933--3944 | Languages are classified as low-resource when they lack the quantity of data necessary for training statistical and machine learning tools and models. Causes of resource scarcity vary but can include poor access to technology for developing these resources, a relatively small population of speakers, or a lack of urgency for collecting such resources in bilingual populations where the second language is high-resource. As a result, the languages described as low-resource in the literature are as different as Finnish on the one hand, with millions of speakers using it in every imaginable domain, and Seneca, with only a small handful of fluent speakers using the language primarily in a restricted domain. While issues stemming from the lack of resources necessary to train models unite this disparate group of languages, many other issues cut across the divide between widely-spoken low-resource languages and endangered languages. In this position paper, we discuss the unique technological, cultural, practical, and ethical challenges that researchers and indigenous speech community members face when working together to develop language technology to support endangered language documentation and revitalization. We report the perspectives of language teachers, Master Speakers and elders from indigenous communities, as well as the point of view of academics. We describe an ongoing fruitful collaboration and make recommendations for future partnerships between academic researchers and language community stakeholders. | null | null | 10.18653/v1/2022.acl-long.272 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,938
inproceedings | jin-etal-2022-automatic | Automatic Identification and Classification of Bragging in Social Media | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.273/ | Jin, Mali and Preotiuc-Pietro, Daniel and Do{\u{g}}ru{\"o}z, A. Seza and Aletras, Nikolaos | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3945--3959 | Bragging is a speech act employed with the goal of constructing a favorable self-image through positive statements about oneself. It is widespread in daily communication and especially popular in social media, where users aim to build a positive image of their persona directly or indirectly. In this paper, we present the first large scale study of bragging in computational linguistics, building on previous research in linguistics and pragmatics. To facilitate this, we introduce a new publicly available data set of tweets annotated for bragging and their types. We empirically evaluate different transformer-based models injected with linguistic information in (a) binary bragging classification, i.e., if tweets contain bragging statements or not; and (b) multi-class bragging type prediction including not bragging. Our results show that our models can predict bragging with macro F1 up to 72.42 and 35.95 in the binary and multi-class classification tasks respectively. Finally, we present an extensive linguistic and error analysis of bragging prediction to guide future research on this topic. | null | null | 10.18653/v1/2022.acl-long.273 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,939
inproceedings | das-etal-2022-automatic | Automatic Error Analysis for Document-level Information Extraction | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.274/ | Das, Aliva and Du, Xinya and Wang, Barry and Shi, Kejian and Gu, Jiayuan and Porter, Thomas and Cardie, Claire | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3960--3975 | Document-level information extraction (IE) tasks have recently begun to be revisited in earnest using the end-to-end neural network techniques that have been successful on their sentence-level IE counterparts. Evaluation of the approaches, however, has been limited in a number of dimensions. In particular, the precision/recall/F1 scores typically reported provide few insights on the range of errors the models make. We build on the work of Kummerfeld and Klein (2013) to propose a transformation-based framework for automating error analysis in document-level event and (N-ary) relation extraction. We employ our framework to compare two state-of-the-art document-level template-filling approaches on datasets from three domains; and then, to gauge progress in IE since its inception 30 years ago, vs. four systems from the MUC-4 (1992) evaluation. | null | null | 10.18653/v1/2022.acl-long.274 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,940 |
inproceedings | liu-emerson-2022-learning | Learning Functional Distributional Semantics with Visual Data | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.275/ | Liu, Yinhong and Emerson, Guy | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3976--3988 | Functional Distributional Semantics is a recently proposed framework for learning distributional semantics that provides linguistic interpretability. It models the meaning of a word as a binary classifier rather than a numerical vector. In this work, we propose a method to train a Functional Distributional Semantics model with grounded visual data. We train it on the Visual Genome dataset, which is closer to the kind of data encountered in human language acquisition than a large text corpus. On four external evaluation datasets, our model outperforms previous work on learning semantics from Visual Genome. | null | null | 10.18653/v1/2022.acl-long.275 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,941 |
inproceedings | ghosh-srivastava-2022-epic | e{P}i{C}: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.276/ | Ghosh, Sayan and Srivastava, Shashank | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 3989--4004 | While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored. Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding. The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surface-level reasoning to succeed. We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs. Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges. | null | null | 10.18653/v1/2022.acl-long.276 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,942 |
inproceedings | kantharaj-etal-2022-chart | Chart-to-Text: A Large-Scale Benchmark for Chart Summarization | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.277/ | Kantharaj, Shankar and Leong, Rixie Tiffany and Lin, Xiang and Masry, Ahmed and Thakkar, Megh and Hoque, Enamul and Joty, Shafiq | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4005--4023 | Charts are commonly used for exploring data and communicating insights. Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts. We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. We explain the dataset construction process and analyze the datasets. We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images. Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts. | null | null | 10.18653/v1/2022.acl-long.277 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,943 |
inproceedings | socolof-etal-2022-characterizing | Characterizing Idioms: Conventionality and Contingency | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.278/ | Socolof, Michaela and Cheung, Jackie and Wagner, Michael and O{'}Donnell, Timothy | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4024--4037 | Idioms are unlike most phrases in two important ways. First, words in an idiom have non-canonical meanings. Second, the non-canonical meanings of words in an idiom are contingent on the presence of other words in the idiom. Linguistic theories differ on whether these properties depend on one another, as well as whether special theoretical machinery is needed to accommodate idioms. We define two measures that correspond to the properties above, and we show that idioms fall at the expected intersection of the two dimensions, but that the dimensions themselves are not correlated. Our results suggest that introducing special machinery to handle idioms may not be warranted. | null | null | 10.18653/v1/2022.acl-long.278 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,944 |
inproceedings | galke-scherp-2022-bag | Bag-of-Words vs. Graph vs. Sequence in Text Classification: Questioning the Necessity of Text-Graphs and the Surprising Strength of a Wide {MLP} | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.279/ | Galke, Lukas and Scherp, Ansgar | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4038--4051 | Graph neural networks have triggered a resurgence of graph-based text classification methods, defining today`s state of the art. We show that a wide multi-layer perceptron (MLP) using a Bag-of-Words (BoW) outperforms the recent graph-based models TextGCN and HeteGCN in an inductive text classification setting and is comparable with HyperGAT. Moreover, we fine-tune a sequence-based BERT and a lightweight DistilBERT model, which both outperform all state-of-the-art models. These results question the importance of synthetic graphs used in modern text classifiers. In terms of efficiency, DistilBERT is still twice as large as our BoW-based wide MLP, while graph-based models like TextGCN require setting up an $\mathcal{O}(N^2)$ graph, where $N$ is the vocabulary plus corpus size. Finally, since Transformers need to compute $\mathcal{O}(L^2)$ attention weights with sequence length $L$, the MLP models show higher training and inference speeds on datasets with long sequences. | null | null | 10.18653/v1/2022.acl-long.279 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,945 |
inproceedings | weston-etal-2022-generative | Generative Pretraining for Paraphrase Evaluation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.280/ | Weston, Jack and Lenain, Raphael and Meepegama, Udeepa and Fristed, Emil | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4052--4073 | We introduce ParaBLEU, a paraphrase representation learning model and evaluation metric for text generation. Unlike previous approaches, ParaBLEU learns to understand paraphrasis using generative conditioning as a pretraining objective. ParaBLEU correlates more strongly with human judgements than existing metrics, obtaining new state-of-the-art results on the 2017 WMT Metrics Shared Task. We show that our model is robust to data scarcity, exceeding previous state-of-the-art performance using only 50{\%} of the available training data and surpassing BLEU, ROUGE and METEOR with only 40 labelled examples. Finally, we demonstrate that ParaBLEU can be used to conditionally generate novel paraphrases from a single demonstration, which we use to confirm our hypothesis that it learns abstract, generalized paraphrase representations. | null | null | 10.18653/v1/2022.acl-long.280 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,946 |
inproceedings | conforti-etal-2022-incorporating | Incorporating Stock Market Signals for {T}witter Stance Detection | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.281/ | Conforti, Costanza and Berndt, Jakob and Pilehvar, Mohammad Taher and Giannitsarou, Chryssi and Toxvaerd, Flavio and Collier, Nigel | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4074--4091 | Research in stance detection has so far focused on models which leverage purely textual input. In this paper, we investigate the integration of textual and financial signals for stance detection in the financial domain. Specifically, we propose a robust multi-task neural architecture that combines textual input with high-frequency intra-day time series from stock market prices. Moreover, we extend wt{--}wt, an existing stance detection dataset which collects tweets discussing Mergers and Acquisitions operations, with the relevant financial signal. Importantly, the obtained dataset aligns with Stander, an existing news stance detection dataset, thus resulting in a unique multimodal, multi-genre stance detection resource. We show experimentally and through detailed result analysis that our stance detection system benefits from financial information, and achieves state-of-the-art results on the wt{--}wt dataset: this demonstrates that the combination of multiple input signals is effective for cross-target stance detection, and opens interesting research directions for future work. | null | null | 10.18653/v1/2022.acl-long.281 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,947 |
inproceedings | cheng-etal-2022-multilingual | Multilingual Mix: Example Interpolation Improves Multilingual Neural Machine Translation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.282/ | Cheng, Yong and Bapna, Ankur and Firat, Orhan and Cao, Yuan and Wang, Pidong and Macherey, Wolfgang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4092--4102 | Multilingual neural machine translation models are trained to maximize the likelihood of a mix of examples drawn from multiple language pairs. The dominant inductive bias applied to these models is a shared vocabulary and a shared set of parameters across languages; the inputs and labels corresponding to examples drawn from different language pairs might still reside in distinct sub-spaces. In this paper, we introduce multilingual crossover encoder-decoder (mXEncDec) to fuse language pairs at an instance level. Our approach interpolates instances from different language pairs into joint {\textquoteleft}crossover examples' in order to encourage sharing input and output spaces across languages. To ensure better fusion of examples in multilingual settings, we propose several techniques to improve example interpolation across dissimilar languages under heavy data imbalance. Experiments on a large-scale WMT multilingual dataset demonstrate that our approach significantly improves quality on English-to-Many, Many-to-English and zero-shot translation tasks (from +0.5 BLEU up to +5.5 BLEU points). Results on code-switching sets demonstrate the capability of our approach to improve model generalization to out-of-distribution multilingual examples. We also conduct qualitative and quantitative representation comparisons to analyze the advantages of our approach at the representation level. | null | null | 10.18653/v1/2022.acl-long.282 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,948 |
inproceedings | alhama-2022-word | Word Segmentation as Unsupervised Constituency Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.283/ | Alhama, Raquel G. | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4103--4112 | Word identification from continuous input is typically viewed as a segmentation task. Experiments with human adults suggest that familiarity with syntactic structures in their native language also influences word identification in artificial languages; however, the relation between syntactic processing and word identification is yet unclear. This work takes one step forward by exploring a radically different approach of word identification, in which segmentation of a continuous input is viewed as a process isomorphic to unsupervised constituency parsing. Besides formalizing the approach, this study reports simulations of human experiments with DIORA (Drozdov et al., 2020), a neural unsupervised constituency parser. Results show that this model can reproduce human behavior in word identification experiments, suggesting that this is a viable approach to study word identification and its relation to syntactic processing. | null | null | 10.18653/v1/2022.acl-long.283 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,949 |
inproceedings | dinan-etal-2022-safetykit | {S}afety{K}it: First Aid for Measuring Safety in Open-domain Conversational Systems | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.284/ | Dinan, Emily and Abercrombie, Gavin and Bergman, A. and Spruit, Shannon and Hovy, Dirk and Boureau, Y-Lan and Rieser, Verena | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4113--4133 | The social impact of natural language processing and its applications has received increasing attention. In this position paper, we focus on the problem of safety for end-to-end conversational AI. We survey the problem landscape therein, introducing a taxonomy of three observed phenomena: the Instigator, Yea-Sayer, and Impostor effects. We then empirically assess the extent to which current tools can measure these effects and current systems display them. We release these tools as part of a {\textquotedblleft}first aid kit{\textquotedblright} (SafetyKit) to quickly assess apparent safety concerns. Our results show that, while current tools are able to provide an estimate of the relative safety of systems in various settings, they still have several shortcomings. We suggest several future directions and discuss ethical considerations. | null | null | 10.18653/v1/2022.acl-long.284 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,950 |
inproceedings | sherborne-lapata-2022-zero | Zero-Shot Cross-lingual Semantic Parsing | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.285/ | Sherborne, Tom and Lapata, Mirella | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4134--4153 | Recent work in cross-lingual semantic parsing has successfully applied machine translation to localize parsers to new languages. However, these advances assume access to high-quality machine translation systems and word alignment tools. We remove these assumptions and study cross-lingual semantic parsing as a zero-shot problem, without parallel data (i.e., utterance-logical form pairs) for new languages. We propose a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only English-logical form paired data and in-domain natural language corpora in each new language. Our model encourages language-agnostic encodings by jointly optimizing for logical-form generation with auxiliary objectives designed for cross-lingual latent representation alignment. Our parser performs significantly above translation-based baselines and, in some cases, competes with the supervised upper-bound. | null | null | 10.18653/v1/2022.acl-long.285 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,951 |
inproceedings | dankers-etal-2022-paradox | The Paradox of the Compositionality of Natural Language: A Neural Machine Translation Case Study | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.286/ | Dankers, Verna and Bruni, Elia and Hupkes, Dieuwke | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4154--4175 | Obtaining human-like performance in NLP is often argued to require compositional generalisation. Whether neural networks exhibit this ability is usually studied by training models on highly compositional synthetic data. However, compositionality in natural language is much more complex than the rigid, arithmetic-like version such data adheres to, and artificial compositionality tests thus do not allow us to determine how neural models deal with more realistic forms of compositionality. In this work, we re-instantiate three compositionality tests from the literature and reformulate them for neural machine translation (NMT). Our results highlight that: i) unfavourably, models trained on more data are more compositional; ii) models are sometimes less compositional than expected, but sometimes more, exemplifying that different levels of compositionality are required, and models are not always able to modulate between them correctly; iii) some of the non-compositional behaviours are mistakes, whereas others reflect the natural variation in data. Apart from an empirical study, our work is a call to action: we should rethink the evaluation of compositionality in neural networks and develop benchmarks using real data to evaluate compositionality on natural language, where composing meaning is not as straightforward as doing the math. | null | null | 10.18653/v1/2022.acl-long.286 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,952
inproceedings | zhang-etal-2022-multilingual | Multilingual Document-Level Translation Enables Zero-Shot Transfer From Sentences to Documents | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.287/ | Zhang, Biao and Bapna, Ankur and Johnson, Melvin and Dabirmoghaddam, Ali and Arivazhagan, Naveen and Firat, Orhan | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4176--4192 | Document-level neural machine translation (DocNMT) achieves coherent translations by incorporating cross-sentence context. However, for most language pairs there`s a shortage of parallel documents, although parallel sentences are readily available. In this paper, we study whether and how contextual modeling in DocNMT is transferable via multilingual modeling. We focus on the scenario of zero-shot transfer from teacher languages with document level data to student languages with no documents but sentence level data, and for the first time treat document-level translation as a transfer learning problem. Using simple concatenation-based DocNMT, we explore the effect of 3 factors on the transfer: the number of teacher languages with document level data, the balance between document and sentence level data at training, and the data condition of parallel documents (genuine vs. back-translated). Our experiments on Europarl-7 and IWSLT-10 show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics. We observe that more teacher languages and adequate data balance both contribute to better transfer quality. Surprisingly, the transfer is less sensitive to the data condition, where multilingual DocNMT delivers decent performance with either back-translated or genuine document pairs. | null | null | 10.18653/v1/2022.acl-long.287 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,953 |
inproceedings | zheng-etal-2022-cross-lingual | Cross-Lingual Phrase Retrieval | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.288/ | Zheng, Heqi and Zhang, Xiao and Chi, Zewen and Huang, Heyan and Tan, Yan and Lan, Tian and Wei, Wei and Mao, Xian-Ling | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4193--4204 | Cross-lingual retrieval aims to retrieve relevant text across languages. Current methods typically achieve cross-lingual retrieval by learning language-agnostic text representations in word or sentence level. However, how to learn phrase representations for cross-lingual phrase retrieval is still an open problem. In this paper, we propose XPR, a cross-lingual phrase retriever that extracts phrase representations from unlabeled example sentences. Moreover, we create a large-scale cross-lingual phrase retrieval dataset, which contains 65K bilingual phrase pairs and 4.2M example sentences in 8 English-centric language pairs. Experimental results show that XPR outperforms state-of-the-art baselines which utilize word-level or sentence-level representations. XPR also shows impressive zero-shot transferability that enables the model to perform retrieval in an unseen language pair during training. Our dataset, code, and trained models are publicly available at github.com/cwszz/XPR/. | null | null | 10.18653/v1/2022.acl-long.288 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,954
inproceedings | mehta-etal-2022-improving | Improving Compositional Generalization with Self-Training for Data-to-Text Generation | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.289/ | Mehta, Sanket Vaibhav and Rao, Jinfeng and Tay, Yi and Kale, Mihir and Parikh, Ankur and Strubell, Emma | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4205--4219 | Data-to-text generation focuses on generating fluent natural language responses from structured meaning representations (MRs). Such representations are compositional and it is costly to collect responses for all possible combinations of atomic meaning schemata, thereby necessitating few-shot generalization to novel MRs. In this work, we systematically study the compositional generalization of the state-of-the-art T5 models in few-shot data-to-text tasks. We show that T5 models fail to generalize to unseen MRs, and we propose a template-based input representation that considerably improves the model's generalization capability. To further improve the model's performance, we propose an approach based on self-training using fine-tuned BLEURT for pseudo-response selection. On the commonly-used SGD and Weather benchmarks, the proposed self-training approach improves tree accuracy by $46\%+$ and reduces the slot error rates by $73\%+$ over the strong T5 baselines in few-shot settings. | null | null | 10.18653/v1/2022.acl-long.289 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,955
inproceedings | li-etal-2022-mmcoqa | {MMC}o{QA}: Conversational Question Answering over Text, Tables, and Images | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.290/ | Li, Yongqi and Li, Wenjie and Nie, Liqiang | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4220--4231 | The rapid development of conversational assistants accelerates the study on conversational question answering (QA). However, the existing conversational QA systems usually answer users' questions with a single knowledge source, e.g., paragraphs or a knowledge graph, but overlook the important visual cues, let alone multiple knowledge sources of different modalities. In this paper, we hence define a novel research task, i.e., multimodal conversational question answering (MMCoQA), aiming to answer users' questions with multimodal knowledge sources via multi-turn conversations. This new task brings a series of research challenges, including but not limited to priority, consistency, and complementarity of multimodal knowledge. To facilitate the data-driven approaches in this area, we construct the first multimodal conversational QA dataset, named MMConvQA. Questions are fully annotated with not only natural language answers but also the corresponding evidence and valuable decontextualized self-contained questions. Meanwhile, we introduce an end-to-end baseline model, which divides this complex research task into question understanding, multi-modal evidence retrieval, and answer extraction. Moreover, we report a set of benchmarking results, and the results indicate that there is ample room for improvement. | null | null | 10.18653/v1/2022.acl-long.290 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,956 |
inproceedings | shi-etal-2022-effective | Effective Token Graph Modeling using a Novel Labeling Strategy for Structured Sentiment Analysis | Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.acl-long.291/ | Shi, Wenxuan and Li, Fei and Li, Jingye and Fei, Hao and Ji, Donghong | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 4232--4241 | The state-of-the-art model for structured sentiment analysis casts the task as a dependency parsing problem, which has some limitations: (1) The label proportions for span prediction and span relation prediction are imbalanced. (2) The span lengths of sentiment tuple components may be very large in this task, which will further exacerbate the imbalance problem. (3) Two nodes in a dependency graph cannot have multiple arcs, therefore some overlapped sentiment tuples cannot be recognized. In this work, we propose niche-targeting solutions for these issues. First, we introduce a novel labeling strategy, which contains two sets of token pair labels, namely an essential label set and a whole label set. The essential label set consists of the basic labels for this task, which are relatively balanced and applied in the prediction layer. The whole label set includes rich labels to help our model capture various token relations, which are applied in the hidden layer to softly influence our model. Moreover, we also propose an effective model to collaborate well with our labeling strategy, which is equipped with graph attention networks to iteratively refine token representations, and an adaptive multi-label classifier to dynamically predict multiple relations between token pairs. We perform extensive experiments on 5 benchmark datasets in four languages. Experimental results show that our model outperforms previous SOTA models by a large margin. | null | null | 10.18653/v1/2022.acl-long.291 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 29,957