Columns (type and observed range): entry_type (string, 4 classes); citation_key (string, 10–110 chars); title (string, 6–276 chars, nullable); editor (string, 723 classes); month (string, 69 classes); year (date, 1963-01-01 to 2022-01-01); address (string, 202 classes); publisher (string, 41 classes); url (string, 34–62 chars); author (string, 6–2.07k chars, nullable); booktitle (string, 861 classes); pages (string, 1–12 chars, nullable); abstract (string, 302–2.4k chars); journal (string, 5 classes); volume (string, 24 classes); doi (string, 20–39 chars, nullable); n (string, 3 classes); wer (string, 1 class); uas (always null); language (string, 3 classes); isbn (string, 34 classes); recall (always null); number (string, 8 classes); a, b, c, k (always null); f1 (string, 4 classes); r (string, 2 classes); mci (string, 1 class); p (string, 2 classes); sd (string, 1 class); female (string, 0 classes); m (string, 0 classes); food (string, 1 class); f (string, 1 class); note (string, 20 classes); __index_level_0__ (int64, 22k–106k).

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
article | mickus-etal-2022-dissect | How to Dissect a {M}uppet: The Structure of Transformer Embedding Spaces | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.57/ | Mickus, Timothee and Paperno, Denis and Constant, Mathieu | null | 981--996 | Pretrained embeddings based on the Transformer architecture have taken the NLP community by storm. We show that they can mathematically be reframed as a sum of vector factors and showcase how to use this reframing to study the impact of each component. We provide evidence that multi-head attentions and feed-forwards are not equally useful in all downstream applications, as well as a quantitative overview of the effects of finetuning on the overall embedding space. This approach allows us to draw connections to a wide range of previous studies, from vector space anisotropy to attention weights. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00501 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,530 |
article | wiher-etal-2022-decoding | On Decoding Strategies for Neural Text Generators | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.58/ | Wiher, Gian and Meister, Clara and Cotterell, Ryan | null | 997--1012 | When generating text from probabilistic models, the chosen decoding strategy has a profound effect on the resulting text. Yet the properties elicited by various decoding strategies do not always transfer across natural language generation tasks. For example, while mode-seeking methods like beam search perform remarkably well for machine translation, they have been observed to lead to incoherent and repetitive text in story generation. Despite such observations, the effectiveness of decoding strategies is often assessed on only a single task. This work{---}in contrast{---}provides a comprehensive analysis of the interaction between language generation tasks and decoding strategies. Specifically, we measure changes in attributes of generated text as a function of both decoding strategy and task using human and automatic evaluation. Our results reveal both previously observed and novel findings. For example, the nature of the diversity{--}quality trade-off in language generation is very task-specific; the length bias often attributed to beam search is not constant across tasks. \url{https://github.com/gianwiher/decoding-NLG} | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00502 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,531 |
article | krishna-etal-2022-proofver | {P}roo{FV}er: Natural Logic Theorem Proving for Fact Verification | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.59/ | Krishna, Amrith and Riedel, Sebastian and Vlachos, Andreas | null | 1013--1030 | Fact verification systems typically rely on neural network classifiers for veracity prediction, which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to generate natural logic-based inferences as proofs. These proofs consist of lexical mutations between spans in the claim and the evidence retrieved, each marked with a natural logic operator. Claim veracity is determined solely based on the sequence of these operators. Hence, these proofs are faithful explanations, and this makes ProoFVer faithful by construction. Currently, ProoFVer has the highest label accuracy and the second best score in the FEVER leaderboard. Furthermore, it improves by 13.21{\%} points over the next best model on a dataset with counterfactual instances, demonstrating its robustness. As explanations, the proofs show better overlap with human rationales than attention-based highlights and the proofs help humans predict model decisions correctly more often than using the evidence directly.1 | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00503 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,532 |
article | sinclair-etal-2022-structural | Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.60/ | Sinclair, Arabella and Jumelet, Jaap and Zuidema, Willem and Fern{\'a}ndez, Raquel | null | 1031--1050 | We investigate the extent to which modern neural language models are susceptible to structural priming, the phenomenon whereby the structure of a sentence makes the same structure more probable in a follow-up sentence. We explore how priming can be used to study the potential of these models to learn abstract structural information, which is a prerequisite for good performance on tasks that require natural language understanding skills. We introduce a novel metric and release Prime-LM, a large corpus where we control for various linguistic factors that interact with priming strength. We find that Transformer models indeed show evidence of structural priming, but also that the generalizations they learned are to some extent modulated by semantic information. Our experiments also show that the representations acquired by the models may not only encode abstract sequential structure but involve certain level of hierarchical syntactic information. More generally, our study shows that the priming paradigm is a useful, additional tool for gaining insights into the capacities of language models and opens the door to future priming-based investigations that probe the model`s internal states.1 | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00504 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,533 |
article | algayres-etal-2022-dp | {DP}-Parse: Finding Word Boundaries from Raw Speech with an Instance Lexicon | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.61/ | Algayres, Robin and Ricoul, Tristan and Karadayi, Julien and Lauren{\c{c}}on, Hugo and Zaiem, Salah and Mohamed, Abdelrahman and Sagot, Beno{\^i}t and Dupoux, Emmanuel | null | 1051--1065 | Finding word boundaries in continuous speech is challenging as there is little or no equivalent of a {\textquoteleft}space' delimiter between words. Popular Bayesian non-parametric models for text segmentation (Goldwater et al., 2006, 2009) use a Dirichlet process to jointly segment sentences and build a lexicon of word types. We introduce DP-Parse, which uses similar principles but only relies on an instance lexicon of word tokens, avoiding the clustering errors that arise with a lexicon of word types. On the Zero Resource Speech Benchmark 2017, our model sets a new speech segmentation state-of-the-art in 5 languages. The algorithm monotonically improves with better input representations, achieving yet higher scores when fed with weakly supervised inputs. Despite lacking a type lexicon, DP-Parse can be pipelined to a language model and learn semantic and syntactic representations as assessed by a new spoken word embedding benchmark. 1 | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00505 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,534 |
article | dziri-etal-2022-evaluating | Evaluating Attribution in Dialogue Systems: The {BEGIN} Benchmark | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.62/ | Dziri, Nouha and Rashkin, Hannah and Linzen, Tal and Reitter, David | null | 1066--1083 | Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (Begin), comprising 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models' responses can be attributed to the given background information. We then use Begin to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make Begin publicly available at \url{https://github.com/google/BEGIN-dataset}. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00506 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,535 |
article | sicilia-etal-2022-modeling | Modeling Non-Cooperative Dialogue: Theoretical and Empirical Insights | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.63/ | Sicilia, Anthony and Maidment, Tristan and Healy, Pat and Alikhani, Malihe | null | 1084--1102 | Investigating cooperativity of interlocutors is central in studying pragmatics of dialogue. Models of conversation that only assume cooperative agents fail to explain the dynamics of strategic conversations. Thus, we investigate the ability of agents to identify non-cooperative interlocutors while completing a concurrent visual-dialogue task. Within this novel setting, we study the optimality of communication strategies for achieving this multi-task objective. We use the tools of learning theory to develop a theoretical model for identifying non-cooperative interlocutors and apply this theory to analyze different communication strategies. We also introduce a corpus of non-cooperative conversations about images in the GuessWhat?! dataset proposed by De Vries et al. (2017). We use reinforcement learning to implement multiple communication strategies in this context and find that empirical results validate our theory. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00507 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,536 |
article | thayaparan-etal-2022-diff | Diff-Explainer: Differentiable Convex Optimization for Explainable Multi-hop Inference | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.64/ | Thayaparan, Mokanarangan and Valentino, Marco and Ferreira, Deborah and Rozanova, Julia and Freitas, Andr{\'e} | null | 1103--1119 | This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization. Specifically, Diff- Explainer allows for the fine-tuning of neural representations within a constrained optimization framework to answer and explain multi-hop questions in natural language. To demonstrate the efficacy of the hybrid framework, we combine existing ILP-based solvers for multi-hop Question Answering (QA) with Transformer-based representations. An extensive empirical evaluation on scientific and commonsense QA tasks demonstrates that the integration of explicit constraints in a end-to-end differentiable framework can significantly improve the performance of non- differentiable ILP solvers (8.91{\%}{--}13.3{\%}). Moreover, additional analysis reveals that Diff-Explainer is able to achieve strong performance when compared to standalone Transformers and previous multi-hop approaches while still providing structured explanations in support of its predictions. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00508 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,537 |
article | zeng-bhat-2022-getting | Getting {BART} to Ride the Idiomatic Train: Learning to Represent Idiomatic Expressions | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.65/ | Zeng, Ziheng and Bhat, Suma | null | 1120--1137 | Idiomatic expressions (IEs), characterized by their non-compositionality, are an important part of natural language. They have been a classical challenge to NLP, including pre-trained language models that drive today`s state-of-the-art. Prior work has identified deficiencies in their contextualized representation stemming from the underlying compositional paradigm of representation. In this work, we take a first-principles approach to build idiomaticity into BART using an adapter as a lightweight non-compositional language expert trained on idiomatic sentences. The improved capability over baselines (e.g., BART) is seen via intrinsic and extrinsic methods, where idiom embeddings score 0.19 points higher in homogeneity score for embedding clustering, and up to 25{\%} higher sequence accuracy on the idiom processing tasks of IE sense disambiguation and span detection. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00510 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,538 |
article | feder-etal-2022-causal | Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.66/ | Feder, Amir and Keith, Katherine A. and Manzoor, Emaad and Pryzant, Reid and Sridhar, Dhanya and Wood-Doughty, Zach and Eisenstein, Jacob and Grimmer, Justin and Reichart, Roi and Roberts, Margaret E. and Stewart, Brandon M. and Veitch, Victor and Yang, Diyi | null | 1138--1158 | A fundamental goal of scientific research is to learn about causal relationships. However, despite its critical role in the life and social sciences, causality has not had the same importance in Natural Language Processing (NLP), which has traditionally placed more emphasis on predictive tasks. This distinction is beginning to fade, with an emerging area of interdisciplinary research at the convergence of causal inference and language processing. Still, research on causality in NLP remains scattered across domains without unified definitions, benchmark datasets and clear articulations of the challenges and opportunities in the application of causal inference to the textual domain, with its unique properties. In this survey, we consolidate research across academic areas and situate it in the broader NLP landscape. We introduce the statistical challenge of estimating causal effects with text, encompassing settings where text is used as an outcome, treatment, or to address confounding. In addition, we explore potential uses of causal inference to improve the robustness, fairness, and interpretability of NLP models. We thus provide a unified overview of causal inference for the NLP community.1 | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00511 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,539 |
article | chowdhury-chaturvedi-2022-learning | Learning Fair Representations via Rate-Distortion Maximization | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.67/ | Chowdhury, Somnath Basu Roy and Chaturvedi, Snigdha | null | 1159--1174 | Text representations learned by machine learning models often encode undesirable demographic information of the user. Predictive models based on these representations can rely on such information, resulting in biased decisions. We present a novel debiasing technique, Fairness-aware Rate Maximization (FaRM), that removes protected information by making representations of instances belonging to the same protected attribute class uncorrelated, using the rate-distortion function. FaRM is able to debias representations with or without a target task at hand. FaRM can also be adapted to remove information about multiple protected attributes simultaneously. Empirical evaluations show that FaRM achieves state-of-the-art performance on several datasets, and learned representations leak significantly less protected attribute information against an attack by a non-linear probing network. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00512 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,540 |
article | heck-etal-2022-robust | Robust Dialogue State Tracking with Weak Supervision and Sparse Data | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.68/ | Heck, Michael and Lubis, Nurul and van Niekerk, Carel and Feng, Shutong and Geishauser, Christian and Lin, Hsien-Chin and Ga{\v{s}}i{\'c}, Milica | null | 1175--1192 | Generalizing dialogue state tracking (DST) to new data is especially challenging due to the strong reliance on abundant and fine-grained supervision during training. Sample sparsity, distributional shift, and the occurrence of new concepts and topics frequently lead to severe performance degradation during inference. In this paper we propose a training strategy to build extractive DST models without the need for fine-grained manual span labels. Two novel input-level dropout methods mitigate the negative impact of sample sparsity. We propose a new model architecture with a unified encoder that supports value as well as slot independence by leveraging the attention mechanism. We combine the strengths of triple copy strategy DST and value matching to benefit from complementary predictions without violating the principle of ontology independence. Our experiments demonstrate that an extractive DST model can be trained without manual span labels. Our architecture and training strategies improve robustness towards sample sparsity, new concepts, and topics, leading to state-of-the-art performance on a range of benchmarks. We further highlight our model`s ability to effectively learn from non-dialogue data. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00513 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,541 |
article | lovering-pavlick-2022-unit | Unit Testing for Concepts in Neural Networks | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.69/ | Lovering, Charles and Pavlick, Ellie | null | 1193--1208 | Many complex problems are naturally understood in terms of symbolic concepts. For example, our concept of {\textquotedblleft}cat{\textquotedblright} is related to our concepts of {\textquotedblleft}ears{\textquotedblright} and {\textquotedblleft}whiskers{\textquotedblright} in a non-arbitrary way. Fodor (1998) proposes one theory of concepts, which emphasizes symbolic representations related via constituency structures. Whether neural networks are consistent with such a theory is open for debate. We propose unit tests for evaluating whether a system`s behavior is consistent with several key aspects of Fodor`s criteria. Using a simple visual concept learning task, we evaluate several modern neural architectures against this specification. We find that models succeed on tests of groundedness, modularity, and reusability of concepts, but that important questions about causality remain open. Resolving these will require new methods for analyzing models' internal states. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00514 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,542 |
article | rotman-reichart-2022-multi | Multi-task Active Learning for Pre-trained Transformer-based Models | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.70/ | Rotman, Guy and Reichart, Roi | null | 1209--1228 | Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes, which may be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task compared to single-task selection. Our results suggest that MT-AL can be effectively used in order to minimize annotation efforts for multi-task NLP models.1 | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00515 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,543 |
article | bilal-etal-2022-template | Template-based Abstractive Microblog Opinion Summarization | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.71/ | Bilal, Iman Munire and Wang, Bo and Tsakalidis, Adam and Nguyen, Dong and Procter, Rob and Liakata, Maria | null | 1229--1248 | We introduce the task of microblog opinion summarization (MOS) and share a dataset of 3100 gold-standard opinion summaries to facilitate research in this domain. The dataset contains summaries of tweets spanning a 2-year period and covers more topics than any other public Twitter summarization dataset. Summaries are abstractive in nature and have been created by journalists skilled in summarizing news articles following a template separating factual information (main story) from author opinions. Our method differs from previous work on generating gold-standard summaries from social media, which usually involves selecting representative posts and thus favors extractive summarization models. To showcase the dataset`s utility and challenges, we benchmark a range of abstractive and extractive state-of-the-art summarization models and achieve good performance, with the former outperforming the latter. We also show that fine-tuning is necessary to improve performance and investigate the benefits of using different sample sizes. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00516 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,544 |
article | hou-etal-2022-meta | Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.72/ | Hou, Zejiang and Salazar, Julian and Polovets, George | null | 1249--1265 | Large pretrained language models (PLMs) are often domain- or task-adapted via finetuning or prompting. Finetuning requires modifying all of the parameters and having enough data to avoid overfitting while prompting requires no training and few examples but limits performance. Instead, we prepare PLMs for data- and parameter-efficient adaptation by learning to learn the difference between general and adapted PLMs. This difference is expressed in terms of model weights and sublayer structure through our proposed dynamic low-rank reparameterization and learned architecture controller. Experiments on few-shot dialogue completion, low-resource abstractive summarization, and multi-domain language modeling show improvements in adaptation time and performance over direct finetuning or preparation via domain-adaptive pretraining. Ablations show our task-adaptive reparameterization (TARP) and model search (TAMS) components individually improve on other parameter-efficient transfer like adapters and structure-learning methods like learned sparsification. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00517 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,545 |
article | yanaka-mineshima-2022-compositional | Compositional Evaluation on {J}apanese Textual Entailment and Similarity | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.73/ | Yanaka, Hitomi and Mineshima, Koji | null | 1266--1284 | Natural Language Inference (NLI) and Semantic Textual Similarity (STS) are widely used benchmark tasks for compositional evaluation of pre-trained language models. Despite growing interest in linguistic universals, most NLI/STS studies have focused almost exclusively on English. In particular, there are no available multilingual NLI/STS datasets in Japanese, which is typologically different from English and can shed light on the currently controversial behavior of language models in matters such as sensitivity to word order and case particles. Against this background, we introduce JSICK, a Japanese NLI/STS dataset that was manually translated from the English dataset SICK. We also present a stress-test dataset for compositional inference, created by transforming syntactic structures of sentences in JSICK to investigate whether language models are sensitive to word order and case particles. We conduct baseline experiments on different pre-trained language models and compare the performance of multilingual models when applied to Japanese and other languages. The results of the stress-test experiments suggest that the current pre-trained language models are insensitive to word order and case marking. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00518 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,546 |
article | sajjad-etal-2022-neuron | Neuron-level Interpretation of Deep {NLP} Models: A Survey | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.74/ | Sajjad, Hassan and Durrani, Nadir and Dalvi, Fahim | null | 1285--1303 | The proliferation of Deep Neural Networks in various domains has seen an increased need for interpretability of these models. Preliminary work done along this line, and papers that surveyed such, are focused on high-level representation analysis. However, a recent branch of work has concentrated on interpretability at a more granular level of analyzing neurons within these models. In this paper, we survey the work done on neuron analysis including: i) methods to discover and understand neurons in a network; ii) evaluation methods; iii) major findings including cross architectural comparisons that neuron analysis has unraveled; iv) applications of neuron probing such as: controlling the model, domain adaptation, and so forth; and v) a discussion on open issues and future research directions. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00519 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,547 |
article | wang-etal-2022-survey | A Survey on Cross-Lingual Summarization | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.75/ | Wang, Jiaan and Meng, Fandong and Zheng, Duo and Liang, Yunlong and Li, Zhixu and Qu, Jianfeng and Zhou, Jie | null | 1304--1323 | Cross-lingual summarization is the task of generating a summary in one language (e.g., English) for the given document(s) in a different language (e.g., Chinese). Under the globalization background, this task has attracted increasing attention of the computational linguistics community. Nevertheless, there still remains a lack of comprehensive review for this task. Therefore, we present the first systematic critical review on the datasets, approaches, and challenges in this field. Specifically, we carefully organize existing datasets and approaches according to different construction methods and solution paradigms, respectively. For each type of dataset or approach, we thoroughly introduce and summarize previous efforts and further compare them with each other to provide deeper analyses. In the end, we also discuss promising directions and offer our thoughts to facilitate future research. This survey is for both beginners and experts in cross-lingual summarization, and we hope it will serve as a starting point as well as a source of new ideas for researchers and engineers interested in this area. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00520 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,548 |
article | fang-xie-2022-end | An End-to-End Contrastive Self-Supervised Learning Framework for Language Understanding | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.76/ | Fang, Hongchao and Xie, Pengtao | null | 1324--1340 | Self-supervised learning (SSL) methods such as Word2vec, BERT, and GPT have shown great effectiveness in language understanding. Contrastive learning, as a recent SSL approach, has attracted increasing attention in NLP. Contrastive learning learns data representations by predicting whether two augmented data instances are generated from the same original data example. Previous contrastive learning methods perform data augmentation and contrastive learning separately. As a result, the augmented data may not be optimal for contrastive learning. To address this problem, we propose a four-level optimization framework that performs data augmentation and contrastive learning end-to-end, to enable the augmented data to be tailored to the contrastive learning task. This framework consists of four learning stages, including training machine translation models for sentence augmentation, pretraining a text encoder using contrastive learning, finetuning a text classification model, and updating weights of translation data by minimizing the validation loss of the classification model, which are performed in a unified way. Experiments on datasets in the GLUE benchmark (Wang et al., 2018a) and on datasets used in Gururangan et al. (2020) demonstrate the effectiveness of our method. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00521 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,549 |
article | lachmy-etal-2022-draw | Draw Me a Flower: Processing and Grounding Abstraction in Natural Language | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.77/ | Lachmy, Royi and Pyatkin, Valentina and Manevich, Avshalom and Tsarfaty, Reut | null | 1341--1356 | Abstraction is a core tenet of human cognition and communication. When composing natural language instructions, humans naturally evoke abstraction to convey complex procedures in an efficient and concise way. Yet, interpreting and grounding abstraction expressed in NL has not yet been systematically studied in NLP, with no accepted benchmarks specifically eliciting abstraction in NL. In this work, we set the foundation for a systematic study of processing and grounding abstraction in NLP. First, we deliver a novel abstraction elicitation method and present Hexagons, a 2D instruction-following game. Using Hexagons we collected over 4k naturally occurring visually-grounded instructions rich with diverse types of abstractions. From these data, we derive an instruction-to-execution task and assess different types of neural models. Our results show that contemporary models and modeling practices are substantially inferior to human performance, and that model performance is inversely correlated with the level of abstraction, showing less satisfying performance on higher levels of abstraction. These findings are consistent across models and setups, confirming that abstraction is a challenging phenomenon deserving further attention and study in NLP/AI research. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00522 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,550 |
article | jiang-marneffe-2022-investigating | Investigating Reasons for Disagreement in Natural Language Inference | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.78/ | Jiang, Nan-Jiang and de Marneffe, Marie-Catherine | null | 1357--1374 | We investigate how disagreement in natural language inference (NLI) annotation arises. We developed a taxonomy of disagreement sources with 10 categories spanning 3 high- level classes. We found that some disagreements are due to uncertainty in the sentence meaning, others to annotator biases and task artifacts, leading to different interpretations of the label distribution. We explore two modeling approaches for detecting items with potential disagreement: a 4-way classification with a {\textquotedblleft}Complicated{\textquotedblright} label in addition to the three standard NLI labels, and a multilabel classification approach. We found that the multilabel classification is more expressive and gives better recall of the possible interpretations in the data. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00523 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,551 |
article | bosc-vincent-2022-emergence | The Emergence of Argument Structure in Artificial Languages | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.79/ | Bosc, Tom and Vincent, Pascal | null | 1375--1391 | Computational approaches to the study of language emergence can help us understand how natural languages are shaped by cognitive and sociocultural factors. Previous work focused on tasks where agents refer to a single entity. In contrast, we study how agents predicate, that is, how they express that some relation holds between several entities. We introduce a setup where agents talk about a variable number of entities that can be partially observed by the listener. In the presence of a least-effort pressure, they tend to discuss only entities that are not observed by the listener. Thus we can obtain artificial phrases that denote a single entity, as well as artificial sentences that denote several entities. In natural languages, if we ignore the verb, phrases are usually concatenated, either in a specific order or by adding case markers to form sentences. Our setup allows us to quantify how much this holds in emergent languages using a metric we call concatenability. We also measure transitivity, which quantifies the importance of word order. We demonstrate the usefulness of this new setup and metrics for studying factors that influence argument structure. We compare agents having access to input representations structured into pre-segmented objects with properties, versus unstructured representations. Our results indicate that the awareness of object structure yields a more natural sentence organization. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00524 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,552 |
article | lauscher-etal-2022-scientia | Scientia Potentia {E}st{---}{O}n the Role of Knowledge in Computational Argumentation | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.80/ | Lauscher, Anne and Wachsmuth, Henning and Gurevych, Iryna and Glava{\v{s}}, Goran | null | 1392--1422 | Despite extensive research efforts in recent years, computational argumentation (CA) remains one of the most challenging areas of natural language processing. The reason for this is the inherent complexity of the cognitive processes behind human argumentation, which integrate a plethora of different types of knowledge, ranging from topic-specific facts and common sense to rhetorical knowledge. The integration of knowledge from such a wide range in CA requires modeling capabilities far beyond many other natural language understanding tasks. Existing research on mining, assessing, reasoning over, and generating arguments largely acknowledges that much more knowledge is needed to accurately model argumentation computationally. However, a systematic overview of the types of knowledge introduced in existing CA models is missing, hindering targeted progress in the field. Adopting the operational definition of knowledge as any task-relevant normative information not provided as input, the survey paper at hand fills this gap by (1) proposing a taxonomy of types of knowledge required in CA tasks, (2) systematizing the large body of CA work according to the reliance on and exploitation of these knowledge types for the four main research areas in CA, and (3) outlining and discussing directions for future research efforts in CA. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00525 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,553 |
article | sartran-etal-2022-transformer | Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.81/ | Sartran, Laurent and Barrett, Samuel and Kuncoro, Adhiguna and Stanojevi{\'c}, Milo{\v{s}} and Blunsom, Phil and Dyer, Chris | null | 1423--1439 | We introduce Transformer Grammars (TGs), a novel class of Transformer language models that combine (i) the expressive power, scalability, and strong performance of Transformers and (ii) recursive syntactic compositions, which here are implemented through a special attention mask and deterministic transformation of the linearized tree. We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics. Additionally, we find that the recursive syntactic composition bottleneck which represents each sentence as a single vector harms perplexity on document-level language modeling, providing evidence that a different kind of memory mechanism{---}one that is independent of composed syntactic representations{---}plays an important role in current successful models of long text. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00526 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,554 |
article | calabrese-etal-2022-explainable | Explainable Abuse Detection as Intent Classification and Slot Filling | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.82/ | Calabrese, Agostina and Ross, Bj{\"orn and Lapata, Mirella | null | 1440--1454 | To proactively offer social media users a safe online experience, there is a need for systems that can detect harmful posts and promptly alert platform moderators. In order to guarantee the enforcement of a consistent policy, moderators are provided with detailed guidelines. In contrast, most state-of-the-art models learn what abuse is from labeled examples and as a result base their predictions on spurious cues, such as the presence of group identifiers, which can be unreliable. In this work we introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone. We propose a machine-friendly representation of the policy that moderators wish to enforce, by breaking it down into a collection of intents and slots. We collect and annotate a dataset of 3,535 English posts with such slots, and show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions.1 | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00527 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,555 |
article | goldman-tsarfaty-2022-morphology | Morphology Without Borders: Clause-Level Morphology | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.83/ | Goldman, Omer and Tsarfaty, Reut | null | 1455--1472 | Morphological tasks use large multi-lingual datasets that organize words into inflection tables, which then serve as training and evaluation data for various tasks. However, a closer inspection of these data reveals profound cross-linguistic inconsistencies, which arise from the lack of a clear linguistic and operational definition of what is a word, and which severely impair the universality of the derived tasks. To overcome this deficiency, we propose to view morphology as a clause-level phenomenon, rather than word-level. It is anchored in a fixed yet inclusive set of features, that encapsulates all functions realized in a saturated clause. We deliver MightyMorph, a novel dataset for clause-level morphology covering 4 typologically different languages: English, German, Turkish, and Hebrew. We use this dataset to derive 3 clause-level morphological tasks: inflection, reinflection and analysis. Our experiments show that the clause-level tasks are substantially harder than the respective word-level tasks, while having comparable complexity across languages. Furthermore, redefining morphology to the clause-level provides a neat interface with contextualized language models (LMs) and allows assessing the morphological knowledge encoded in these models and their usability for morphological tasks. Taken together, this work opens up new horizons in the study of computational morphology, leaving ample space for studying neural morphology cross-linguistically. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00528 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,556 |
article | dziri-etal-2022-faithdial | {F}aith{D}ial: A Faithful Benchmark for Information-Seeking Dialogue | Roark, Brian and Nenkova, Ani | null | 2022 | Cambridge, MA | MIT Press | https://aclanthology.org/2022.tacl-1.84/ | Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo M. and Reddy, Siva | null | 1473--1490 | The goal of information-seeking dialogue is to respond to seeker queries with natural language utterances that are grounded on knowledge sources. However, dialogue systems often produce unsupported utterances, a phenomenon known as hallucination. To mitigate this behavior, we adopt a data-centric solution and create FaithDial, a new benchmark for hallucination-free dialogues, by editing hallucinated responses in the Wizard of Wikipedia (WoW) benchmark. We observe that FaithDial is more faithful than WoW while also maintaining engaging conversations. We show that FaithDial can serve as training signal for: i) a hallucination critic, which discriminates whether an utterance is faithful or not, and boosts the performance by 12.8 F1 score on the BEGIN benchmark compared to existing datasets for dialogue coherence; ii) high-quality dialogue generation. We benchmark a series of state-of-the-art models and propose an auxiliary contrastive objective that achieves the highest level of faithfulness and abstractiveness based on several automated metrics. Further, we find that the benefits of FaithDial generalize to zero-shot transfer on other datasets, such as CMU-Dog and TopicalChat. Finally, human evaluation reveals that responses generated by models trained on FaithDial are perceived as more interpretable, cooperative, and engaging. | Transactions of the Association for Computational Linguistics | 10 | 10.1162/tacl_a_00529 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,557 |
inproceedings | le-etal-2022-efficient | Efficient Two-Stage Progressive Quantization of {BERT} | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"uckl{\'e, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.2/ | Le, Charles and Ardakani, Arash and Ardakani, Amir and Zhang, Hang and Chen, Yuyan and Clark, James and Meyer, Brett and Gross, Warren | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 1--9 | The success of large BERT models has raised the demand for model compression methods to reduce model size and computational cost. Quantization can reduce the model size and inference latency, making inference more efficient, without changing its stucture, but it comes at the cost of performance degradation. Due to the complex loss landscape of ternarized/binarized BERT, we present an efficient two-stage progressive quantization method in which we fine tune the model with quantized weights and progressively lower its bits, and then we fine tune the model with quantized weights and activations. At the same time, we strategically choose which bitwidth to fine-tune on and to initialize from, and which bitwidth to fine-tune under augmented data to outperform the existing BERT binarization methods without adding an extra module, compressing the binary model 18{\%} more than previous binarization methods or compressing BERT by 31x w.r.t. to the full-precision model. Our method without data augmentation can outperform existing BERT ternarization methods. | null | null | 10.18653/v1/2022.sustainlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,559 |
inproceedings | saeedizade-etal-2022-kgrefiner | {KGR}efiner: Knowledge Graph Refinement for Improving Accuracy of Translational Link Prediction Methods | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"uckl{\'e, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.3/ | Saeedizade, Mohammad Javad and Torabian, Najmeh and Minaei-Bidgoli, Behrouz | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 10--16 | Link Prediction is the task of predicting missing relations between knowledge graph entities (KG). Recent work in link prediction mainly attempted to adapt a model to increase link prediction accuracy by using more layers in neural network architecture, which heavily rely on computational resources. This paper proposes the refinement of knowledge graphs to perform link prediction operations more accurately using relatively fast translational models. Translational link prediction models have significantly less complexity than deep learning approaches; this motivated us to improve their accuracy. Our method uses the ontologies of knowledge graphs to add information as auxiliary nodes to the graph. Then, these auxiliary nodes are connected to ordinary nodes of the KG that contain auxiliary information in their hierarchy. Our experiments show that our method can significantly increase the performance of translational link prediction methods in Hit@10, Mean Rank, and Mean Reciprocal Rank. | null | null | 10.18653/v1/2022.sustainlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,560 |
inproceedings | ceron-etal-2022-algorithmic | Algorithmic Diversity and Tiny Models: Comparing Binary Networks and the Fruit Fly Algorithm on Document Representation Tasks | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"uckl{\'e, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.4/ | Ceron, Tanise and Truong, Nhut and Herbelot, Aurelie | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 17--28 | Neural language models have seen a dramatic increase in size in the last years. While many still advocate that {\textquoteleft}bigger is better', work in model distillation has shown that the number of parameters used by very large networks is actually more than what is required for state-of-the-art performance. This prompts an obvious question: can we build smaller models from scratch, rather than going through the inefficient process of training at scale and subsequently reducing model size. In this paper, we investigate the behaviour of a biologically inspired algorithm, based on the fruit fly`s olfactory system. This algorithm has shown good performance in the past on the task of learning word embeddings. We now put it to the test on the task of semantic hashing. Specifically, we compare the fruit fly to a standard binary network on the task of generating locality-sensitive hashes for text documents, measuring both task performance and energy consumption. Our results indicate that the two algorithms have complementary strengths while showing similar electricity usage. | null | null | 10.18653/v1/2022.sustainlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,561 |
inproceedings | flores-radev-2022-look | Look Ma, Only 400 Samples! Revisiting the Effectiveness of Automatic N-Gram Rule Generation for Spelling Normalization in {F}ilipino | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"uckl{\'e, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.5/ | Flores, Lorenzo Jaime and Radev, Dragomir | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 29--35 | With 84.75 million Filipinos online, the ability for models to process online text is crucial for developing Filipino NLP applications. To this end, spelling correction is a crucial preprocessing step for downstream processing. However, the lack of data prevents the use of language models for this task. In this paper, we propose an N-Gram + Damerau-Levenshtein distance model with automatic rule extraction. We train the model on 300 samples, and show that despite limited training data, it achieves good performance and outperforms other deep learning approaches in terms of accuracy and edit distance. Moreover, the model (1) requires little compute power, (2) trains in little time, thus allowing for retraining, and (3) is easily interpretable, allowing for direct troubleshooting, highlighting the success of traditional approaches over more complex deep learning models in settings where data is unavailable. | null | null | 10.18653/v1/2022.sustainlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,562 |
inproceedings | kim-etal-2022-says | Who Says Elephants Can`t Run: Bringing Large Scale {M}o{E} Models into Cloud Scale Production | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"uckl{\'e, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.6/ | Kim, Young Jin and Henry, Rawn and Fahim, Raffy and Hassan, Hany | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 36--43 | Mixture of Experts (MoE) models with conditional execution of sparsely activated layers has enabled training models with a much larger number of parameters. As a result, these models have achieved significantly better quality on various natural language processing tasks including machine translation. However, it remains challenging to deploy such models in real-life scenarios due to the large memory requirements and inefficient inference. In this work, we introduce a highly efficient inference framework with several optimization approaches to accelerate the computation of sparse models and cut down the memory consumption significantly. While we achieve up to 26x speed-up in terms of throughput, we also reduce the model size almost to one eighth of the original 32-bit float model by quantizing expert weights into 4-bit integers. As a result, we are able to deploy 136x larger models with 27{\%} less cost and significantly better quality with large scale MoE model deployment compared to the existing solutions. This enables a paradigm shift in deploying large scale multilingual MoE transformers models instead of distilling into dozens of smaller models per language or task. | null | null | 10.18653/v1/2022.sustainlp-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,563 |
inproceedings | thorne-2022-data | Data-Efficient Auto-Regressive Document Retrieval for Fact Verification | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"u}ckl{\'e}, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.7/ | Thorne, James | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 44--51 | Document retrieval is a core component of many knowledge-intensive natural language processing task formulations such as fact verification. Sources of textual knowledge such as Wikipedia articles condition the generation of answers from the models. Recent advances in retrieval use sequence-to-sequence models to incrementally predict the title of the appropriate Wikipedia page given an input instance. However, this method requires supervision in the form of human annotation to label which Wikipedia pages contain appropriate context. This paper introduces a distant-supervision method that does not require any annotation to train auto-regressive retrievers that attain competitive R-Precision and Recall in a zero-shot setting. Furthermore, we show that with task-specific supervised fine-tuning, auto-regressive retrieval performance for two Wikipedia-based fact verification tasks can approach or even exceed full supervision using less than $1/4$ of the annotated data. We release all code and models. | null | null | 10.18653/v1/2022.sustainlp-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,564
inproceedings | dossou-etal-2022-afrolm | {A}fro{LM}: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 {A}frican Languages | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"u}ckl{\'e}, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.11/ | Dossou, Bonaventure F. P. and Tonja, Atnafu Lambebo and Yousuf, Oreen and Osei, Salomey and Oppong, Abigail and Shode, Iyanuoluwa and Awoyomi, Oluwabusayo Olufunke and Emezue, Chris | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 52--64 | In recent years, multilingual pre-trained language models have gained prominence due to their remarkable performance on numerous downstream Natural Language Processing (NLP) tasks. However, pre-training these large multilingual language models requires a lot of training data, which is not available for African Languages. Active learning is a semi-supervised learning algorithm, in which a model consistently and dynamically learns to identify the most beneficial samples to train itself on, in order to achieve better optimization and performance on downstream tasks. Furthermore, active learning effectively and practically addresses real-world data scarcity. Despite all its benefits, active learning, in the context of NLP and especially multilingual language models pretraining, has received little consideration. In this paper, we present \textbf{AfroLM}, a multilingual language model pretrained from scratch on 23 African languages (the largest effort to date) using our novel self-active learning framework. Pretrained on a dataset significantly (14x) smaller than existing baselines, \textbf{AfroLM} outperforms many multilingual pretrained language models (AfriBERTa, XLMR-base, mBERT) on various NLP downstream tasks (NER, text classification, and sentiment analysis). Additional out-of-domain sentiment analysis experiments show that \textbf{AfroLM} is able to generalize well across various domains. We release the source code and the datasets used in our framework at \url{https://github.com/bonaventuredossou/MLM_AL}. | null | null | 10.18653/v1/2022.sustainlp-1.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,565
inproceedings | han-etal-2022-towards-fair | Towards Fair Dataset Distillation for Text Classification | Fan, Angela and Gurevych, Iryna and Hou, Yufang and Kozareva, Zornitsa and Luccioni, Sasha and Sadat Moosavi, Nafise and Ravi, Sujith and Kim, Gyuwan and Schwartz, Roy and R{\"u}ckl{\'e}, Andreas | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.sustainlp-1.13/ | Han, Xudong and Shen, Aili and Li, Yitong and Frermann, Lea and Baldwin, Timothy and Cohn, Trevor | Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) | 65--72 | With the growing prevalence of large-scale language models, their energy footprint and potential to learn and amplify historical biases are two pressing challenges. Dataset distillation (DD) {---} a method for reducing the dataset size by learning a small number of synthetic samples which encode the information in the original dataset {---} is one way to reduce the cost of model training; however, its impact on fairness has not been studied. We investigate how DD impacts group bias, with experiments over two language classification tasks, concluding that vanilla DD preserves the bias of the dataset. We then show how existing debiasing methods can be combined with DD to produce models that are fair and accurate, at reduced training cost. | null | null | 10.18653/v1/2022.sustainlp-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,566
inproceedings | kumar-etal-2022-fabkg | {F}ab{KG}: A Knowledge graph of Manufacturing Science domain utilizing structured and unconventional unstructured knowledge source | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.1/ | Kumar, Aman and Bharadwaj, Akshay and Starly, Binil and Lynch, Collin | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 1--8 | As the demands for large-scale information processing have grown, knowledge graph-based approaches have gained prominence for representing general and domain knowledge. The development of such general representations is essential, particularly in domains such as manufacturing which intelligent processes and adaptive education can enhance. Despite the continuous accumulation of text in these domains, the lack of structured data has created information extraction and knowledge transfer barriers. In this paper, we report on work towards developing robust knowledge graphs based upon entity and relation data for both commercial and educational uses. To create the FabKG (Manufacturing knowledge graph), we have utilized textbook index words, research paper keywords, FabNER (manufacturing NER), to extract a sub knowledge base contained within Wikidata. Moreover, we propose a novel crowdsourcing method for KG creation by leveraging student notes, which contain invaluable information but are not captured as meaningful information, excluding their use in personal preparation for learning and written exams. We have created a knowledge graph containing 65000+ triples using all data sources. We have also shown the use case of domain-specific question answering and expression/formula-based question answering for educational purposes. | null | null | 10.18653/v1/2022.suki-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,574 |
inproceedings | chen-etal-2022-modeling-compositionality | Modeling Compositionality with Dependency Graph for Dialogue Generation | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.2/ | Chen, Xiaofeng and Chen, Yirong and Xing, Xiaofen and Xu, Xiangmin and Han, Wenjing and Tie, Qianfeng | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 9--16 | Because of the compositionality of natural language, syntactic structure, which contains information about the relationships between words, is a key factor for semantic understanding. However, it is hard for the widely adopted Transformer to learn syntactic structure effectively in dialogue generation tasks. To explicitly model the compositionality of language in the Transformer block, we restrict the information flow between words by constructing a directed dependency graph and propose Dependency Relation Attention (DRA). Experimental results demonstrate that DRA can further improve the performance of state-of-the-art models for dialogue generation. | null | null | 10.18653/v1/2022.suki-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,575
inproceedings | basu-etal-2022-strategies | Strategies to Improve Few-shot Learning for Intent Classification and Slot-Filling | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.3/ | Basu, Samyadeep and Sharaf, Amr and Ip Kiun Chong, Karine and Fischer, Alex and Rohra, Vishal and Amoake, Michael and El-Hammamy, Hazem and Nosakhare, Ehi and Ramani, Vijay and Han, Benjamin | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 17--25 | Intent classification (IC) and slot filling (SF) are two fundamental tasks in modern Natural Language Understanding (NLU) systems. Collecting and annotating large amounts of data to train deep learning models for such systems are not scalable. This problem can be addressed by learning from few examples using fast supervised meta-learning techniques such as prototypical networks. In this work, we systematically investigate how contrastive learning and data augmentation methods can benefit these existing meta-learning pipelines for jointly modelled IC/SF tasks. Through extensive experiments across standard IC/SF benchmarks (SNIPS and ATIS), we show that our proposed approaches outperform standard meta-learning methods: contrastive losses as a regularizer in conjunction with prototypical networks consistently outperform the existing state-of-the-art for both IC and SF tasks, while data augmentation strategies primarily improve few-shot IC by a significant margin | null | null | 10.18653/v1/2022.suki-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,576 |
inproceedings | noriega-atala-etal-2022-learning | Learning Open Domain Multi-hop Search Using Reinforcement Learning | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.4/ | Noriega-Atala, Enrique and Surdeanu, Mihai and Morrison, Clayton | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 26--35 | We propose a method to teach an automated agent to learn how to search for multi-hop paths of relations between entities in an open domain. The method learns a policy for directing existing information retrieval and machine reading resources to focus on relevant regions of a corpus. The approach formulates the learning problem as a Markov decision process with a state representation that encodes the dynamics of the search process and a reward structure that minimizes the number of documents that must be processed while still finding multi-hop paths. We implement the method in an actor-critic reinforcement learning algorithm and evaluate it on a dataset of search problems derived from a subset of English Wikipedia. The algorithm finds a family of policies that succeeds in extracting the desired information while processing fewer documents compared to several baseline heuristic algorithms. | null | null | 10.18653/v1/2022.suki-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,577 |
inproceedings | wang-etal-2022-table | Table Retrieval May Not Necessitate Table-specific Model Design | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.5/ | Wang, Zhiruo and Jiang, Zhengbao and Nyberg, Eric and Neubig, Graham | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 36--46 | Tables are an important form of structured data for both human and machine readers alike, providing answers to questions that cannot, or cannot easily, be found in texts. Recent work has designed special models and training paradigms for table-related tasks such as table-based question answering and table retrieval. Though effective, they add complexity in both modeling and data acquisition compared to generic text solutions and obscure which elements are truly beneficial. In this work, we focus on the task of table retrieval, and ask: {\textquotedblleft}is table-specific model design necessary for table retrieval, or can a simpler text-based model be effectively used to achieve a similar result?'' First, we perform an analysis on a table-based portion of the Natural Questions dataset (NQ-table), and find that structure plays a negligible role in more than 70{\%} of the cases. Based on this, we experiment with a general Dense Passage Retriever (DPR) based on text and a specialized Dense Table Retriever (DTR) that uses table-specific model designs. We find that DPR performs well without any table-specific design and training, and even achieves superior results compared to DTR when fine-tuned on properly linearized tables. We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases. However, none of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval. | null | null | 10.18653/v1/2022.suki-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,578 |
inproceedings | montella-etal-2022-transfer | Transfer Learning and Masked Generation for Answer Verbalization | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.6/ | Montella, Sebastien and Rojas-Barahona, Lina and Bechet, Frederic and Heinecke, Johannes and Nasr, Alexis | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 47--54 | Structured knowledge has recently emerged as an essential component to support fine-grained Question Answering (QA). In general, QA systems query a Knowledge Base (KB) to detect and extract the raw answers as the final prediction. However, since raw answers lack context, language generation can offer a much more informative and complete response. In this paper, we propose to combine the power of transfer learning and the advantage of entity placeholders to produce high-quality verbalization of extracted answers from a KB. We claim that such an approach is especially well-suited for answer generation. Our experiments show 44.25{\%}, 3.26{\%} and 29.10{\%} relative gain in BLEU over the state-of-the-art on the VQuAnDA, ParaQA and VANiLLa datasets, respectively. We additionally provide minor hallucination corrections in VANiLLa, covering 5{\%} of each of the training and testing sets. We witness a median absolute gain of 0.81 SacreBLEU. This strengthens the importance of data quality when using automated evaluation. | null | null | 10.18653/v1/2022.suki-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,579
inproceedings | mo-etal-2022-knowledge | Knowledge Transfer between Structured and Unstructured Sources for Complex Question Answering | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.7/ | Mo, Lingbo and Wang, Zhen and Zhao, Jie and Sun, Huan | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 55--66 | Multi-hop question answering (QA) combines multiple pieces of evidence to search for the correct answer. Reasoning over a text corpus (TextQA) and/or a knowledge base (KBQA) has been extensively studied and led to distinct system architectures. However, knowledge transfer between such two QA systems has been under-explored. Research questions like what knowledge is transferred or whether the transferred knowledge can help answer over one source using another one, are yet to be answered. In this paper, therefore, we study the knowledge transfer of multi-hop reasoning between structured and unstructured sources. We first propose a unified QA framework named SimultQA to enable knowledge transfer and bridge the distinct supervisions from KB and text sources. Then, we conduct extensive analyses to explore how knowledge is transferred by leveraging the pre-training and fine-tuning paradigm. We focus on the low-resource fine-tuning to show that pre-training SimultQA on one source can substantially improve its performance on the other source. More fine-grained analyses on transfer behaviors reveal the types of transferred knowledge and transfer patterns. We conclude with insights into how to construct better QA datasets and systems to exploit knowledge transfer for future work. | null | null | 10.18653/v1/2022.suki-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,580 |
inproceedings | zhou-etal-2022-hierarchical-control | Hierarchical Control of Situated Agents through Natural Language | Chen, Wenhu and Chen, Xinyun and Chen, Zhiyu and Yao, Ziyu and Yasunaga, Michihiro and Yu, Tao and Zhang, Rui | jul | 2022 | Seattle, USA | Association for Computational Linguistics | https://aclanthology.org/2022.suki-1.8/ | Zhou, Shuyan and Yin, Pengcheng and Neubig, Graham | Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI) | 67--84 | When humans perform a particular task, they do so hierarchically: splitting higher-level tasks into smaller sub-tasks. However, most works on natural language (NL) command of situated agents have treated the procedures to be executed as flat sequences of simple actions, or any hierarchies of procedures have been shallow at best. In this paper, we propose a formalism of procedures as programs, a method for representing hierarchical procedural knowledge for agent command and control aimed at enabling easy application to various scenarios. We further propose a modeling paradigm of hierarchical modular networks, which consist of a planner and reactors that convert NL intents to predictions of executable programs and probe the environment for information necessary to complete the program execution. We instantiate this framework on the IQA and ALFRED datasets for NL instruction following. Our model outperforms reactive baselines by a large margin on both datasets. We also demonstrate that our framework is more data-efficient, and that it allows for fast iterative development. | null | null | 10.18653/v1/2022.suki-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,581 |
inproceedings | sancheti-rudinger-2022-large | What do Large Language Models Learn about Scripts? | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.1/ | Sancheti, Abhilasha and Rudinger, Rachel | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 1--11 | Script Knowledge (Schank and Abelson, 1975) has long been recognized as crucial for language understanding as it can help in filling in unstated information in a narrative. However, such knowledge is expensive to produce manually and difficult to induce from text due to reporting bias (Gordon and Van Durme, 2013). In this work, we are interested in the scientific question of whether explicit script knowledge is present and accessible through pre-trained generative language models (LMs). To this end, we introduce the task of generating full event sequence descriptions (ESDs) given a scenario as a natural language prompt. Through zero-shot probing, we find that generative LMs produce poor ESDs with mostly omitted, irrelevant, repeated or misordered events. To address this, we propose a pipeline-based script induction framework (SIF) which can generate good quality ESDs for unseen scenarios (e.g., bake a cake). SIF is a two-staged framework that fine-tunes LM on a small set of ESD examples in the first stage. In the second stage, ESD generated for an unseen scenario is post-processed using RoBERTa-based models to filter irrelevant events, remove repetitions, and reorder the temporally misordered events. Through automatic and manual evaluations, we demonstrate that SIF yields substantial improvements (1-3 BLEU points) over a fine-tuned LM. However, manual analysis shows that there is great room for improvement, offering a new research direction for inducing script knowledge. | null | null | 10.18653/v1/2022.starsem-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,583 |
inproceedings | betz-richardson-2022-deepa2 | {D}eep{A}2: A Modular Framework for Deep Argument Analysis with Pretrained Neural {T}ext2{T}ext Language Models | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.2/ | Betz, Gregor and Richardson, Kyle | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 12--27 | In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst {--} a T5 model [Raffel et al. 2020] set up and trained within DeepA2 {--} reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank [Dalvi et al. 2021]. Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model`s uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence. | null | null | 10.18653/v1/2022.starsem-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,584 |
inproceedings | slobodkin-etal-2022-semantics | Semantics-aware Attention Improves Neural Machine Translation | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.3/ | Slobodkin, Aviv and Choshen, Leshem and Abend, Omri | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 28--43 | The integration of syntactic structures into Transformer machine translation has shown positive results, but to our knowledge, no work has attempted to do so with semantic structures. In this work we propose two novel parameter-free methods for injecting semantic information into Transformers, both rely on semantics-aware masking of (some of) the attention heads. One such method operates on the encoder, through a Scene-Aware Self-Attention (SASA) head. Another on the decoder, through a Scene-Aware Cross-Attention (SACrA) head. We show a consistent improvement over the vanilla Transformer and syntax-aware models for four language pairs. We further show an additional gain when using both semantic and syntactic structures in some language pairs. | null | null | 10.18653/v1/2022.starsem-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,585 |
inproceedings | weissenhorn-etal-2022-compositional | Compositional generalization with a broad-coverage semantic parser | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.4/ | Wei{\ss}enhorn, Pia and Donatelli, Lucia and Koller, Alexander | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 44--54 | We show how the AM parser, a compositional semantic parser (Groschwitz et al., 2018) can solve compositional generalization on the COGS dataset. It is the first semantic parser that achieves high accuracy on both naturally occurring language and the synthetic COGS dataset. We discuss implications for corpus and model design for learning human-like generalization. Our results suggest that compositional generalization can be best achieved by building compositionality into semantic parsers. | null | null | 10.18653/v1/2022.starsem-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,586 |
inproceedings | ryb-etal-2022-analog | {A}na{L}og: Testing Analytical and Deductive Logic Learnability in Language Models | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.5/ | Ryb, Samuel and Giulianelli, Mario and Sinclair, Arabella and Fern{\'a}ndez, Raquel | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 55--68 | We investigate the extent to which pre-trained language models acquire analytical and deductive logical reasoning capabilities as a side effect of learning word prediction. We present AnaLog, a natural language inference task designed to probe models for these capabilities, controlling for different invalid heuristics the models may adopt instead of learning the desired generalisations. We test four language models on AnaLog, finding that they have all learned, to a different extent, to encode information that is predictive of entailment beyond shallow heuristics such as lexical overlap and grammaticality. We closely analyse the best performing language model and show that while it performs more consistently than other language models across logical connectives and reasoning domains, it is still sensitive to lexical and syntactic variations in the realisation of logical statements. | null | null | 10.18653/v1/2022.starsem-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,587
inproceedings | yu-etal-2022-pairwise | Pairwise Representation Learning for Event Coreference | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.6/ | Yu, Xiaodong and Yin, Wenpeng and Roth, Dan | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 69--78 | Natural Language Processing tasks such as resolving the coreference of events require understanding the relations between two text snippets. These tasks are typically formulated as (binary) classification problems over independently induced representations of the text snippets. In this work, we develop a Pairwise Representation Learning (PairwiseRL) scheme for the event mention pairs, in which we jointly encode a pair of text snippets so that the representation of each mention in the pair is induced in the context of the other one. Furthermore, our representation supports a finer, structured representation of the text snippet to facilitate encoding events and their arguments. We show that PairwiseRL, despite its simplicity, outperforms the prior state-of-the-art event coreference systems on both cross-document and within-document event coreference benchmarks. We also conduct in-depth analysis in terms of the improvement and the limitation of pairwise representation so as to provide insights for future work. | null | null | 10.18653/v1/2022.starsem-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,588 |
inproceedings | stolfo-etal-2022-simple | A Simple Unsupervised Approach for Coreference Resolution using Rule-based Weak Supervision | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.7/ | Stolfo, Alessandro and Tanner, Chris and Gupta, Vikram and Sachan, Mrinmaya | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 79--88 | Labeled data for the task of Coreference Resolution is a scarce resource, requiring significant human effort. While state-of-the-art coreference models rely on such data, we propose an approach that leverages an end-to-end neural model in settings where labeled data is unavailable. Specifically, using weak supervision, we transfer the linguistic knowledge encoded by Stanford's rule-based coreference system to the end-to-end model, which jointly learns rich, contextualized span representations and coreference chains. Our experiments on the English OntoNotes corpus demonstrate that our approach effectively benefits from the noisy coreference supervision, producing an improvement over Stanford's rule-based system (+3.7 F1) and outperforming the previous best unsupervised model (+0.9 F1). Additionally, we validate the efficacy of our method on two other datasets: PreCo and Litbank (+2.5 and +5 F1 on Stanford's system, respectively). | null | null | 10.18653/v1/2022.starsem-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,589
inproceedings | espinosa-anke-etal-2022-multilingual | Multilingual Extraction and Categorization of Lexical Collocations with Graph-aware Transformers | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.8/ | Espinosa Anke, Luis and Shvets, Alexander and Mohammadshahi, Alireza and Henderson, James and Wanner, Leo | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 89--100 | Recognizing and categorizing lexical collocations in context is useful for language learning, dictionary compilation and downstream NLP. However, it is a challenging task due to the varying degrees of frozenness lexical collocations exhibit. In this paper, we put forward a sequence tagging BERT-based model enhanced with a graph-aware transformer architecture, which we evaluate on the task of collocation recognition in context. Our results suggest that explicitly encoding syntactic dependencies in the model architecture is helpful, and provide insights on differences in collocation typification in English, Spanish and French. | null | null | 10.18653/v1/2022.starsem-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,590 |
inproceedings | tamari-etal-2022-dyna | {D}yna-b{A}b{I}: unlocking b{A}b{I}`s potential with dynamic synthetic benchmarking | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.9/ | Tamari, Ronen and Richardson, Kyle and Kahlon, Noam and Sar-shalom, Aviad and Liu, Nelson F. and Tsarfaty, Reut and Shahaf, Dafna | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 101--122 | While neural language models often perform surprisingly well on natural language understanding (NLU) tasks, their strengths and limitations remain poorly understood. Controlled synthetic tasks are thus an increasingly important resource for diagnosing model behavior. In this work we focus on story understanding, a core competency for NLU systems. However, the main synthetic resource for story understanding, the bAbI benchmark, lacks such a systematic mechanism for controllable task generation. We develop Dyna-bAbI, a dynamic framework providing fine-grained control over task generation in bAbI. We demonstrate our ideas by constructing three new tasks requiring compositional generalization, an important evaluation setting absent from the original benchmark. We tested both special-purpose models developed for bAbI as well as state-of-the-art pre-trained methods, and found that while both approaches solve the original tasks (99{\%} accuracy), neither approach succeeded in the compositional generalization setting, indicating the limitations of the original training data. We explored ways to augment the original data, and found that though diversifying training data was far more useful than simply increasing dataset size, it was still insufficient for driving robust compositional generalization (with 70{\%} accuracy for complex compositions). Our results underscore the importance of highly controllable task generators for creating robust NLU systems through a virtuous cycle of model and data development. | null | null | 10.18653/v1/2022.starsem-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,591 |
inproceedings | soper-koenig-2022-polysemy | When Polysemy Matters: Modeling Semantic Categorization with Word Embeddings | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.10/ | Soper, Elizabeth and Koenig, Jean-pierre | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 123--131 | Recent work using word embeddings to model semantic categorization have indicated that static models outperform the more recent contextual class of models (Majewska et al, 2021). In this paper, we consider polysemy as a possible confounding factor, comparing sense-level embeddings with previously studied static embeddings on both coarse- and fine-grained categorization tasks. We find that the effect of polysemy depends on how one defines semantic categorization; while sense-level embeddings dramatically outperform static embeddings in predicting coarse-grained categories derived from a word sorting task, they perform approximately equally in predicting fine-grained categories derived from context-free similarity judgments. Our findings highlight the different processes underlying human behavior on different types of semantic tasks. | null | null | 10.18653/v1/2022.starsem-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,592 |
inproceedings | pouran-ben-veyseh-nguyen-2022-word | Word-Label Alignment for Event Detection: A New Perspective via Optimal Transport | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.11/ | Pouran Ben Veyseh, Amir and Nguyen, Thien | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 132--138 | Event Detection (ED) aims to identify mentions/triggers of real world events in text. In the literature, this task is modeled as a sequence-labeling or word-prediction problem. In this work, we present a novel formulation in which ED is modeled as a word-label alignment task. In particular, given the words in a sentence and possible event types, the objective is to infer an alignment matrix in which event trigger words are aligned with the most likely event types. Moreover, we show that this new perspective facilitates the incorporation of word-label alignment biases to improve alignment matrix for ED. Novel alignment biases and Optimal Transport are introduced to solve our alignment problem for ED. We conduct experiments on a benchmark dataset to demonstrate the effectiveness of the proposed model for ED. | null | null | 10.18653/v1/2022.starsem-1.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,593 |
inproceedings | tsukagoshi-etal-2022-comparison | Comparison and Combination of Sentence Embeddings Derived from Different Supervision Signals | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.12/ | Tsukagoshi, Hayato and Sasano, Ryohei and Takeda, Koichi | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 139--150 | There have been many successful applications of sentence embedding methods. However, it has not been well understood what properties are captured in the resulting sentence embeddings depending on the supervision signals. In this paper, we focus on two types of sentence embedding methods with similar architectures and tasks: one fine-tunes pre-trained language models on the natural language inference task, and the other fine-tunes pre-trained language models on word prediction task from its definition sentence, and investigate their properties. Specifically, we compare their performances on semantic textual similarity (STS) tasks using STS datasets partitioned from two perspectives: 1) sentence source and 2) superficial similarity of the sentence pairs, and compare their performances on the downstream and probing tasks. Furthermore, we attempt to combine the two methods and demonstrate that combining the two methods yields substantially better performance than the respective methods on unsupervised STS tasks and downstream tasks. | null | null | 10.18653/v1/2022.starsem-1.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,594 |
inproceedings | jain-espinosa-anke-2022-distilling | Distilling Hypernymy Relations from Language Models: On the Effectiveness of Zero-Shot Taxonomy Induction | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.13/ | Jain, Devansh and Espinosa Anke, Luis | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 151--156 | In this paper, we analyze zero-shot taxonomy learning methods which are based on distilling knowledge from language models via prompting and sentence scoring. We show that, despite their simplicity, these methods outperform some supervised strategies and are competitive with the current state-of-the-art under adequate conditions. We also show that statistical and linguistic properties of prompts dictate downstream performance. | null | null | 10.18653/v1/2022.starsem-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,595 |
inproceedings | zeidler-etal-2022-dynamic | A Dynamic, Interpreted {C}heck{L}ist for Meaning-oriented {NLG} Metric Evaluation {--} through the Lens of Semantic Similarity Rating | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.14/ | Zeidler, Laura and Opitz, Juri and Frank, Anette | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 157--172 | Evaluating the quality of generated text is difficult, since traditional NLG evaluation metrics, focusing more on surface form than meaning, often fail to assign appropriate scores. This is especially problematic for AMR-to-text evaluation, given the abstract nature of AMR. Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning by developing a dynamic CheckList for NLG metrics that is interpreted by being organized around meaning-relevant linguistic phenomena. Each test instance consists of a pair of sentences with their AMR graphs and a human-produced textual semantic similarity or relatedness score. Our CheckList facilitates comparative evaluation of metrics and reveals strengths and weaknesses of novel and traditional metrics. We demonstrate the usefulness of CheckList by designing a new metric GraCo that computes lexical cohesion graphs over AMR concepts. Our analysis suggests that GraCo presents an interesting NLG metric worth future investigation and that meaning-oriented NLG metrics can profit from graph-based metric components using AMR. | null | null | 10.18653/v1/2022.starsem-1.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,596
inproceedings | anderson-camacho-collados-2022-assessing | Assessing the Limits of the Distributional Hypothesis in Semantic Spaces: Trait-based Relational Knowledge and the Impact of Co-occurrences | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.15/ | Anderson, Mark and Camacho-Collados, Jose | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 173--185 | The increase in performance in NLP due to the prevalence of distributional models and deep learning has brought with it a reciprocal decrease in interpretability. This has spurred a focus on what neural networks learn about natural language with less of a focus on how. Some work has focused on the data used to develop data-driven models, but typically this line of work aims to highlight issues with the data, e.g. highlighting and offsetting harmful biases. This work contributes to the relatively untrodden path of what is required in data for models to capture meaningful representations of natural language. This entails evaluating how well English and Spanish semantic spaces capture a particular type of relational knowledge, namely the traits associated with concepts (e.g. bananas-yellow), and exploring the role of co-occurrences in this context. | null | null | 10.18653/v1/2022.starsem-1.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,597
inproceedings | asael-etal-2022-generative | A Generative Approach for Mitigating Structural Biases in Natural Language Inference | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.16/ | Asael, Dimion and Ziegler, Zachary and Belinkov, Yonatan | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 186--199 | Many natural language inference (NLI) datasets contain biases that allow models to perform well by only using a biased subset of the input, without considering the remainder features. For instance, models are able to classify samples by only using the hypothesis, without learning the true relationship between it and the premise. These structural biases lead discriminative models to learn unintended superficial features and generalize poorly out of the training distribution. In this work, we reformulate the NLI task as a generative task, where a model is conditioned on the biased subset of the input and the label and generates the remaining subset of the input. We show that by imposing a uniform prior, we obtain a provably unbiased model. Through synthetic experiments, we find that this approach is highly robust to large amounts of bias. We then demonstrate empirically on two types of natural bias that this approach leads to fully unbiased models in practice. However, we find that generative models are difficult to train and generally perform worse than discriminative baselines. We highlight the difficulty of the generative modeling task in the context of NLI as a cause for this worse performance. Finally, by fine-tuning the generative model with a discriminative objective, we reduce the performance gap between the generative model and the discriminative baseline, while allowing for a small amount of bias. | null | null | 10.18653/v1/2022.starsem-1.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,598 |
inproceedings | locatelli-quattoni-2022-measuring | Measuring Alignment Bias in Neural Seq2seq Semantic Parsers | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.17/ | Locatelli, Davide and Quattoni, Ariadna | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 200--207 | Prior to deep learning the semantic parsing community has been interested in understanding and modeling the range of possible word alignments between natural language sentences and their corresponding meaning representations. Sequence-to-sequence models changed the research landscape suggesting that we no longer need to worry about alignments since they can be learned automatically by means of an attention mechanism. More recently, researchers have started to question such premise. In this work we investigate whether seq2seq models can handle both simple and complex alignments. To answer this question we augment the popular Geo semantic parsing dataset with alignment annotations and create Geo-Aligned. We then study the performance of standard seq2seq models on the examples that can be aligned monotonically versus examples that require more complex alignments. Our empirical study shows that performance is significantly better over monotonic alignments. | null | null | 10.18653/v1/2022.starsem-1.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,599 |
inproceedings | blair-stanek-van-durme-2022-improved | Improved Induction of Narrative Chains via Cross-Document Relations | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.18/ | Blair-Stanek, Andrew and Van Durme, Benjamin | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 208--212 | The standard approach for inducing narrative chains considers statistics gathered per individual document. We consider whether statistics gathered using cross-document relations can lead to improved chain induction. Our study is motivated by legal narratives, where cases typically cite thematically similar cases. We consider four novel variations on pointwise mutual information (PMI), each accounting for cross-document relations in a different way. One proposed PMI variation performs 58{\%} better relative to standard PMI on recall@50 and induces qualitatively better narrative chains. | null | null | 10.18653/v1/2022.starsem-1.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,600
inproceedings | shen-evang-2022-drs | {DRS} Parsing as Sequence Labeling | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.19/ | Shen, Minxing and Evang, Kilian | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 213--225 | We present the first fully trainable semantic parser for English, German, Italian, and Dutch discourse representation structures (DRSs) that is competitive in accuracy with recent sequence-to-sequence models and at the same time \textit{compositional} in the sense that the output maps each token to one of a finite set of meaning \textit{fragments}, and the meaning of the utterance is a function of the meanings of its parts. We argue that this property makes the system more transparent and more useful for human-in-the-loop annotation. We achieve this simply by casting DRS parsing as a sequence labeling task, where tokens are labeled with both fragments (lists of abstracted clauses with relative referent indices indicating unification) and \textit{symbols} like word senses or names. We give a comprehensive error analysis that highlights areas for future work. | null | null | 10.18653/v1/2022.starsem-1.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,601 |
inproceedings | talman-etal-2022-data | How Does Data Corruption Affect Natural Language Understanding Models? A Study on {GLUE} datasets | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.20/ | Talman, Aarne and Apidianaki, Marianna and Chatzikyriakidis, Stergios and Tiedemann, J{\"o}rg | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 226--233 | A central question in natural language understanding (NLU) research is whether high performance demonstrates the models' strong reasoning capabilities. We present an extensive series of controlled experiments where pre-trained language models are exposed to data that have undergone specific corruption transformations. These involve removing instances of specific word classes and often lead to non-sensical sentences. Our results show that performance remains high on most GLUE tasks when the models are fine-tuned or tested on corrupted data, suggesting that they leverage other cues for prediction even in non-sensical contexts. Our proposed data transformations can be used to assess the extent to which a specific dataset constitutes a proper testbed for evaluating models' language understanding capabilities. | null | null | 10.18653/v1/2022.starsem-1.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,602
inproceedings | takahashi-etal-2022-leveraging | Leveraging Three Types of Embeddings from Masked Language Models in Idiom Token Classification | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.21/ | Takahashi, Ryosuke and Sasano, Ryohei and Takeda, Koichi | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 234--239 | Many linguistic expressions have idiomatic and literal interpretations, and the automatic distinction of these two interpretations has been studied for decades. Recent research has shown that contextualized word embeddings derived from masked language models (MLMs) can give promising results for idiom token classification. This indicates that contextualized word embedding alone contains information about whether the word is being used in a literal sense or not. However, we believe that more types of information can be derived from MLMs and that leveraging such information can improve idiom token classification. In this paper, we leverage three types of embeddings from MLMs; uncontextualized token embeddings and masked token embeddings in addition to the standard contextualized word embeddings and show that the newly added embeddings significantly improve idiom token classification for both English and Japanese datasets. | null | null | 10.18653/v1/2022.starsem-1.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,603 |
inproceedings | gao-etal-2022-makes | {\textquotedblleft}What makes a question inquisitive?{\textquotedblright} A Study on Type-Controlled Inquisitive Question Generation | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.22/ | Gao, Lingyu and Ghosh, Debanjan and Gimpel, Kevin | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 240--257 | We propose a type-controlled framework for inquisitive question generation. We annotate an inquisitive question dataset with question types, train question type classifiers, and finetune models for type-controlled question generation. Empirical results demonstrate that we can generate a variety of questions that adhere to specific types while drawing from the source texts. We also investigate strategies for selecting a single question from a generated set, considering both an informative vs. inquisitive question classifier and a pairwise ranker trained from a small set of expert annotations. Question selection using the pairwise ranker yields strong results in automatic and manual evaluation. Our human evaluation assesses multiple aspects of the generated questions, finding that the ranker chooses questions with the best syntax (4.59), semantics (4.37), and inquisitiveness (3.92) on a scale of 1-5, even rivaling the performance of human-written questions. | null | null | 10.18653/v1/2022.starsem-1.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,604 |
inproceedings | merullo-etal-2022-pretraining | Pretraining on Interactions for Learning Grounded Affordance Representations | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.23/ | Merullo, Jack and Ebert, Dylan and Eickhoff, Carsten and Pavlick, Ellie | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 258--277 | Lexical semantics and cognitive science point to affordances (i.e. the actions that objects support) as critical for understanding and representing nouns and verbs. However, study of these semantic features has not yet been integrated with the {\textquotedblleft}foundation{\textquotedblright} models that currently dominate language representation research. We hypothesize that predictive modeling of object state over time will result in representations that encode object affordance information {\textquotedblleft}for free{\textquotedblright}. We train a neural network to predict objects' trajectories in a simulated interaction and show that our network's latent representations differentiate between both observed and unobserved affordances. We find that models trained using 3D simulations outperform conventional 2D computer vision models trained on a similar task, and, on initial inspection, that differences between concepts correspond to expected features (e.g., roll entails rotation). Our results suggest a way in which modern deep learning approaches to grounded language learning can be integrated with traditional formal semantic notions of lexical representations. | null | null | 10.18653/v1/2022.starsem-1.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,605
inproceedings | pradhan-etal-2022-propbank | {P}rop{B}ank Comes of {A}ge{---}{L}arger, Smarter, and more Diverse | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.24/ | Pradhan, Sameer and Bonn, Julia and Myers, Skatje and Conger, Kathryn and O{'}gorman, Tim and Gung, James and Wright-bettner, Kristin and Palmer, Martha | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 278--288 | This paper describes the evolution of the PropBank approach to semantic role labeling over the last two decades. During this time the PropBank frame files have been expanded to include non-verbal predicates such as adjectives, prepositions and multi-word expressions. The number of domains, genres and languages that have been PropBanked has also expanded greatly, creating an opportunity for much more challenging and robust testing of the generalization capabilities of PropBank semantic role labeling systems. We also describe the substantial effort that has gone into ensuring the consistency and reliability of the various annotated datasets and resources, to better support the training and evaluation of such systems | null | null | 10.18653/v1/2022.starsem-1.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,606 |
inproceedings | enzo-etal-2022-speech | Speech acts and Communicative Intentions for Urgency Detection | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.25/ | Enzo, Laurenti and Nils, Bourgon and Benamara, Farah and Alda, Mari and Moriceau, V{\'e}ronique and Camille, Courgeon | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 289--298 | Recognizing speech acts (SA) is crucial for capturing meaning beyond what is said, making communicative intentions particularly relevant to identify urgent messages. This paper attempts to measure for the first time the impact of SA on urgency detection during crises in tweets. We propose a new dataset annotated for both urgency and SA, and develop several deep learning architectures to inject SA into urgency detection while ensuring models' generalisability. Our results show that taking speech acts into account in tweet analysis improves information type detection in an out-of-type configuration where models are evaluated on unseen event types during training. These results are encouraging and constitute a first step towards SA-aware disaster management in social media. | null | null | 10.18653/v1/2022.starsem-1.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,607
inproceedings | piccirilli-schulte-im-walde-2022-drives | What Drives the Use of Metaphorical Language? Negative Insights from Abstractness, Affect, Discourse Coherence and Contextualized Word Representations | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.26/ | Piccirilli, Prisca and Schulte Im Walde, Sabine | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 299--310 | Given a specific discourse, which discourse properties trigger the use of metaphorical language, rather than using literal alternatives? For example, what drives people to say grasp the meaning rather than understand the meaning within a specific context? Many NLP approaches to metaphorical language rely on cognitive and (psycho-)linguistic insights and have successfully defined models of discourse coherence, abstractness and affect. In this work, we build five simple models relying on established cognitive and linguistic properties -- frequency, abstractness, affect, discourse coherence and contextualized word representations -- to predict the use of a metaphorical vs. synonymous literal expression in context. By comparing the models' outputs to human judgments, our study indicates that our selected properties are not sufficient to systematically explain metaphorical vs. literal language choices. | null | null | 10.18653/v1/2022.starsem-1.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,608
inproceedings | wu-huang-2022-unsupervised | Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.27/ | Wu, Yuexin and Huang, Xiaolei | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 311--322 | Class imbalance naturally exists when label distributions are not aligned across source and target domains. However, existing state-of-the-art UDA models learn domain-invariant representations across domains and evaluate primarily on class-balanced data. In this work, we propose an unsupervised domain adaptation approach via reinforcement learning that jointly leverages feature variants and imbalanced labels across domains. We experiment with the text classification task for its easily accessible datasets and compare the proposed method with five baselines. Experiments on three datasets prove that our proposed method can effectively learn robust domain-invariant representations and successfully adapt text classifiers on imbalanced classes over domains. | null | null | 10.18653/v1/2022.starsem-1.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,609 |
inproceedings | man-etal-2022-event | Event Causality Identification via Generation of Important Context Words | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.28/ | Man, Hieu and Nguyen, Minh and Nguyen, Thien | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 323--330 | An important problem of Information Extraction involves Event Causality Identification (ECI) that seeks to identify causal relation between pairs of event mentions. Prior models for ECI have mainly solved the problem using the classification framework that does not explore prediction/generation of important context words from input sentences for causal recognition. In this work, we consider the words along the dependency path between the two event mentions in the dependency tree as the important context words for ECI. We introduce dependency path generation as a complementary task for ECI, which can be solved jointly with causal label prediction to improve the performance. To facilitate the multi-task learning, we cast ECI into a generation problem that aims to generate both causal relation and dependency path words from input sentence. In addition, we propose to use the REINFORCE algorithm to train our generative model where novel reward functions are designed to capture both causal prediction accuracy and generation quality. The experiments on two benchmark datasets demonstrate state-of-the-art performance of the proposed model for ECI. | null | null | 10.18653/v1/2022.starsem-1.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,610 |
inproceedings | qi-etal-2022-capturing | Capturing the Content of a Document through Complex Event Identification | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.29/ | Qi, Zheng and Sulem, Elior and Wang, Haoyu and Yu, Xiaodong and Roth, Dan | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 331--340 | Granular events, instantiated in a document by predicates, can usually be grouped into more general events, called complex events. Together, they capture the major content of the document. Recent work grouped granular events by defining event regions, filtering out sentences that are irrelevant to the main content. However, this approach assumes that a given complex event is always described in consecutive sentences, which does not always hold in practice. In this paper, we introduce the task of complex event identification. We address this task as a pipeline, first predicting whether two granular events mentioned in the text belong to the same complex event, independently of their position in the text, and then using this to cluster them into complex events. Due to the difficulty of predicting whether two granular events belong to the same complex event in isolation, we propose a context-augmented representation learning approach CONTEXTRL that adds additional context to better model the pairwise relation between granular events. We show that our approach outperforms strong baselines on the complex event identification task and further present a promising case study exploring the effectiveness of using complex events as input for document-level argument extraction. | null | null | 10.18653/v1/2022.starsem-1.29 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,611 |
inproceedings | xu-choi-2022-online | Online Coreference Resolution for Dialogue Processing: Improving Mention-Linking on Real-Time Conversations | Nastase, Vivi and Pavlick, Ellie and Pilehvar, Mohammad Taher and Camacho-Collados, Jose and Raganato, Alessandro | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.starsem-1.30/ | Xu, Liyan and Choi, Jinho D. | Proceedings of the 11th Joint Conference on Lexical and Computational Semantics | 341--347 | This paper suggests a direction of coreference resolution for online decoding on actively generated input such as dialogue, where the model accepts an utterance and its past context, then finds mentions in the current utterance as well as their referents, upon each dialogue turn. A baseline and four incremental updated models adapted from the mention linking paradigm are proposed for this new setting, which address different aspects including the singletons, speaker-grounded encoding and cross-turn mention contextualization. Our approach is assessed on three datasets: Friends, OntoNotes, and BOLT. Results show that each aspect brings out steady improvement, and our best models outperform the baseline by over 10{\%}, presenting an effective system for this setting. Further analysis highlights the task characteristics, such as the significance of addressing the mention recall. | null | null | 10.18653/v1/2022.starsem-1.30 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,612 |
inproceedings | kando-etal-2022-multilingual | Multilingual Syntax-aware Language Modeling through Dependency Tree Conversion | Vlachos, Andreas and Agrawal, Priyanka and Martins, Andr{\'e} and Lampouras, Gerasimos and Lyu, Chunchuan | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.spnlp-1.1/ | Kando, Shunsuke and Noji, Hiroshi and Miyao, Yusuke | Proceedings of the Sixth Workshop on Structured Prediction for NLP | 1--10 | Incorporating stronger syntactic biases into neural language models (LMs) is a long-standing goal, but research in this area often focuses on modeling English text, where constituent treebanks are readily available. Extending constituent tree-based LMs to the multilingual setting, where dependency treebanks are more common, is possible via dependency-to-constituency conversion methods. However, this raises the question of which tree formats are best for learning the model, and for which languages. We investigate this question by training recurrent neural network grammars (RNNGs) using various conversion methods, and evaluating them empirically in a multilingual setting. We examine the effect on LM performance across nine conversion methods and five languages through seven types of syntactic tests. On average, the performance of our best model represents a 19 {\%} increase in accuracy over the worst choice across all languages. Our best model shows the advantage over sequential/overparameterized LMs, suggesting the positive effect of syntax injection in a multilingual setting. Our experiments highlight the importance of choosing the right tree formalism, and provide insights into making an informed decision. | null | null | 10.18653/v1/2022.spnlp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,614 |
inproceedings | ma-etal-2022-joint | Joint Entity and Relation Extraction Based on Table Labeling Using Convolutional Neural Networks | Vlachos, Andreas and Agrawal, Priyanka and Martins, Andr{\'e} and Lampouras, Gerasimos and Lyu, Chunchuan | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.spnlp-1.2/ | Ma, Youmi and Hiraoka, Tatsuya and Okazaki, Naoaki | Proceedings of the Sixth Workshop on Structured Prediction for NLP | 11--21 | This study introduces a novel approach to the joint extraction of entities and relations by stacking convolutional neural networks (CNNs) on pretrained language models. We adopt table representations to model the entities and relations, casting the entity and relation extraction as a table-labeling problem. Regarding each table as an image and each cell in a table as an image pixel, we apply two-dimensional CNNs to the tables to capture local dependencies and predict the cell labels. The experimental results showed that the performance of the proposed method is comparable to those of current state-of-art systems on the CoNLL04, ACE05, and ADE datasets. Even when freezing pretrained language model parameters, the proposed method showed a stable performance, whereas the compared methods suffered from significant decreases in performance. This observation indicates that the parameters of the pretrained encoder may incorporate dependencies among the entity and relation labels during fine-tuning. | null | null | 10.18653/v1/2022.spnlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,615 |
inproceedings | fu-etal-2022-tempcaps | {T}emp{C}aps: A Capsule Network-based Embedding Model for Temporal Knowledge Graph Completion | Vlachos, Andreas and Agrawal, Priyanka and Martins, Andr{\'e} and Lampouras, Gerasimos and Lyu, Chunchuan | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.spnlp-1.3/ | Fu, Guirong and Meng, Zhao and Han, Zhen and Ding, Zifeng and Ma, Yunpu and Schubert, Matthias and Tresp, Volker and Wattenhofer, Roger | Proceedings of the Sixth Workshop on Structured Prediction for NLP | 22--31 | Temporal knowledge graphs store the dynamics of entities and relations during a time period. However, typical temporal knowledge graphs often suffer from incomplete dynamics with missing facts in real-world scenarios. Hence, modeling temporal knowledge graphs to complete the missing facts is important. In this paper, we tackle the temporal knowledge graph completion task by proposing \textbf{TempCaps}, which is a \textbf{Caps}ule network-based embedding model for \textbf{Temp}oral knowledge graph completion. TempCaps models temporal knowledge graphs by introducing a novel dynamic routing aggregator inspired by Capsule Networks. Specifically, TempCaps builds entity embeddings by dynamically routing retrieved temporal relation and neighbor information. Experimental results demonstrate that TempCaps reaches state-of-the-art performance for temporal knowledge graph completion. Additional analysis also shows that TempCaps is efficient. | null | null | 10.18653/v1/2022.spnlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,616 |
inproceedings | daza-etal-2022-slotgan | {S}lot{GAN}: Detecting Mentions in Text via Adversarial Distant Learning | Vlachos, Andreas and Agrawal, Priyanka and Martins, Andr{\'e} and Lampouras, Gerasimos and Lyu, Chunchuan | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.spnlp-1.4/ | Daza, Daniel and Cochez, Michael and Groth, Paul | Proceedings of the Sixth Workshop on Structured Prediction for NLP | 32--39 | We present SlotGAN, a framework for training a mention detection model that only requires unlabeled text and a gazetteer. It consists of a generator trained to extract spans from an input sentence, and a discriminator trained to determine whether a span comes from the generator, or from the gazetteer. We evaluate the method on English newswire data and compare it against supervised, weakly-supervised, and unsupervised methods. We find that the performance of the method is lower than these baselines, because it tends to generate more and longer spans, and in some cases it relies only on capitalization. In other cases, it generates spans that are valid but differ from the benchmark. When evaluated with metrics based on overlap, we find that SlotGAN performs within 95{\%} of the precision of a supervised method, and 84{\%} of its recall. Our results suggest that the model can generate spans that overlap well, but an additional filtering mechanism is required. | null | null | 10.18653/v1/2022.spnlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,617 |
inproceedings | chiu-etal-2022-joint | A Joint Learning Approach for Semi-supervised Neural Topic Modeling | Vlachos, Andreas and Agrawal, Priyanka and Martins, Andr{\'e} and Lampouras, Gerasimos and Lyu, Chunchuan | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.spnlp-1.5/ | Chiu, Jeffrey and Mittal, Rajat and Tumma, Neehal and Sharma, Abhishek and Doshi-Velez, Finale | Proceedings of the Sixth Workshop on Structured Prediction for NLP | 40--51 | Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semi-supervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled data regimes and for datasets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies. | null | null | 10.18653/v1/2022.spnlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,618
inproceedings | libovicky-fraser-2022-neural | Neural String Edit Distance | Vlachos, Andreas and Agrawal, Priyanka and Martins, Andr{\'e} and Lampouras, Gerasimos and Lyu, Chunchuan | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.spnlp-1.6/ | Libovick{\'y}, Jind{\v{r}}ich and Fraser, Alexander | Proceedings of the Sixth Workshop on Structured Prediction for NLP | 52--66 | We propose the neural string edit distance model for string-pair matching and string transduction based on learnable string edit distance. We modify the original expectation-maximization learned edit distance algorithm into a differentiable loss function, allowing us to integrate it into a neural network providing a contextual representation of the input. We evaluate on cognate detection, transliteration, and grapheme-to-phoneme conversion, and show that we can trade off between performance and interpretability in a single framework. Using contextual representations, which are difficult to interpret, we match the performance of state-of-the-art string-pair matching models. Using static embeddings and a slightly different loss function, we force interpretability, at the expense of an accuracy drop. | null | null | 10.18653/v1/2022.spnlp-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,619 |
inproceedings | treviso-etal-2022-predicting | Predicting Attention Sparsity in Transformers | Vlachos, Andreas and Agrawal, Priyanka and Martins, Andr{\'e} and Lampouras, Gerasimos and Lyu, Chunchuan | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.spnlp-1.7/ | Treviso, Marcos and G{\'o}is, Ant{\'o}nio and Fernandes, Patrick and Fonseca, Erick and Martins, Andre | Proceedings of the Sixth Workshop on Structured Prediction for NLP | 67--81 | Transformers' quadratic complexity with respect to the input sequence length has motivated a body of work on efficient sparse approximations to softmax. An alternative path, used by entmax transformers, consists of having built-in exact sparse attention; however this approach still requires quadratic computation. In this paper, we propose Sparsefinder, a simple model trained to identify the sparsity pattern of entmax attention before computing it. We experiment with three variants of our method, based on distances, quantization, and clustering, on two tasks: machine translation (attention in the decoder) and masked language modeling (encoder-only). Our work provides a new angle to study model efficiency by doing extensive analysis of the tradeoff between the sparsity and recall of the predicted attention graph. This allows for detailed comparison between different models along their Pareto curves, important to guide future benchmarks for sparse attention models. | null | null | 10.18653/v1/2022.spnlp-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,620 |
inproceedings | tran-etal-2022-improving | Improving Discriminative Learning for Zero-Shot Relation Extraction | Das, Rajarshi and Lewis, Patrick and Min, Sewon and Thai, June and Zaheer, Manzil | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.spanlp-1.1/ | Tran, Van-Hien and Ouchi, Hiroki and Watanabe, Taro and Matsumoto, Yuji | Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge | 1--6 | Zero-shot relation extraction (ZSRE) aims to predict target relations that cannot be observed during training. While most previous studies have focused on fully supervised relation extraction and achieved considerably high performance, less effort has been made towards ZSRE. This study proposes a new model incorporating discriminative embedding learning for both sentences and semantic relations. In addition, a self-adaptive comparator network is used to judge whether the relationship between a sentence and a relation is consistent. Experimental results on two benchmark datasets showed that the proposed method significantly outperforms the state-of-the-art methods. | null | null | 10.18653/v1/2022.spanlp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,622 |
inproceedings | luo-etal-2022-choose | Choose Your {QA} Model Wisely: A Systematic Study of Generative and Extractive Readers for Question Answering | Das, Rajarshi and Lewis, Patrick and Min, Sewon and Thai, June and Zaheer, Manzil | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.spanlp-1.2/ | Luo, Man and Hashimoto, Kazuma and Yavuz, Semih and Liu, Zhiwei and Baral, Chitta and Zhou, Yingbo | Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge | 7--22 | While both extractive and generative readers have been successfully applied to the Question Answering (QA) task, little attention has been paid toward the systematic comparison of them. Characterizing the strengths and weaknesses of the two readers is crucial not only for making a more informed reader selection in practice but also for developing a deeper understanding to foster further research on improving readers in a principled manner. Motivated by this goal, we make the first attempt to systematically study the comparison of extractive and generative readers for question answering. To be aligned with the state-of-the-art, we explore nine transformer-based large pre-trained language models (PrLMs) as backbone architectures. Furthermore, we organize our findings under two main categories: (1) keeping the architecture invariant, and (2) varying the underlying PrLMs. Among several interesting findings, it is important to highlight that (1) the generative readers perform better in long context QA, (2) the extractive readers perform better in short context while also showing better out-of-domain generalization, and (3) the encoder of encoder-decoder PrLMs (e.g., T5) turns out to be a strong extractive reader and outperforms the standard choice of encoder-only PrLMs (e.g., RoBERTa). We also study the effect of multi-task learning on the two types of readers varying the underlying PrLMs and perform qualitative and quantitative diagnosis to provide further insights into future directions in modeling better readers. | null | null | 10.18653/v1/2022.spanlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,623 |
inproceedings | martins-etal-2022-efficient | Efficient Machine Translation Domain Adaptation | Das, Rajarshi and Lewis, Patrick and Min, Sewon and Thai, June and Zaheer, Manzil | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.spanlp-1.3/ | Martins, Pedro and Marinho, Zita and Martins, Andre | Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge | 23--29 | Machine translation models struggle when translating out-of-domain text, which makes domain adaptation a topic of critical importance. However, most domain adaptation methods focus on fine-tuning or training the entire or part of the model on every new domain, which can be costly. On the other hand, semi-parametric models have been shown to successfully perform domain adaptation by retrieving examples from an in-domain datastore (Khandelwal et al., 2021). A drawback of these retrieval-augmented models, however, is that they tend to be substantially slower. In this paper, we explore several approaches to speed up nearest neighbors machine translation. We adapt the methods recently proposed by He et al. (2021) for language modeling, and introduce a simple but effective caching strategy that avoids performing retrieval when similar contexts have been seen before. Translation quality and runtimes for several domains show the effectiveness of the proposed solutions. | null | null | 10.18653/v1/2022.spanlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,624 |
inproceedings | gao-etal-2022-field | Field Extraction from Forms with Unlabeled Data | Das, Rajarshi and Lewis, Patrick and Min, Sewon and Thai, June and Zaheer, Manzil | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.spanlp-1.4/ | Gao, Mingfei and Chen, Zeyuan and Naik, Nikhil and Hashimoto, Kazuma and Xiong, Caiming and Xu, Ran | Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge | 30--40 | We propose a novel framework to conduct field extraction from forms with unlabeled data. To bootstrap the training process, we develop a rule-based method for mining noisy pseudo-labels from unlabeled forms. Using the supervisory signal from the pseudo-labels, we extract a discriminative token representation from a transformer-based model by modeling the interaction between text in the form. To prevent the model from overfitting to label noise, we introduce a refinement module based on a progressive pseudo-label ensemble. Experimental results demonstrate the effectiveness of our framework. | null | null | 10.18653/v1/2022.spanlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,625 |
inproceedings | zouhar-etal-2022-knowledge | Knowledge Base Index Compression via Dimensionality and Precision Reduction | Das, Rajarshi and Lewis, Patrick and Min, Sewon and Thai, June and Zaheer, Manzil | may | 2022 | Dublin, Ireland and Online | Association for Computational Linguistics | https://aclanthology.org/2022.spanlp-1.5/ | Zouhar, Vil{\'e}m and Mosbach, Marius and Zhang, Miaoran and Klakow, Dietrich | Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge | 41--53 | Recently neural network based approaches to knowledge-intensive NLP tasks, such as question answering, started to rely heavily on the combination of neural retrievers and readers. Retrieval is typically performed over a large textual knowledge base (KB) which requires significant memory and compute resources, especially when scaled up. On HotpotQA we systematically investigate reducing the size of the KB index by means of dimensionality (sparse random projections, PCA, autoencoders) and numerical precision reduction. Our results show that PCA is an easy solution that requires very little data and is only slightly worse than autoencoders, which are less stable. All methods are sensitive to pre- and post-processing and data should always be centered and normalized both before and after dimension reduction. Finally, we show that it is possible to combine PCA with using 1bit per dimension. Overall we achieve (1) 100$\times$ compression with 75{\%}, and (2) 24$\times$ compression with 92{\%} original retrieval performance. | null | null | 10.18653/v1/2022.spanlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,626 |
inproceedings | yang-2022-mask | Mask and Regenerate: A Classifier-based Approach for Unpaired Sentiment Transformation of Reviews for Electronic Commerce Websites. | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.1/ | Yang, Shuo | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 1--10 | Style transfer is the task of transferring a sentence into the target style while keeping its content. The major challenge is that parallel corpora are not available for various domains. In this paper, we propose a Mask-And-Regenerate approach (MAR). It learns from unpaired sentences by modifying the word-level style attributes. We cautiously integrate the deletion, insertion and substitution operations into our model. This enables our model to automatically apply different edit operations for different sentences. Specifically, we train a multilayer perceptron (MLP) as a style classifier to find out and mask style-characteristic words in the source inputs. Then we learn a language model on non-parallel data sets to score sentences and remove unnecessary masks. Finally, the masked source sentences are input to a Transformer to perform style transfer. The final results show that our proposed model exceeds baselines by about 2 per cent of accuracy for both sentiment and style transfer tasks with comparable or better content retention. | null | null | 10.18653/v1/2022.socialnlp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,628 |
inproceedings | ruiter-etal-2022-exploiting | Exploiting Social Media Content for Self-Supervised Style Transfer | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.2/ | Ruiter, Dana and Kleinbauer, Thomas and Espa{\~n}a-Bonet, Cristina and van Genabith, Josef and Klakow, Dietrich | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 11--34 | Recent research on style transfer takes inspiration from unsupervised neural machine translation (UNMT), learning from large amounts of non-parallel data by exploiting cycle consistency loss, back-translation, and denoising autoencoders. By contrast, the use of self-supervised NMT (SSNMT), which leverages (near) parallel instances hidden in non-parallel data more efficiently than UNMT, has not yet been explored for style transfer. In this paper we present a novel Self-Supervised Style Transfer (3ST) model, which augments SSNMT with UNMT methods in order to identify and efficiently exploit supervisory signals in non-parallel social media posts. We compare 3ST with state-of-the-art (SOTA) style transfer models across civil rephrasing, formality and polarity tasks. We show that 3ST is able to balance the three major objectives (fluency, content preservation, attribute transfer accuracy) the best, outperforming SOTA models on averaged performance across their tested tasks in automatic and human evaluation. | null | null | 10.18653/v1/2022.socialnlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,629
inproceedings | kim-yoon-2022-detecting | Detecting Rumor Veracity with Only Textual Information by Double-Channel Structure | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.3/ | Kim, Alex Gunwoo and Yoon, Sangwon | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 35--44 | Kyle (1985) proposes two types of rumors: informed rumors which are based on some private information and uninformed rumors which are not based on any information (i.e. bluffing). Also, prior studies find that when people have credible source of information, they are likely to use a more confident textual tone in their spreading of rumors. Motivated by these theoretical findings, we propose a double-channel structure to determine the ex-ante veracity of rumors on social media. Our ultimate goal is to classify each rumor into true, false, or unverifiable category. We first assign each text into either certain (informed rumor) or uncertain (uninformed rumor) category. Then, we apply lie detection algorithm to informed rumors and thread-reply agreement detection algorithm to uninformed rumors. Using the dataset of SemEval 2019 Task 7, which requires ex-ante threefold classification (true, false, or unverifiable) of social media rumors, our model yields a macro-F1 score of 0.4027, outperforming all the baseline models and the second-place winner (Gorrell et al., 2019). Furthermore, we empirically validate that the double-channel structure outperforms single-channel structures which use either lie detection or agreement detection algorithm to all posts. | null | null | 10.18653/v1/2022.socialnlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,630 |
inproceedings | goel-sharma-2022-leveraging | Leveraging Dependency Grammar for Fine-Grained Offensive Language Detection using Graph Convolutional Networks | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.4/ | Goel, Divyam and Sharma, Raksha | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 45--54 | The last few years have witnessed an exponential rise in the propagation of offensive text on social media. Identification of this text with high precision is crucial for the well-being of society. Most of the existing approaches tend to give high toxicity scores to innocuous statements (e.g., {\textquotedblleft}I am a gay man{\textquotedblright}). These false positives result from over-generalization on the training data where specific terms in the statement may have been used in a pejorative sense (e.g., {\textquotedblleft}gay{\textquotedblright}). Emphasis on such words alone can lead to discrimination against the classes these systems are designed to protect. In this paper, we address the problem of offensive language detection on Twitter, while also detecting the type and the target of the offense. We propose a novel approach called SyLSTM, which integrates syntactic features in the form of the dependency parse tree of a sentence and semantic features in the form of word embeddings into a deep learning architecture using a Graph Convolutional Network. Results show that the proposed approach significantly outperforms the state-of-the-art BERT model with orders of magnitude fewer number of parameters. | null | null | 10.18653/v1/2022.socialnlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,631 |
inproceedings | elsafoury-etal-2022-comparative | A Comparative Study on Word Embeddings and Social {NLP} Tasks | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.5/ | Elsafoury, Fatma and Wilson, Steven R. and Ramzan, Naeem | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 55--64 | In recent years, gray social media platforms, those with a loose moderation policy on cyberbullying, have been attracting more users. Recently, data collected from these types of platforms have been used to pre-train word embeddings (social-media-based), yet these word embeddings have not been investigated for social NLP related tasks. In this paper, we carried out a comparative study between social-media-based and non-social-media-based word embeddings on two social NLP tasks: Detecting cyberbullying and Measuring social bias. Our results show that using social-media-based word embeddings as input features, rather than non-social-media-based embeddings, leads to better cyberbullying detection performance. We also show that some word embeddings are more useful than others for categorizing offensive words. However, we do not find strong evidence that certain word embeddings will necessarily work best when identifying certain categories of cyberbullying within our datasets. Finally, we show that even though most of the state-of-the-art bias metrics ranked social-media-based word embeddings as the most socially biased, these results remain inconclusive and further research is required. | null | null | 10.18653/v1/2022.socialnlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,632
inproceedings | rai-etal-2022-identifying | Identifying Human Needs through Social Media: A study on {I}ndian cities during {COVID}-19 | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.6/ | Rai, Sunny and Joseph, Rohan and Thakur, Prakruti Singh and Khaliq, Mohammed Abdul | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 65--74 | In this paper, we present a minimally-supervised approach to identify human needs expressed in tweets. Taking inspiration from Frustration-Aggression theory, we trained RoBERTa model to classify tweets expressing frustration which serves as an indicator of unmet needs. Although the notion of frustration is highly subjective and complex, the findings support the use of pretrained language model in identifying tweets with unmet needs. Our study reveals the major causes behind feeling frustrated during the lockdown and the second wave of the COVID-19 pandemic in India. Our proposed approach can be useful in timely identification and prioritization of emerging human needs in the event of a crisis. | null | null | 10.18653/v1/2022.socialnlp-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,633 |
inproceedings | upadhyay-etal-2022-towards | Towards Toxic Positivity Detection | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.7/ | Upadhyay, Ishan Sanjeev and Srivatsa, KV Aditya and Mamidi, Radhika | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 75--82 | Over the past few years, there has been a growing concern around toxic positivity on social media, which is a phenomenon where positivity is used to minimize one's emotional experience. In this paper, we create a dataset for toxic positivity classification from Twitter and an inspirational quote website. We then perform benchmarking experiments using various text classification models and show the suitability of these models for the task. We achieved a macro F1 score of 0.71 and a weighted F1 score of 0.85 by using an ensemble model. To the best of our knowledge, our dataset is the first such dataset created. | null | null | 10.18653/v1/2022.socialnlp-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,634
inproceedings | geiss-etal-2022-ok | {OK} Boomer: Probing the socio-demographic Divide in Echo Chambers | Ku, Lun-Wei and Li, Cheng-Te and Tsai, Yu-Che and Wang, Wei-Yao | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.socialnlp-1.8/ | Geiss, Henri-Jacques and Sakketou, Flora and Flek, Lucie | Proceedings of the Tenth International Workshop on Natural Language Processing for Social Media | 83--105 | Social media platforms such as Twitter or Reddit have become an integral part in political opinion formation and discussions, accompanied by potential echo chamber forming. In this paper, we examine the relationships between the interaction patterns, the opinion polarity, and the socio-demographic characteristics in discussion communities on Reddit. On a dataset of over 2 million posts coming from over 20k users, we combine network community detection algorithms, reliable stance polarity annotations, and NLP-based socio-demographic estimations, to identify echo chambers and understand their properties at scale. We show that the separability of the interaction communities is more strongly correlated to the relative socio-demographic divide, rather than the stance polarity gap size. We further demonstrate that the socio-demographic classifiers have a strong topical bias and should be used with caution, merely for the relative community difference comparisons within a topic, rather than for any absolute labeling. | null | null | 10.18653/v1/2022.socialnlp-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,635 |
inproceedings | avram-etal-2022-racai | {RACAI}@{SMM}4{H}`22: Tweets Disease Mention Detection Using a Neural Lateral Inhibitory Mechanism | Gonzalez-Hernandez, Graciela and Weissenbacher, Davy | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.smm4h-1.1/ | Avram, Andrei-Marius and Pais, Vasile and Mitrofan, Maria | Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop {\&} Shared Task | 1--3 | This paper presents our system employed for the Social Media Mining for Health (SMM4H) 2022 competition Task 10 - SocialDisNER. The goal of the task was to improve the detection of diseases in tweets. Because the tweets were in Spanish, we approached this problem using a system that relies on a pre-trained multilingual model and is fine-tuned using the recently introduced lateral inhibition layer. We further experimented on this task by employing a conditional random field on top of the system and using a voting-based ensemble that contains various architectures. The evaluation results outlined that our best performing model obtained 83.7{\%} F1-strict on the validation set and 82.1{\%} F1-strict on the test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,637 |
inproceedings | liu-etal-2022-pingantech | {P}ing{A}n{T}ech at {SMM}4{H} task1: Multiple pre-trained model approaches for Adverse Drug Reactions | Gonzalez-Hernandez, Graciela and Weissenbacher, Davy | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.smm4h-1.2/ | Liu, Xi and Zhou, Han and Su, Chang | Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop {\&} Shared Task | 4--6 | This paper describes the solution for the Social Media Mining for Health (SMM4H) 2022 Shared Task. We participated in Task1a., Task1b. and Task1c. To solve the problem of the presence of Twitter data, we used a pre-trained language model. We used training strategies that involved: adversarial training, head layer weighted fusion, etc., to improve the performance of the model. The experimental results show the effectiveness of our designed system. For task 1a, the system achieved an F1 score of 0.68; for task 1b Overlapping F1 score of 0.65 and a Strict F1 score of 0.49. Task 1c yields Overlapping F1 and Strict F1 scores of 0.36 and 0.30, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,638 |
inproceedings | ortega-martin-etal-2022-dezzai | dezzai@{SMM}4{H}`22: Tasks 5 {\&} 10 - Hybrid models everywhere | Gonzalez-Hernandez, Graciela and Weissenbacher, Davy | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.smm4h-1.3/ | Ortega-Mart{\'i}n, Miguel and Ardoiz, Alfonso and Garcia, Oscar and {\'A}lvarez, Jorge and Alonso, Adri{\'a}n | Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop {\&} Shared Task | 7--10 | This paper presents our approaches to SMM4H`22 task 5 - Classification of tweets of self-reported COVID-19 symptoms in Spanish, and task 10 - Detection of disease mentions in tweets {--} SocialDisNER (in Spanish). We have presented hybrid systems that combine Deep Learning techniques with linguistic rules and medical ontologies, which have allowed us to achieve outstanding results in both tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,639 |
inproceedings | huang-etal-2022-zydhjh4593 | zydhjh4593@{SMM}4{H}`22: A Generic Pre-trained {BERT}-based Framework for Social Media Health Text Classification | Gonzalez-Hernandez, Graciela and Weissenbacher, Davy | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.smm4h-1.4/ | Huang, Chenghao and Chen, Xiaolu and Chen, Yuxi and Wu, Yutong and Yuan, Weimin and Wang, Yan and Zhang, Yanru | Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop {\&} Shared Task | 11--15 | This paper describes our proposed framework for the 10 text classification tasks of Task 1a, 2a, 2b, 3a, 4, 5, 6, 7, 8, and 9, in the Social Media Mining for Health (SMM4H) 2022. According to the pre-trained BERT-based models, various techniques, including regularized dropout, focal loss, exponential moving average, 5-fold cross-validation, ensemble prediction, and pseudo-labeling, are applied for further formulating and improving the generalization performance of our framework. In the evaluation, the proposed framework achieves the 1st place in Task 3a with a 7{\%} higher F1-score than the median, and obtains a 4{\%} higher averaged F1-score than the median in all participating tasks except Task 1a. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,640 |
inproceedings | zanwar-etal-2022-mantis | {MANTIS} at {SMM}4{H}`2022: Pre-Trained Language Models Meet a Suite of Psycholinguistic Features for the Detection of Self-Reported Chronic Stress | Gonzalez-Hernandez, Graciela and Weissenbacher, Davy | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.smm4h-1.5/ | Zanwar, Sourabh and Wiechmann, Daniel and Qiao, Yu and Kerz, Elma | Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop {\&} Shared Task | 16--18 | This paper describes our submission to Social Media Mining for Health (SMM4H) 2022 Shared Task 8, aimed at detecting self-reported chronic stress on Twitter. Our approach leverages a pre-trained transformer model (RoBERTa) in combination with a Bidirectional Long Short-Term Memory (BiLSTM) network trained on a diverse set of psycholinguistic features. We handle the class imbalance issue in the training dataset by augmenting it by another dataset used for stress classification in social media. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,641 |
inproceedings | tamayo-etal-2022-nlp | {NLP}-{CIC}-{WFU} at {S}ocial{D}is{NER}: Disease Mention Extraction in {S}panish Tweets Using Transfer Learning and Search by Propagation | Gonzalez-Hernandez, Graciela and Weissenbacher, Davy | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.smm4h-1.6/ | Tamayo, Antonio and Gelbukh, Alexander and Burgos, Diego | Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop {\&} Shared Task | 19--22 | Named entity recognition (e.g., disease mention extraction) is one of the most relevant tasks for data mining in the medical field. Although it is a well-known challenge, the bulk of the efforts to tackle this task have been made using clinical texts commonly written in English. In this work, we present our contribution to the SocialDisNER competition, which consists of a transfer learning approach to extracting disease mentions in a corpus from Twitter written in Spanish. We fine-tuned a model based on mBERT and applied post-processing using regular expressions to propagate the entities identified by the model and enhance disease mention extraction. Our system achieved a competitive strict F1 of 0.851 on the testing data set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 22,642 |