entry_type (stringclasses, 4 values) | citation_key (stringlengths 10-110) | title (stringlengths 6-276, nullable) | editor (stringclasses, 723 values) | month (stringclasses, 69 values) | year (stringdate, 1963-01-01 to 2022-01-01) | address (stringclasses, 202 values) | publisher (stringclasses, 41 values) | url (stringlengths 34-62) | author (stringlengths 6-2.07k, nullable) | booktitle (stringclasses, 861 values) | pages (stringlengths 1-12, nullable) | abstract (stringlengths 302-2.4k) | journal (stringclasses, 5 values) | volume (stringclasses, 24 values) | doi (stringlengths 20-39, nullable) | n (stringclasses, 3 values) | wer (stringclasses, 1 value) | uas (null) | language (stringclasses, 3 values) | isbn (stringclasses, 34 values) | recall (null) | number (stringclasses, 8 values) | a (null) | b (null) | c (null) | k (null) | f1 (stringclasses, 4 values) | r (stringclasses, 2 values) | mci (stringclasses, 1 value) | p (stringclasses, 2 values) | sd (stringclasses, 1 value) | female (stringclasses, 0 values) | m (stringclasses, 0 values) | food (stringclasses, 1 value) | f (stringclasses, 1 value) | note (stringclasses, 20 values) | __index_level_0__ (int64, 22k-106k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | segarra-soriano-etal-2022-dacsa | {DACSA}: A large-scale Dataset for Automatic summarization of {C}atalan and {S}panish newspaper Articles | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.434/ | Segarra Soriano, Encarnaci{\'o}n and Ahuir, Vicent and Hurtado, Llu{\'i}s-F. and Gonz{\'a}lez, Jos{\'e} | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 5931--5943 | The application of supervised methods to automatic summarization requires the availability of adequate corpora consisting of a set of document-summary pairs. As in most Natural Language Processing tasks, the great majority of available datasets for summarization are in English, making it difficult to develop automatic summarization models for other languages. Although Spanish is gradually forming part of some recent summarization corpora, it is not the same for minority languages such as Catalan. In this work, we describe the construction of a corpus of Catalan and Spanish newspapers, the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA) corpus. It is a high-quality large-scale corpus that can be used to train summarization models for Catalan and Spanish. We have carried out an analysis of the corpus, both in terms of the style of the summaries and the difficulty of the summarization task. In particular, we have used a set of well-known metrics in the summarization field in order to characterize the corpus. Additionally, for benchmarking purposes, we have evaluated the performances of some extractive and abstractive summarization systems on the DACSA corpus. | null | null | 10.18653/v1/2022.naacl-main.434 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,133 |
inproceedings | luu-etal-2022-time | Time Waits for No One! Analysis and Challenges of Temporal Misalignment | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.435/ | Luu, Kelvin and Khashabi, Daniel and Gururangan, Suchin and Mandyam, Karishma and Smith, Noah A. | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 5944--5958 | When an NLP model is trained on text data from one time period and tested or deployed on data from another, the resulting temporal misalignment can degrade end-task performance. In this work, we establish a suite of eight diverse tasks across different domains (social media, science papers, news, and reviews) and periods of time (spanning five years or more) to quantify the effects of temporal misalignment. Our study is focused on the ubiquitous setting where a pretrained model is optionally adapted through continued domain-specific pretraining, followed by task-specific finetuning. We find stronger effects of temporal misalignment on task performance than have been previously reported. We also find that, while temporal adaptation through continued pretraining can help, these gains are small compared to task-specific finetuning on data from the target time period. Our findings motivate continued research to improve temporal robustness of NLP models. | null | null | 10.18653/v1/2022.naacl-main.435 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,134 |
inproceedings | zhang-etal-2022-mcse | {MCSE}: {M}ultimodal Contrastive Learning of Sentence Embeddings | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.436/ | Zhang, Miaoran and Mosbach, Marius and Adelani, David Ifeoluwa and Hedderich, Michael A. and Klakow, Dietrich | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 5959--5969 | Learning semantically meaningful sentence embeddings is an open problem in natural language processing. In this work, we propose a sentence embedding learning approach that exploits both visual and textual information via a multimodal contrastive objective. Through experiments on a variety of semantic textual similarity tasks, we demonstrate that our approach consistently improves the performance across various datasets and pre-trained encoders. In particular, combining a small amount of multimodal data with a large text-only corpus, we improve the state-of-the-art average Spearman's correlation by 1.7{\%}. By analyzing the properties of the textual embedding space, we show that our model excels in aligning semantically similar sentences, providing an explanation for its improved performance. | null | null | 10.18653/v1/2022.naacl-main.436 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,135 |
inproceedings | liu-etal-2022-hiure | {H}i{URE}: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.437/ | Liu, Shuliang and Hu, Xuming and Zhang, Chenwei and Li, Shu{'}ang and Wen, Lijie and Yu, Philip | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 5970--5980 | Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution. Existing works either utilize self-supervised schemes to refine relational feature signals by iteratively leveraging adaptive clustering and classification that provoke gradual drift problems, or adopt instance-wise contrastive learning which unreasonably pushes apart those sentence pairs that are semantically similar. To overcome these defects, we propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention and effectively optimize relation representation of sentences under exemplar-wise contrastive learning. Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models. | null | null | 10.18653/v1/2022.naacl-main.437 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,136 |
inproceedings | zhu-etal-2022-diagnosing | Diagnosing Vision-and-Language Navigation: What Really Matters | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.438/ | Zhu, Wanrong and Qi, Yuankai and Narayana, Pradyumna and Sone, Kazoo and Basu, Sugato and Wang, Xin and Wu, Qi and Eckstein, Miguel and Wang, William Yang | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 5981--5993 | Vision-and-language navigation (VLN) is a multimodal task where an agent follows natural language instructions and navigates in visual environments. Multiple setups have been proposed, and researchers apply new model architectures or training techniques to boost navigation performance. However, there still exist non-negligible gaps between machines' performance and human benchmarks. Moreover, the agents' inner mechanisms for navigation decisions remain unclear. To the best of our knowledge, how the agents perceive the multimodal input is under-studied and needs investigation. In this work, we conduct a series of diagnostic experiments to unveil agents' focus during navigation. Results show that indoor navigation agents refer to both object and direction tokens when making decisions. In contrast, outdoor navigation agents heavily rely on direction tokens and poorly understand the object tokens. Transformer-based agents acquire a better cross-modal understanding of objects and display stronger numerical reasoning ability than non-Transformer-based agents. When it comes to vision-and-language alignments, many models claim that they can align object tokens with specific visual targets. We find unbalanced attention on the vision and text input and doubt the reliability of such cross-modal alignments. | null | null | 10.18653/v1/2022.naacl-main.438 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,137 |
inproceedings | ammanabrolu-etal-2022-aligning | Aligning to Social Norms and Values in Interactive Narratives | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.439/ | Ammanabrolu, Prithviraj and Jiang, Liwei and Sap, Maarten and Hajishirzi, Hannaneh and Choi, Yejin | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 5994--6017 | We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games{---}environments wherein an agent perceives and interacts with a world through natural language. Such interactive agents are often trained via reinforcement learning to optimize task performance, even when such rewards may lead to agent behaviors that violate societal norms{---}causing harm either to the agent itself or other entities in the environment. Social value alignment refers to creating agents whose behaviors conform to expected moral and social norms for a given context and group of people{---}in our case, it means agents that behave in a manner that is less harmful and more beneficial for themselves and others. We build on the Jiminy Cricket benchmark (Hendrycks et al. 2021), a set of 25 annotated interactive narratives containing thousands of morally salient scenarios covering everything from theft and bodily harm to altruism. We introduce the GALAD (Game-value ALignment through Action Distillation) agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values. An experimental study shows that the GALAD agent makes decisions efficiently enough to improve state-of-the-art task performance by 4{\%} while reducing the frequency of socially harmful behaviors by 25{\%} compared to strong contemporary value alignment approaches. | null | null | 10.18653/v1/2022.naacl-main.439 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,138 |
inproceedings | zhang-wan-2022-mover | {MOVER}: Mask, Over-generate and Rank for Hyperbole Generation | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.440/ | Zhang, Yunxiang and Wan, Xiaojun | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 6018--6030 | Despite being a common figure of speech, hyperbole is under-researched in Figurative Language Processing. In this paper, we tackle the challenging task of hyperbole generation to transfer a literal sentence into its hyperbolic paraphrase. To address the lack of available hyperbolic sentences, we construct HYPO-XL, the first large-scale English hyperbole corpus containing 17,862 hyperbolic sentences in a non-trivial way. Based on our corpus, we propose an unsupervised method for hyperbole generation that does not require parallel literal-hyperbole pairs. During training, we fine-tune BART to infill masked hyperbolic spans of sentences from HYPO-XL. During inference, we mask part of an input literal sentence and over-generate multiple possible hyperbolic versions. Then a BERT-based ranker selects the best candidate by hyperbolicity and paraphrase quality. Automatic and human evaluation results show that our model is effective at generating hyperbolic paraphrase sentences and outperforms several baseline systems. | null | null | 10.18653/v1/2022.naacl-main.440 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,139 |
inproceedings | kadikis-etal-2022-embarrassingly | Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.441/ | Kadi{\c{k}}is, Em{\={i}}ls and Srivastav, Vaibhav and Klinger, Roman | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 6031--6037 | The task of natural language inference (NLI), to decide if a hypothesis entails or contradicts a premise, received considerable attention in recent years. All competitive systems build on top of contextualized representations and make use of transformer architectures for learning an NLI model. When somebody is faced with a particular NLI task, they need to select the best model that is available. This is a time-consuming and resource-intense endeavour. To solve this practical problem, we propose a simple method for predicting the performance without actually fine-tuning the model. We do this by testing how well the pre-trained models perform on the aNLI task when just comparing sentence embeddings with cosine similarity to what kind of performance is achieved when training a classifier on top of these embeddings. We show that the accuracy of the cosine similarity approach correlates strongly with the accuracy of the classification approach with a Pearson correlation coefficient of 0.65. Since the similarity is orders of magnitude faster to compute on a given dataset (less than a minute vs. hours), our method can lead to significant time savings in the process of model selection. | null | null | 10.18653/v1/2022.naacl-main.441 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,140 |
inproceedings | deutsch-etal-2022-examining | Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics | Carpuat, Marine and de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-main.442/ | Deutsch, Daniel and Dror, Rotem and Roth, Dan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | 6038--6052 | How reliably an automatic summarization evaluation metric replicates human judgments of summary quality is quantified by system-level correlations. We identify two ways in which the definition of the system-level correlation is inconsistent with how metrics are used to evaluate systems in practice and propose changes to rectify this disconnect. First, we calculate the system score for an automatic metric using the full test set instead of the subset of summaries judged by humans, which is currently standard practice. We demonstrate how this small change leads to more precise estimates of system-level correlations. Second, we propose to calculate correlations only on pairs of systems that are separated by small differences in automatic scores which are commonly observed in practice. This allows us to demonstrate that our best estimate of the correlation of ROUGE to human judgments is near 0 in realistic scenarios. The results from the analyses point to the need to collect more high-quality human judgments and to improve automatic metrics when differences in system scores are small. | null | null | 10.18653/v1/2022.naacl-main.442 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,141 |
inproceedings | chakravarthy-etal-2022-systematicity | Systematicity Emerges in Transformers when Abstract Grammatical Roles Guide Attention | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.1/ | Chakravarthy, Ayush K and Russin, Jacob Labe and O{'}Reilly, Randall | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 1--8 | Systematicity is thought to be a key inductive bias possessed by humans that is lacking in standard natural language processing systems such as those utilizing transformers. In this work, we investigate the extent to which the failure of transformers on systematic generalization tests can be attributed to a lack of linguistic abstraction in its attention mechanism. We develop a novel modification to the transformer by implementing two separate input streams: a role stream controls the attention distributions (i.e., queries and keys) at each layer, and a filler stream determines the values. Our results show that when abstract role labels are assigned to input sequences and provided to the role stream, systematic generalization is improved. | null | null | 10.18653/v1/2022.naacl-srw.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,143 |
inproceedings | choudhary-kawahara-2022-grounding | Grounding in social media: An approach to building a chit-chat dialogue model | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.2/ | Choudhary, Ritvik and Kawahara, Daisuke | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 9--15 | Building open-domain dialogue systems capable of rich human-like conversational ability is one of the fundamental challenges in language generation. However, even with recent advancements in the field, existing open-domain generative models fail to capture and utilize external knowledge, leading to repetitive or generic responses to unseen utterances. Current work on knowledge-grounded dialogue generation primarily focuses on persona incorporation or searching a fact-based structured knowledge source such as Wikipedia. Our method takes a broader and simpler approach, which aims to improve the raw conversation ability of the system by mimicking the human response behavior through casual interactions found on social media. Utilizing a joint retriever-generator setup, the model queries a large set of filtered comment data from Reddit to act as additional context for the seq2seq generator. Automatic and human evaluations on open-domain dialogue datasets demonstrate the effectiveness of our approach. | null | null | 10.18653/v1/2022.naacl-srw.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,144 |
inproceedings | loem-etal-2022-extraphrase | {E}xtra{P}hrase: Efficient Data Augmentation for Abstractive Summarization | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.3/ | Loem, Mengsay and Takase, Sho and Kaneko, Masahiro and Okazaki, Naoaki | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 16--24 | Neural models trained with large amounts of parallel data have achieved impressive performance in abstractive summarization tasks. However, large-scale parallel corpora are expensive and challenging to construct. In this work, we introduce a low-cost and effective strategy, ExtraPhrase, to augment training data for abstractive summarization tasks. ExtraPhrase constructs pseudo training data in two steps: extractive summarization and paraphrasing. We extract major parts of an input text in the extractive summarization step and obtain its diverse expressions with the paraphrasing step. Through experiments, we show that ExtraPhrase improves the performance of abstractive summarization tasks by more than 0.50 points in ROUGE scores compared to the setting without data augmentation. ExtraPhrase also outperforms existing methods such as back-translation and self-training. We also show that ExtraPhrase is significantly effective when the amount of genuine training data is remarkably small, i.e., a low-resource setting. Moreover, ExtraPhrase is more cost-efficient than the existing approaches. | null | null | 10.18653/v1/2022.naacl-srw.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,145 |
inproceedings | ton-etal-2022-regularized | Regularized Training of Nearest Neighbor Language Models | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.4/ | Ton, Jean-Francois and Talbott, Walter and Zhai, Shuangfei and Susskind, Joshua M. | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 25--30 | Including memory banks in a natural language processing architecture increases model capacity by equipping it with additional data at inference time. In this paper, we build upon $k$NN-LM (CITATION), which uses a pre-trained language model together with an exhaustive $k$NN search through the training data (memory bank) to achieve state-of-the-art results. We investigate whether we can improve the $k$NN-LM performance by instead training a LM with the knowledge that we will be using a $k$NN post-hoc. We achieved significant improvement using our method on language modeling tasks on WIKI-2 and WIKI-103. The main phenomenon that we encounter is that adding a simple L2 regularization on the activations (not weights) of the model, a transformer, improves the post-hoc $k$NN classification performance. We explore some possible reasons for this improvement. In particular, we find that the added L2 regularization seems to improve the performance for high-frequency words without deteriorating the performance for low frequency ones. | null | null | 10.18653/v1/2022.naacl-srw.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,146 |
inproceedings | yu-2022-dozens | {\textquotedblleft}Again, Dozens of Refugees Drowned{\textquotedblright}: A Computational Study of Political Framing Evoked by Presuppositions | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.5/ | Yu, Qi | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 31--43 | Earlier NLP studies on framing in political discourse have focused heavily on shallow classification of issue framing, while framing effect arising from pragmatic cues remains neglected. We put forward this latter type of framing as {\textquotedblleft}pragmatic framing{\textquotedblright}. To bridge this gap, we take presupposition-triggering adverbs such as {\textquoteleft}again' as a study case, and quantitatively investigate how different German newspapers use them to covertly evoke different attitudinal subtexts in their report on the event {\textquotedblleft}European Refugee Crisis{\textquotedblright} (2014-2018). Our study demonstrates the crucial role of presuppositions in framing, and emphasizes the necessity of more attention on pragmatic framing in the research of automated framing detection. | null | null | 10.18653/v1/2022.naacl-srw.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,147 |
inproceedings | stefanik-2022-methods | Methods for Estimating and Improving Robustness of Language Models | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.6/ | Stefanik, Michal | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 44--51 | Despite their outstanding performance, large language models (LLMs) suffer notorious flaws related to their preference for shallow textual relations over full semantic complexity of the problem. This proposal investigates a common denominator of this problem in their weak ability to generalise outside of the training domain. We survey diverse research directions providing estimations of model generalisation ability and find that incorporating some of these measures in the training objectives leads to enhanced distributional robustness of neural models. Based on these findings, we present future research directions enhancing the robustness of LLMs. | null | null | 10.18653/v1/2022.naacl-srw.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,148 |
inproceedings | yu-2022-retrieval | Retrieval-augmented Generation across Heterogeneous Knowledge | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.7/ | Yu, Wenhao | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 52--58 | Retrieval-augmented generation (RAG) methods have been receiving increasing attention from the NLP community and achieved state-of-the-art performance on many NLP downstream tasks. Compared with conventional pre-trained generation models, RAG methods have remarkable advantages such as easy knowledge acquisition, strong scalability, and low training cost. Although existing RAG models have been applied to various knowledge-intensive NLP tasks, such as open-domain QA and dialogue systems, most of the work has focused on retrieving unstructured text documents from Wikipedia. In this paper, I first elaborate on the current obstacles to retrieving knowledge from a single-source homogeneous corpus. Then, I demonstrate evidence from both existing literature and my experiments, and provide multiple solutions on retrieval-augmented generation methods across heterogeneous knowledge. | null | null | 10.18653/v1/2022.naacl-srw.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,149 |
inproceedings | luo-2022-neural | Neural Retriever and Go Beyond: A Thesis Proposal | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.8/ | Luo, Man | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 59--67 | Information Retriever (IR) aims to find the relevant documents (e.g. snippets, passages, and articles) to a given query at large scale. IR plays an important role in many tasks such as open domain question answering and dialogue systems, where external knowledge is needed. In the past, searching algorithms based on term matching have been widely used. Recently, neural-based algorithms (termed as neural retrievers) have gained more attention which can mitigate the limitations of traditional methods. Regardless of the success achieved by neural retrievers, they still face many challenges, e.g. suffering from a small amount of training data and failing to answer simple entity-centric questions. Furthermore, most of the existing neural retrievers are developed for pure-text query. This prevents them from handling multi-modality queries (i.e. the query is composed of textual description and images). This proposal has two goals. First, we introduce methods to address the abovementioned issues of neural retrievers from three angles, new model architectures, IR-oriented pretraining tasks, and generating large scale training data. Second, we identify the future research direction and propose potential corresponding solution. | null | null | 10.18653/v1/2022.naacl-srw.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,150 |
inproceedings | ding-etal-2022-improving | Improving Classification of Infrequent Cognitive Distortions: Domain-Specific Model vs. Data Augmentation | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.9/ | Ding, Xiruo and Lybarger, Kevin and Tauscher, Justin and Cohen, Trevor | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 68--75 | Cognitive distortions are counterproductive patterns of thinking that are one of the targets of cognitive behavioral therapy (CBT). These can be challenging for clinicians to detect, especially those without extensive CBT training or supervision. Text classification methods can approximate expert clinician judgment in the detection of frequently occurring cognitive distortions in text-based therapy messages. However, performance with infrequent distortions is relatively poor. In this study, we address this sparsity problem with two approaches: Data Augmentation and Domain-Specific Model. The first approach includes Easy Data Augmentation, back translation, and mixup techniques. The second approach utilizes a domain-specific pretrained language model, MentalBERT. To examine the viability of different data augmentation methods, we utilized a real-world dataset of texts between therapists and clients diagnosed with serious mental illness that was annotated for distorted thinking. We found that with optimized parameter settings, mixup was helpful for rare classes. Performance improvements with an augmented model, MentalBERT, exceed those obtained with data augmentation. | null | null | 10.18653/v1/2022.naacl-srw.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,151 |
inproceedings | sakaeda-kawahara-2022-generate | Generate, Evaluate, and Select: A Dialogue System with a Response Evaluator for Diversity-Aware Response Generation | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.10/ | Sakaeda, Ryoma and Kawahara, Daisuke | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 76--82 | We aim to overcome the lack of diversity in responses of current dialogue systems and to develop a dialogue system that is engaging as a conversational partner. We propose a generator-evaluator model that evaluates multiple responses generated by a response generator and selects the best response by an evaluator. By generating multiple responses, we obtain diverse responses. We conduct human evaluations to compare the output of the proposed system with that of a baseline system. The results of the human evaluations showed that the proposed system's responses were often judged to be better than the baseline system's, and indicated the effectiveness of the proposed method. | null | null | 10.18653/v1/2022.naacl-srw.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,152 |
inproceedings | salhofer-etal-2022-impact | Impact of Training Instance Selection on Domain-Specific Entity Extraction using {BERT} | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.11/ | Salhofer, Eileen and Liu, Xing Lan and Kern, Roman | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 83--88 | State of the art performances for entity extraction tasks are achieved by supervised learning, specifically, by fine-tuning pretrained language models such as BERT. As a result, annotating application specific data is the first step in many use cases. However, no practical guidelines are available for annotation requirements. This work supports practitioners by empirically answering the frequently asked questions (1) how many training samples to annotate? (2) which examples to annotate? We found that BERT achieves up to 80{\%} F1 when fine-tuned on only 70 training examples, especially on biomedical domain. The key features for guiding the selection of high performing training instances are identified to be pseudo-perplexity and sentence-length. The best training dataset constructed using our proposed selection strategy shows F1 score that is equivalent to a random selection with twice the sample size. The requirement of only a small number of training data implies cheaper implementations and opens door to wider range of applications. | null | null | 10.18653/v1/2022.naacl-srw.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,153 |
inproceedings | hatami-etal-2022-analysing | Analysing the Correlation between Lexical Ambiguity and Translation Quality in a Multimodal Setting using {W}ord{N}et | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.12/ | Hatami, Ali and Buitelaar, Paul and Arcan, Mihael | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 89--95 | Multimodal Neural Machine Translation is focusing on using visual information to translate sentences in the source language into the target language. The main idea is to utilise information from visual modalities to promote the output quality of the text-based translation model. Although the recent multimodal strategies extract the most relevant visual information in images, the effectiveness of using visual information on translation quality changes based on the text dataset. Due to this, this work studies the impact of leveraging visual information in multimodal translation models of ambiguous sentences. Our experiments analyse the Multi30k evaluation dataset and calculate ambiguity scores of sentences based on the WordNet hierarchical structure. To calculate the ambiguity of a sentence, we extract the ambiguity scores for all nouns based on the number of senses in WordNet. The main goal is to find in which sentences, visual content can improve the text-based translation model. We report the correlation between the ambiguity scores and translation quality extracted for all sentences in the English-German dataset. | null | null | 10.18653/v1/2022.naacl-srw.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,154 |
inproceedings | kasahara-etal-2022-building | Building a Personalized Dialogue System with Prompt-Tuning | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.13/ | Kasahara, Tomohito and Kawahara, Daisuke and Tung, Nguyen and Li, Shengzhe and Shinzato, Kenta and Sato, Toshinori | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 96--105 | Dialogue systems without consistent responses are not attractive. In this study, we build a dialogue system that can respond based on a given character setting (persona) to bring consistency. Considering the trend of the rapidly increasing scale of language models, we propose an approach that uses prompt-tuning, which has low learning costs, on pre-trained large-scale language models. The results of the automatic and manual evaluations in English and Japanese show that it is possible to build a dialogue system with more natural and personalized responses with less computational resources than fine-tuning. | null | null | 10.18653/v1/2022.naacl-srw.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,155 |
inproceedings | seo-etal-2022-mm | {MM}-{GATBT}: Enriching Multimodal Representation Using Graph Attention Network | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.14/ | Seo, Seung Byum and Nam, Hyoungwook and Delgosha, Payam | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 106--112 | While there have been advances in Natural Language Processing (NLP), their success is mainly gained by applying a self-attention mechanism into single or multi-modalities. While this approach has brought significant improvements in multiple downstream tasks, it fails to capture the interaction between different entities. Therefore, we propose MM-GATBT, a multimodal graph representation learning model that captures not only the relational semantics within one modality but also the interactions between different modalities. Specifically, the proposed method constructs image-based node embedding which contains relational semantics of entities. Our empirical results show that MM-GATBT achieves state-of-the-art results among all published papers on the MM-IMDb dataset. | null | null | 10.18653/v1/2022.naacl-srw.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,156 |
inproceedings | richard-2022-simulating | Simulating Feature Structures with Simple Types | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.15/ | Richard, Valentin D. | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 113--122 | Feature structures have been several times considered to enrich categorial grammars in order to build fine-grained grammars. Most attempts to unify both frameworks either model categorial types as feature structures or add feature structures on top of categorial types. We pursue a different approach: using feature structure as categorial atomic types. In this article, we present a procedure to create, from a simplified HPSG grammar, an equivalent abstract categorial grammar (ACG). We represent a feature structure by the enumeration of its totally well-typed upper bounds, so that unification can be simulated as intersection. We implement this idea as a meta-ACG preprocessor. | null | null | 10.18653/v1/2022.naacl-srw.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,157 |
inproceedings | konovalova-etal-2022-dr | Dr. Livingstone, {I} presume? Polishing of foreign character identification in literary texts | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.16/ | Konovalova, Aleksandra and Toral, Antonio and Taivalkoski-Shilov, Kristiina | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 123--128 | Character identification is a key element for many narrative-related tasks. To implement it, the baseform of the name of the character (or lemma) needs to be identified, so different appearances of the same character in the narrative could be aligned. In this paper we tackle this problem in translated texts (English{--}Finnish translation direction), where the challenge regarding lemmatizing foreign names in an agglutinative language appears. To solve this problem, we present and compare several methods. The results show that the method based on a search for the shortest version of the name proves to be the easiest, best performing (83.4{\%} F1), and most resource-independent. | null | null | 10.18653/v1/2022.naacl-srw.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,158 |
inproceedings | pan-etal-2022-zuo | Zuo Zhuan {A}ncient {C}hinese Dataset for Word Sense Disambiguation | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.17/ | Pan, Xiaomeng and Wang, Hongfei and Oka, Teruaki and Komachi, Mamoru | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 129--135 | Word Sense Disambiguation (WSD) is a core task in Natural Language Processing (NLP). However, ancient Chinese has rarely been used in WSD tasks, as no public dataset for ancient Chinese WSD tasks exists. Creation of an ancient Chinese dataset is considered a significant challenge because determining the most appropriate sense in a context is difficult and time-consuming owing to the different usages in ancient and modern Chinese. To solve the problem of ancient Chinese WSD, we annotate part of the Pre-Qin (221 BC) text \textit{Zuo Zhuan} using a copyright-free dictionary to create a public sense-tagged dataset. Then, we apply a simple Nearest Neighbors (k-NN) method using a pre-trained language model to the dataset. Our code and dataset will be available on GitHub. | null | null | 10.18653/v1/2022.naacl-srw.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,159 |
inproceedings | phan-etal-2022-vit5 | {V}i{T}5: Pretrained Text-to-Text Transformer for {V}ietnamese Language Generation | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.18/ | Phan, Long and Tran, Hieu and Nguyen, Hieu and Trinh, Trieu H. | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 136--142 | We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained on a large corpus of high-quality and diverse Vietnamese texts. We benchmark ViT5 on two downstream text generation tasks, Abstractive Text Summarization and Named Entity Recognition. Although Abstractive Text Summarization has been widely studied for the English language thanks to its rich and large source of data, there has been minimal research into the same task in Vietnamese, a much lower resource language. In this work, we perform exhaustive experiments on both Vietnamese Abstractive Summarization and Named Entity Recognition, validating the performance of ViT5 against many other pretrained Transformer-based encoder-decoder models. Our experiments show that ViT5 significantly outperforms existing models and achieves state-of-the-art results on Vietnamese Text Summarization. On the task of Named Entity Recognition, ViT5 is competitive against previous best results from pretrained encoder-based Transformer models. Further analysis shows the importance of context length during the self-supervised pretraining on downstream performance across different settings. | null | null | 10.18653/v1/2022.naacl-srw.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,160 |
inproceedings | spilsbury-ilin-2022-compositional | Compositional Generalization in Grounded Language Learning via Induced Model Sparsity | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.19/ | Spilsbury, Sam and Ilin, Alexander | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 143--155 | We provide a study of how induced model sparsity can help achieve compositional generalization and better sample efficiency in grounded language learning problems. We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations. We show that standard neural architectures do not always yield compositional generalization. To address this, we design an agent that contains a goal identification module that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal. The output of the goal identification module is the input to a value iteration network planner. Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations. We examine the internal representations of our agent and find the correct correspondences between words in its dictionary and attributes in the environment. | null | null | 10.18653/v1/2022.naacl-srw.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,161 |
inproceedings | chen-etal-2022-people | How do people talk about images? A study on open-domain conversations with images. | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.20/ | Chen, Yi-Pei and Shimizu, Nobuyuki and Miyazaki, Takashi and Nakayama, Hideki | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 156--162 | This paper explores how humans conduct conversations with images by investigating an open-domain image conversation dataset, ImageChat. We examined the conversations with images from the perspectives of $\textit{image relevancy}$ and $\textit{image information}$. We found that utterances/conversations are not always related to the given image, and conversation topics diverge within three turns about half of the time. Besides image objects, more comprehensive non-object image information is also indispensable. After inspecting the causes, we suggested that understanding the overall scenario of image and connecting objects based on their high-level attributes might be very helpful to generate more engaging open-domain conversations when an image is presented. We proposed enriching the image information with image caption and object tags based on our analysis. With our proposed $\textit{image}^{+}$ features, we improved automatic metrics including BLEU and Bert Score, and increased the diversity and image-relevancy of generated responses to the strong baseline. The result verifies that our analysis provides valuable insights and could facilitate future research on open-domain conversations with images. | null | null | 10.18653/v1/2022.naacl-srw.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,162 |
inproceedings | tokpo-calders-2022-text | Text Style Transfer for Bias Mitigation using Masked Language Modeling | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.21/ | Tokpo, Ewoenam Kwaku and Calders, Toon | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 163--171 | It is well known that textual data on the internet and other digital platforms contain significant levels of bias and stereotypes. Various research findings have concluded that biased texts have significant effects on target demographic groups. For instance, masculine-worded job advertisements tend to be less appealing to female applicants. In this paper, we present a text-style transfer model that can be trained on non-parallel data and be used to automatically mitigate bias in textual data. Our style transfer model improves on the limitations of many existing text style transfer techniques such as the loss of content information. Our model solves such issues by combining latent content encoding with explicit keyword replacement. We will show that this technique produces better content preservation whilst maintaining good style transfer accuracy. | null | null | 10.18653/v1/2022.naacl-srw.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,163 |
inproceedings | xie-hong-2022-differentially | Differentially Private Instance Encoding against Privacy Attacks | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.22/ | Xie, Shangyu and Hong, Yuan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 172--180 | TextHide was recently proposed to protect the training data via instance encoding in natural language domain. Due to the lack of theoretic privacy guarantee, such instance encoding scheme has been shown to be vulnerable against privacy attacks, e.g., reconstruction attack. To address such limitation, we revise the instance encoding scheme with differential privacy and thus provide a provable guarantee against privacy attacks. The experimental results also show that the proposed scheme can defend against privacy attacks while ensuring learning utility (as a trade-off). | null | null | 10.18653/v1/2022.naacl-srw.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,164 |
inproceedings | luo-etal-2022-simple | A Simple Approach to Jointly Rank Passages and Select Relevant Sentences in the {OBQA} Context | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.23/ | Luo, Man and Chen, Shuguang and Baral, Chitta | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 181--187 | In the open book question answering (OBQA) task, selecting the relevant passages and sentences from distracting information is crucial to reason the answer to a question. HotpotQA dataset is designed to teach and evaluate systems to do both passage ranking and sentence selection. Many existing frameworks use separate models to select relevant passages and sentences respectively. Such systems not only have high complexity in terms of the parameters of models but also fail to take the advantage of training these two tasks together since one task can be beneficial for the other one. In this work, we present a simple yet effective framework to address these limitations by jointly ranking passages and selecting sentences. Furthermore, we propose consistency and similarity constraints to promote the correlation and interaction between passage ranking and sentence selection. The experiments demonstrate that our framework can achieve competitive results with previous systems and outperform the baseline by 28{\%} in terms of exact matching of relevant sentences on the HotpotQA dataset. | null | null | 10.18653/v1/2022.naacl-srw.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,165 |
inproceedings | mince-etal-2022-multimodal | Multimodal Modeling of Task-Mediated Confusion | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.24/ | Mince, Camille and Rhomberg, Skye and Alm, Cecilia and Bailey, Reynold and Ororbia, Alex | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 188--194 | In order to build more human-like cognitive agents, systems capable of detecting various human emotions must be designed to respond appropriately. Confusion, the combination of an emotional and cognitive state, is under-explored. In this paper, we build upon prior work to develop models that detect confusion from three modalities: video (facial features), audio (prosodic features), and text (transcribed speech features). Our research improves the data collection process by allowing for continuous (as opposed to discrete) annotation of confusion levels. We also craft models based on recurrent neural networks (RNNs) given their ability to predict sequential data. In our experiments, we find that text and video modalities are the most important in predicting confusion while the explored audio features are relatively unimportant predictors of confusion in our data. | null | null | 10.18653/v1/2022.naacl-srw.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,166 |
inproceedings | aoyama-schneider-2022-probe | Probe-Less Probing of {BERT}'s Layer-Wise Linguistic Knowledge with Masked Word Prediction | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.25/ | Aoyama, Tatsuya and Schneider, Nathan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 195--201 | The current study quantitatively (and, for illustrative purposes, qualitatively) analyzes BERT's layer-wise masked word prediction on an English corpus, and finds that (1) the layer-wise localization of linguistic knowledge primarily shown in probing studies is replicated in a behavior-based design and (2) syntactic and semantic information is encoded at different layers for words of different syntactic categories. Hypothesizing that the above results are correlated with the number of likely candidates for the masked word prediction, we also investigate how the results differ for tokens within multiword expressions. | null | null | 10.18653/v1/2022.naacl-srw.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,167
inproceedings | lewis-2022-multimodal | Multimodal large language models for inclusive collaboration learning tasks | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.26/ | Lewis, Armanda | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 202--210 | This PhD project leverages advancements in multimodal large language models to build an inclusive collaboration feedback loop, in order to facilitate the automated detection, modeling, and feedback for participants developing general collaboration skills. This topic is important given the role of collaboration as an essential 21st century skill, the potential to ground large language models within learning theory and real-world practice, and the expressive potential of transformer models to support equity and inclusion. We address some concerns of integrating advances in natural language processing into downstream tasks such as the learning analytics feedback loop. | null | null | 10.18653/v1/2022.naacl-srw.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,168 |
inproceedings | takeuchi-etal-2022-neural | Neural Networks in a Product of Hyperbolic Spaces | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.27/ | Takeuchi, Jun and Nishida, Noriki and Nakayama, Hideki | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 211--221 | Machine learning in hyperbolic spaces has attracted much attention in natural language processing and many other fields. In particular, Hyperbolic Neural Networks (HNNs) have improved a wide variety of tasks, from machine translation to knowledge graph embedding. Although some studies have reported the effectiveness of embedding into the product of multiple hyperbolic spaces, HNNs have mainly been constructed in a single hyperbolic space, and their extension to product spaces has not been sufficiently studied. Therefore, we propose a novel method to extend a given HNN in a single space to a product of hyperbolic spaces. We apply our method to Hyperbolic Graph Convolutional Networks (HGCNs), extending several HNNs. Our model improved the graph node classification accuracy especially on datasets with tree-like structures. The results suggest that neural networks in a product of hyperbolic spaces can be more effective than in a single space in representing structural data. | null | null | 10.18653/v1/2022.naacl-srw.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,169 |
inproceedings | yoshikoshi-etal-2022-explicit | Explicit Use of Topicality in Dialogue Response Generation | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.28/ | Yoshikoshi, Takumi and Atarashi, Hayato and Kodama, Takashi and Kurohashi, Sadao | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 222--228 | Current chat dialogue systems consider the topic implicitly given the context, but not explicitly. As a result, these systems often generate responses that are inconsistent with the topic of the moment. In this study, we propose a dialogue system that responds appropriately to the topic by selecting the entity with the highest {\textquotedblleft}topicality.{\textquotedblright} For topicality estimation, the model is trained through self-supervised learning that regards entities appearing in both context and response as topic entities. For response generation, the model is trained to generate topic-relevant responses based on the estimated topicality. Experimental results show that our proposed system can follow the topic better than an existing dialogue system that considers only the context. | null | null | 10.18653/v1/2022.naacl-srw.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,170
inproceedings | a-2022-automating | Automating Human Evaluation of Dialogue Systems | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.29/ | A, Sujan Reddy | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 229--234 | Automated metrics for evaluating dialogue systems, such as BLEU and METEOR, correlate weakly with human judgments. Thus, human evaluation is often used to supplement these metrics for system evaluation. However, human evaluation is time-consuming as well as expensive. This paper provides an alternative approach to human evaluation with respect to three aspects of dialogue systems: naturalness, informativeness, and quality. I propose an approach based on fine-tuning the BERT model with three prediction heads to predict whether the system-generated output is natural, fluent, and informative. I observe that the proposed model achieves an average accuracy of around 77{\%} over these 3 labels. I also design a baseline approach that uses three different BERT models to make the predictions. Based on experimental analysis, I find that using a shared model to compute the three labels performs better than three separate models. | null | null | 10.18653/v1/2022.naacl-srw.29 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,171
inproceedings | culjak-etal-2022-strong | Strong Heuristics for Named Entity Linking | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.30/ | {\v{C}}uljak, Marko and Spitz, Andreas and West, Robert and Arora, Akhil | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 235--246 | Named entity linking (NEL) in news is a challenging endeavour due to the frequency of unseen and emerging entities, which necessitates the use of unsupervised or zero-shot methods. However, such methods tend to come with caveats, such as no integration of suitable knowledge bases (like Wikidata) for emerging entities, a lack of scalability, and poor interpretability. Here, we consider person disambiguation in Quotebank, a massive corpus of speaker-attributed quotations from the news, and investigate the suitability of intuitive, lightweight, and scalable heuristics for NEL in web-scale corpora. Our best performing heuristic disambiguates 94{\%} and 63{\%} of the mentions on Quotebank and the AIDA-CoNLL benchmark, respectively. Additionally, the proposed heuristics compare favourably to the state-of-the-art unsupervised and zero-shot methods, Eigenthemes and mGENRE, respectively, thereby serving as strong baselines for unsupervised and zero-shot entity linking. | null | null | 10.18653/v1/2022.naacl-srw.30 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,172 |
inproceedings | saxena-etal-2022-static | Static and Dynamic Speaker Modeling based on Graph Neural Network for Emotion Recognition in Conversation | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.31/ | Saxena, Prakhar and Huang, Yin Jou and Kurohashi, Sadao | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 247--253 | Each person has a unique personality which affects how they feel and convey emotions. Hence, speaker modeling is important for the task of emotion recognition in conversation (ERC). In this paper, we propose a novel graph-based ERC model which considers both conversational context and speaker personality. We model the internal state of the speaker (personality) as Static and Dynamic speaker states, where the Dynamic speaker state is modeled with a graph neural network based encoder. Experiments on a benchmark dataset show the effectiveness of our model. Our model outperforms the baseline and other graph-based methods. Analysis of the results also shows the importance of explicit speaker modeling. | null | null | 10.18653/v1/2022.naacl-srw.31 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,173
inproceedings | simoulin-crabbe-2022-unifying | Unifying Parsing and Tree-Structured Models for Generating Sentence Semantic Representations | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.33/ | Simoulin, Antoine and Crabb{\'e}, Benoit | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 267--276 | We introduce a novel tree-based model that learns its composition function together with its structure. The architecture produces sentence embeddings by composing words according to an induced syntactic tree. The parsing and the composition functions are explicitly connected and, therefore, learned jointly. As a result, the sentence embedding is computed according to an interpretable linguistic pattern and may be used on any downstream task. We evaluate our encoder on downstream tasks, and we observe that it outperforms tree-based models relying on external parsers. In some configurations, it is even competitive with the BERT base model. Our model is capable of supporting multiple parser architectures. We exploit this property to conduct an ablation study by comparing different parser initializations. We explore to what extent the trees produced by our model compare with linguistic structures and how this initialization impacts downstream performance. We empirically observe that downstream supervision hinders the production of stable parses and the preservation of linguistically relevant structures. | null | null | 10.18653/v1/2022.naacl-srw.33 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,175
inproceedings | sant-etal-2022-multiformer | Multiformer: A Head-Configurable Transformer-Based Model for Direct Speech Translation | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.34/ | Sant, Gerard and G{\'a}llego, Gerard I. and Alastruey, Belen and Costa-juss{\`a}, Marta Ruiz | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 277--284 | Transformer-based models have been achieving state-of-the-art results in several fields of Natural Language Processing. However, their direct application to speech tasks is not trivial. The nature of these sequences carries problems such as long sequence lengths and redundancy between adjacent tokens. Therefore, we believe that the regular self-attention mechanism might not be well suited for them. Different approaches have been proposed to overcome these problems, such as the use of efficient attention mechanisms. However, the use of these methods usually comes at a cost: a performance reduction caused by information loss. In this study, we present the Multiformer, a Transformer-based model which allows the use of different attention mechanisms on each head. By doing this, the model is able to bias the self-attention towards the extraction of more diverse token interactions, and the information loss is reduced. Finally, we perform an analysis of the head contributions, and we observe that those architectures where head relevance is uniformly distributed obtain better results. Our results show that mixing attention patterns along the different heads and layers outperforms our baseline by up to 0.7 BLEU. | null | null | 10.18653/v1/2022.naacl-srw.34 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,176
inproceedings | auersperger-pecina-2022-defending | Defending Compositionality in Emergent Languages | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.35/ | Auersperger, Michal and Pecina, Pavel | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 285--291 | Compositionality has traditionally been understood as a major factor in the productivity of language and, more broadly, human cognition. Yet some recent research has started to question its status, showing that artificial neural networks are good at generalization even without noticeable compositional behavior. We argue that some of these conclusions are too strong and/or incomplete. In the context of a two-agent communication game, we show that compositionality indeed seems essential for successful generalization when the evaluation is done on a suitable dataset. | null | null | 10.18653/v1/2022.naacl-srw.35 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,177
inproceedings | yadavalli-etal-2022-exploring | Exploring the Effect of Dialect Mismatched Language Models in {T}elugu Automatic Speech Recognition | Ippolito, Daphne and Li, Liunian Harold and Pacheco, Maria Leonor and Chen, Danqi and Xue, Nianwen | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-srw.36/ | Yadavalli, Aditya and Mirishkar, Ganesh Sai and Vuppala, Anil | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop | 292--301 | Previous research has found that Acoustic Models (AM) of an Automatic Speech Recognition (ASR) system are susceptible to dialect variations within a language, thereby adversely affecting the ASR. To counter this, researchers have proposed to build a dialect-specific AM while keeping the Language Model (LM) constant for all the dialects. This study explores the effect of a dialect-mismatched LM by considering three different Telugu regional dialects: Telangana, Coastal Andhra, and Rayalaseema. We show that dialect variations that surface in the form of a different lexicon, grammar, and occasionally semantics can significantly degrade the performance of the LM under mismatched conditions. Therefore, this degradation has an adverse effect on the ASR even when a dialect-specific AM is used. We show a degradation of up to 13.13 perplexity points when the LM is used under mismatched conditions. Furthermore, we show a degradation of over 9{\%} and over 15{\%} in Character Error Rate (CER) and Word Error Rate (WER), respectively, in the ASR systems when using mismatched LMs over matched LMs. | null | null | 10.18653/v1/2022.naacl-srw.36 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,178
inproceedings | kharitonov-etal-2022-textless | textless-lib: a Library for Textless Spoken Language Processing | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.1/ | Kharitonov, Eugene and Copet, Jade and Lakhotia, Kushal and Nguyen, Tu Anh and Tomasello, Paden and Lee, Ann and Elkahky, Ali and Hsu, Wei-Ning and Mohamed, Abdelrahman and Dupoux, Emmanuel and Adi, Yossi | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 1--9 | Textless spoken language processing is an exciting area of research that promises to extend the applicability of the standard NLP toolset to spoken language and to languages with few or no textual resources. Here, we introduce textless-lib, a PyTorch-based library aimed at facilitating research in the area. We describe the building blocks that the library provides and demonstrate its usability by discussing three different use-case examples: (i) speaker probing, (ii) speech resynthesis and compression, and (iii) speech continuation. We believe that textless-lib substantially simplifies research in the textless setting and will be useful not only for speech researchers but also for the NLP community at large. | null | null | 10.18653/v1/2022.naacl-demo.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,180
inproceedings | kyjanek-2022-web | Web-based Annotation Interface for Derivational Morphology | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.2/ | Kyj{\'a}nek, Luk{\'a}{\v{s}} | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 10--16 | The paper presents a visual interface for manual annotation of language resources for derivational morphology. The interface is web-based and created using relatively simple programming techniques, and yet it rapidly facilitates and speeds up the annotation process, especially in languages with rich derivational morphology. As such, it can reduce the cost of the process. After introducing manual annotation tasks in derivational morphology, the paper describes the new visual interface and a case study that compares the current annotation method to the annotation using the interface. In addition, it also demonstrates the opportunity to use the interface for manual annotation of syntactic trees. The source codes are freely available under the MIT License on GitHub. | null | null | 10.18653/v1/2022.naacl-demo.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,181 |
inproceedings | alecakir-etal-2022-turkishdelightnlp | {T}urkish{D}elight{NLP}: A Neural {T}urkish {NLP} Toolkit | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.3/ | Alecakir, Huseyin and B{\"o}l{\"u}c{\"u}, Necva and Can, Burcu | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 17--26 | We introduce a neural Turkish NLP toolkit called TurkishDelightNLP that performs computational linguistic analyses from the morphological level to the semantic level, covering tasks such as stemming, morphological segmentation, morphological tagging, part-of-speech tagging, dependency parsing, and semantic parsing, as well as high-level NLP tasks such as named entity recognition. We publicly share the open-source Turkish NLP toolkit through a web interface that allows an input text to be analysed in real-time, as well as the open-source implementation of the components provided in the toolkit, an API, and several annotated datasets, such as a word similarity test set to evaluate word embeddings and UCCA-based semantic annotation in Turkish. This is the first open-source Turkish NLP toolkit that covers a range of NLP tasks at all levels. We believe that it will be useful for other researchers in Turkish NLP and will also be beneficial for other high-level NLP tasks in Turkish. | null | null | 10.18653/v1/2022.naacl-demo.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,182
inproceedings | sainz-etal-2022-zs4ie | {ZS}4{IE}: A toolkit for Zero-Shot Information Extraction with simple Verbalizations | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.4/ | Sainz, Oscar and Qiu, Haoling and Lopez de Lacalle, Oier and Agirre, Eneko and Min, Bonan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 27--38 | The current workflow for Information Extraction (IE) analysts involves the definition of the entities/relations of interest and a training corpus with annotated examples. In this demonstration we introduce a new workflow where the analyst directly verbalizes the entities/relations, which are then used by a Textual Entailment model to perform zero-shot IE. We present the design and implementation of a toolkit with a user interface, as well as experiments on four IE tasks that show that the system achieves very good performance at zero-shot learning, using only 5{--}15 minutes of a user's effort per type. Our demonstration system is open-sourced at \url{https://github.com/BBN-E/ZS4IE}. A demonstration video is available at \url{https://vimeo.com/676138340}. | null | null | 10.18653/v1/2022.naacl-demo.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,183
inproceedings | pichl-etal-2022-flowstorm | Flowstorm: Open-Source Platform with Hybrid Dialogue Architecture | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.5/ | Pichl, Jan and Marek, Petr and Konr{\'a}d, Jakub and Lorenc, Petr and Kobza, Ondrej and Zaj{\'i}{\v{c}}ek, Tom{\'a}{\v{s}} and {\v{S}}ediv{\'y}, Jan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 39--45 | This paper presents a conversational AI platform called Flowstorm. Flowstorm is an open-source SaaS project suitable for creating, running, and analyzing conversational applications. Thanks to the fast and fully automated build process, the dialogues created within the platform can be executed in seconds. Furthermore, we propose a novel dialogue architecture that uses a combination of tree structures with generative models. The tree structures are also used for training NLU models suitable for specific dialogue scenarios. However, the generative models are globally used across applications and extend the functionality of the dialogue trees. Moreover, the platform functionality benefits from out-of-the-box components, such as the one responsible for extracting data from utterances or working with crawled data. Additionally, it can be extended using a custom code directly in the platform. One of the essential features of the platform is the possibility to reuse the created assets across applications. There is a library of prepared assets where each developer can contribute. All of the features are available through a user-friendly visual editor. | null | null | 10.18653/v1/2022.naacl-demo.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,184 |
inproceedings | malandri-etal-2022-contrastive | Contrastive Explanations of Text Classifiers as a Service | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.6/ | Malandri, Lorenzo and Mercorio, Fabio and Mezzanzanica, Mario and Nobani, Navid and Seveso, Andrea | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 46--53 | The recent growth of black-box machine-learning methods in data analysis has increased the demand for explanation methods and tools to understand their behaviour and assist human-ML model cooperation. In this paper, we demonstrate ContrXT, a novel approach that uses natural language explanations to help users comprehend how a black-box model works. ContrXT provides time contrastive (t-contrast) explanations by computing the differences in the classification logic of two different trained models and then reasoning on their symbolic representations through Binary Decision Diagrams. ContrXT is publicly available at ContrXT.ai as a python pip package. | null | null | 10.18653/v1/2022.naacl-demo.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,185
inproceedings | du-etal-2022-resin | {RESIN}-11: Schema-guided Event Prediction for 11 Newsworthy Scenarios | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.7/ | Du, Xinya and Zhang, Zixuan and Li, Sha and Yu, Pengfei and Wang, Hongwei and Lai, Tuan and Lin, Xudong and Wang, Ziqi and Liu, Iris and Zhou, Ben and Wen, Haoyang and Li, Manling and Hannan, Darryl and Lei, Jie and Kim, Hyounghun and Dror, Rotem and Wang, Haoyu and Regan, Michael and Zeng, Qi and Lyu, Qing and Yu, Charles and Edwards, Carl and Jin, Xiaomeng and Jiao, Yizhu and Kazeminejad, Ghazaleh and Wang, Zhenhailong and Callison-Burch, Chris and Bansal, Mohit and Vondrick, Carl and Han, Jiawei and Roth, Dan and Chang, Shih-Fu and Palmer, Martha and Ji, Heng | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 54--63 | We introduce RESIN-11, a new schema-guided event extraction {\&} prediction framework that can be applied to a large variety of newsworthy scenarios. The framework consists of two parts: (1) an open-domain end-to-end multimedia multilingual information extraction system with weak-supervision and zero-shot learning-based techniques, and (2) schema matching and schema-guided event prediction based on our curated schema library. We build a demo website based on our dockerized system; the system and schema library are publicly available for installation (\url{https://github.com/RESIN-KAIROS/RESIN-11}). We also include a video demonstrating the system. | null | null | 10.18653/v1/2022.naacl-demo.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,186
inproceedings | vacareanu-etal-2022-human | A Human-machine Interface for Few-shot Rule Synthesis for Information Extraction | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.8/ | Vacareanu, Robert and Barbosa, George C.G. and Noriega-Atala, Enrique and Hahn-Powell, Gus and Sharp, Rebecca and Valenzuela-Esc{\'a}rcega, Marco A. and Surdeanu, Mihai | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 64--70 | We propose a system that assists a user in constructing transparent information extraction models, consisting of patterns (or rules) written in a declarative language, through program synthesis. Users of our system can specify their requirements through the use of examples, which are collected with a search interface. The rule-synthesis system proposes rule candidates and the results of applying them on a textual corpus; the user has the option to accept the candidate, request another option, or adjust the examples provided to the system. Through an interactive evaluation, we show that our approach generates high-precision rules even in a 1-shot setting. In a second evaluation on a widely-used relation extraction dataset (TACRED), our method generates rules that considerably outperform manually written patterns. Our code, demo, and documentation are available at \url{https://clulab.github.io/odinsynth}. | null | null | 10.18653/v1/2022.naacl-demo.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,187
inproceedings | hu-etal-2022-setsum | {SETS}um: Summarization and Visualization of Student Evaluations of Teaching | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.9/ | Hu, Yinuo and Zhang, Shiyue and Sathy, Viji and Panter, Abigail and Bansal, Mohit | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 71--89 | Student Evaluations of Teaching (SETs) are widely used in colleges and universities. Typically SET results are summarized for instructors in a static PDF report. The report often includes summary statistics for quantitative ratings and an unsorted list of open-ended student comments. The lack of organization and summarization of the raw comments hinders those interpreting the reports from fully utilizing informative feedback, making accurate inferences, and designing appropriate instructional improvements. In this work, we introduce a novel system, SETSUM, that leverages sentiment analysis, aspect extraction, summarization, and visualization techniques to provide organized illustrations of SET findings to instructors and other reviewers. Ten university professors from diverse departments serve as evaluators of the system, and all agree that SETSUM helps them interpret SET results more efficiently; 6 out of 10 instructors prefer our system over the standard static PDF report (while the remaining 4 would like to have both). This demonstrates that our work holds the potential to reform SET reporting conventions in the future. | null | null | 10.18653/v1/2022.naacl-demo.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,188
inproceedings | ding-etal-2022-towards-open | Towards Open-Domain Topic Classification | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.10/ | Ding, Hantian and Yang, Jinrui and Deng, Yuqian and Zhang, Hongming and Roth, Dan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 90--98 | We introduce an open-domain topic classification system that accepts user-defined taxonomies in real time. Users are able to classify a text snippet with respect to any candidate labels they want and get an instant response from our web interface. To obtain such flexibility, we build the backend model in a zero-shot way. By training on a new dataset constructed from Wikipedia, our label-aware text classifier can effectively utilize implicit knowledge in the pretrained language model to handle labels it has never seen before. We evaluate our model across four datasets from various domains with different label sets. Experiments show that the model significantly improves over existing zero-shot baselines in open-domain scenarios, and performs competitively with weakly-supervised models trained on in-domain data. | null | null | 10.18653/v1/2022.naacl-demo.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,189
inproceedings | tuckute-etal-2022-sentspace | {S}ent{S}pace: Large-Scale Benchmarking and Evaluation of Text using Cognitively Motivated Lexical, Syntactic, and Semantic Features | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.11/ | Tuckute, Greta and Sathe, Aalok and Wang, Mingye and Yoder, Harley and Shain, Cory and Fedorenko, Evelina | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 99--113 | SentSpace is a modular framework for streamlined evaluation of text. SentSpace characterizes textual input using diverse lexical, syntactic, and semantic features derived from corpora and psycholinguistic experiments. Core sentence features fall into three primary feature spaces: 1) Lexical, 2) Contextual, and 3) Embeddings. To aid in the analysis of computed features, SentSpace provides a web interface for interactive visualization and comparison with text from large corpora. The modular design of SentSpace allows researchers to easily integrate their own feature computation into the pipeline while benefiting from a common framework for evaluation and visualization. In this manuscript we will describe the design of SentSpace, its core feature spaces, and demonstrate an example use case by comparing human-written and machine-generated (GPT2-XL) sentences to each other. We find that while GPT2-XL-generated text appears fluent at the surface level, psycholinguistic norms and measures of syntactic processing reveal key differences between text produced by humans and machines. Thus, SentSpace provides a broad set of cognitively motivated linguistic features for evaluation of text within natural language processing, cognitive science, as well as the social sciences. | null | null | 10.18653/v1/2022.naacl-demo.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,190
inproceedings | zhang-etal-2022-paddlespeech | {P}addle{S}peech: An Easy-to-Use All-in-One Speech Toolkit | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.12/ | Zhang, Hui and Yuan, Tian and Chen, Junkun and Li, Xintong and Zheng, Renjie and Huang, Yuxin and Chen, Xiaojie and Gong, Enlei and Chen, Zeyu and Hu, Xiaoguang and Yu, Dianhai and Ma, Yanjun and Huang, Liang | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 114--123 | PaddleSpeech is an open-source all-in-one speech toolkit. It aims at facilitating the development and research of speech processing technologies by providing an easy-to-use command-line interface and a simple code structure. This paper describes the design philosophy and core architecture of PaddleSpeech to support several essential speech-to-text and text-to-speech tasks. PaddleSpeech achieves competitive or state-of-the-art performance on various speech datasets and implements the most popular methods. It also provides recipes and pretrained models to quickly reproduce the experimental results in this paper. PaddleSpeech is publicly available at \url{https://github.com/PaddlePaddle/PaddleSpeech}. | null | null | 10.18653/v1/2022.naacl-demo.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,191
inproceedings | etezadi-etal-2022-dadmatools | {D}adma{T}ools: Natural Language Processing Toolkit for {P}ersian Language | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.13/ | Etezadi, Romina and Karrabi, Mohammad and Zare, Najmeh and Sajadi, Mohamad Bagher and Pilehvar, Mohammad Taher | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 124--130 | We introduce DadmaTools, an open-source Python Natural Language Processing toolkit for the Persian language. The toolkit is a neural pipeline based on spaCy for several text processing tasks, including normalization, tokenization, lemmatization, part-of-speech tagging, dependency parsing, constituency parsing, chunking, and ezafe detection. DadmaTools relies on fine-tuning of ParsBERT using the PerDT dataset for most of the tasks. A dataset module and an embedding module are included in DadmaTools, supporting different Persian datasets and embeddings along with commonly used functions for them. Our evaluations show that DadmaTools can attain state-of-the-art performance on multiple NLP tasks. The source code is freely available at \url{https://github.com/Dadmatech/DadmaTools}. | null | null | 10.18653/v1/2022.naacl-demo.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,192
inproceedings | nguyen-etal-2022-famie | {FAMIE}: A Fast Active Learning Framework for Multilingual Information Extraction | Hajishirzi, Hannaneh and Ning, Qiang and Sil, Avi | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-demo.14/ | Nguyen, Minh Van and Ngo, Nghia and Min, Bonan and Nguyen, Thien | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations | 131--139 | This paper presents FAMIE, a comprehensive and efficient active learning (AL) toolkit for multilingual information extraction. FAMIE is designed to address a fundamental problem in existing AL frameworks where annotators need to wait for a long time between annotation batches due to the time-consuming nature of model training and data selection at each AL iteration. This hinders the engagement, productivity, and efficiency of annotators. Based on the idea of using a small proxy network for fast data selection, we introduce a novel knowledge distillation mechanism to synchronize the proxy network with the main large model (i.e., BERT-based) to ensure the appropriateness of the selected annotation examples for the main model. Our AL framework can support multiple languages. The experiments demonstrate the advantages of FAMIE in terms of competitive performance and time efficiency for sequence labeling with AL. We publicly release our code (\url{https://github.com/nlp-uoregon/famie}) and demo website (\url{http://nlp.uoregon.edu:9000/}). A demo video for FAMIE is provided at: \url{https://youtu.be/I2i8n_jAyrY} | null | null | 10.18653/v1/2022.naacl-demo.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,193 |
inproceedings | malmi-etal-2022-text | Text Generation with Text-Editing Models | Ballesteros, Miguel and Tsvetkov, Yulia and Alm, Cecilia O. | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-tutorials.1/ | Malmi, Eric and Dong, Yue and Mallinson, Jonathan and Chuklin, Aleksandr and Adamek, Jakub and Mirylenka, Daniil and Stahlberg, Felix and Krause, Sebastian and Kumar, Shankar and Severyn, Aliaksei | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts | 1--7 | Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, text simplification, and style transfer. These tasks share a common trait {--} they exhibit a large amount of textual overlap between the source and target texts. Text-editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate outputs word-by-word from scratch thus making them slow at inference time. Text-editing models provide several benefits over seq2seq models including faster inference speed, higher sample efficiency, and better control and interpretability of the outputs. This tutorial provides a comprehensive overview of the text-edit based models and current state-of-the-art approaches analyzing their pros and cons. We discuss challenges related to deployment and how these models help to mitigate hallucination and bias, both pressing challenges in the field of text generation. | null | null | 10.18653/v1/2022.naacl-tutorials.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,195 |
inproceedings | lee-etal-2022-self | Self-supervised Representation Learning for Speech Processing | Ballesteros, Miguel and Tsvetkov, Yulia and Alm, Cecilia O. | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-tutorials.2/ | Lee, Hung-yi and Mohamed, Abdelrahman and Watanabe, Shinji and Sainath, Tara and Livescu, Karen and Li, Shang-Wen and Yang, Shu-wen and Kirchhoff, Katrin | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts | 8--13 | There is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised representation learning (SSL) utilizes proxy supervised learning tasks, for example, distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to obtain training data from unlabeled corpora. BERT and GPT in NLP and SimCLR and BYOL in CV are famous examples in this direction. These approaches make it possible to use a tremendous amount of unlabeled data available on the web to train large networks and solve complicated tasks. Thus, SSL has the potential to scale up current machine learning technologies, especially for low-resourced, under-represented use cases, and democratize the technologies. Recently self-supervised approaches for speech processing are also gaining popularity. There are several workshops on relevant topics hosted at ICML 2020 (\url{https://icml-sas.gitlab.io/}), NeurIPS 2020 (\url{https://neurips-sas-2020.github.io/}), and AAAI 2022 (\url{https://aaai-sas-2022.github.io/}). However, to the authors' best knowledge, there is no previous tutorial on a similar topic. Due to the growing popularity of SSL, and the shared mission of the areas in bringing speech and language technologies to more use cases with better quality and scaling the technologies for under-represented languages, we propose this tutorial to systematically survey the latest SSL techniques, tools, datasets, and performance achievements in speech processing. The proposed tutorial is highly relevant to the special theme of ACL about language diversity. One of the main focuses of the tutorial is leveraging SSL to reduce the dependence of speech technologies on labeled data, and to scale up the technologies especially for under-represented languages and use cases. | null | null | 10.18653/v1/2022.naacl-tutorials.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,196
inproceedings | chen-etal-2022-new | New Frontiers of Information Extraction | Ballesteros, Miguel and Tsvetkov, Yulia and Alm, Cecilia O. | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-tutorials.3/ | Chen, Muhao and Huang, Lifu and Li, Manling and Zhou, Ben and Ji, Heng and Roth, Dan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts | 14--25 | This tutorial targets researchers and practitioners who are interested in AI and ML technologies for structural information extraction (IE) from unstructured textual sources. Particularly, this tutorial will provide the audience with a systematic introduction to recent advances in IE, by answering several important research questions. These questions include (i) how to develop a robust IE system from noisy, insufficient training data, while ensuring the reliability of its predictions? (ii) how to foster the generalizability of IE through enhancing the system's cross-lingual, cross-domain, cross-task and cross-modal transferability? (iii) how to precisely support extracting structural information with extremely fine-grained, diverse and boundless labels? (iv) how to further improve IE by leveraging indirect supervision from other NLP tasks, such as NLI, QA or summarization, and pre-trained language models? (v) how to acquire knowledge to guide the inference of IE systems? We will discuss several lines of frontier research that tackle these challenges, and will conclude the tutorial by outlining directions for further investigation. | null | null | 10.18653/v1/2022.naacl-tutorials.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,197
inproceedings | boyd-graber-etal-2022-human | Human-Centered Evaluation of Explanations | Ballesteros, Miguel and Tsvetkov, Yulia and Alm, Cecilia O. | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-tutorials.4/ | Boyd-Graber, Jordan and Carton, Samuel and Feng, Shi and Liao, Q. Vera and Lombrozo, Tania and Smith-Renner, Alison and Tan, Chenhao | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts | 26--32 | The NLP community is increasingly interested in providing explanations for NLP models to help people make sense of model behavior and potentially improve human interaction with models. In addition to computational challenges in generating these explanations, evaluations of the generated explanations require human-centered perspectives and approaches. This tutorial will provide an overview of human-centered evaluations of explanations. First, we will give a brief introduction to the psychological foundation of explanations as well as types of NLP model explanations and their corresponding presentation, to provide the necessary background. We will then present a taxonomy of human-centered evaluation of explanations and dive in depth into the two categories: 1) evaluation based on human-annotated explanations; 2) evaluation with human-subjects studies. We will conclude by discussing future directions. We will also adopt a flipped format to maximize the interactive components for the live audience. | null | null | 10.18653/v1/2022.naacl-tutorials.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,198
inproceedings | morency-etal-2022-tutorial | Tutorial on Multimodal Machine Learning | Ballesteros, Miguel and Tsvetkov, Yulia and Alm, Cecilia O. | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-tutorials.5/ | Morency, Louis-Philippe and Liang, Paul Pu and Zadeh, Amir | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts | 33--38 | Multimodal machine learning involves integrating and modeling information from multiple heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world applications in multimedia, affective computing, robotics, finance, HCI, and healthcare. This tutorial, building upon a new edition of a survey paper on multimodal ML as well as previously-given tutorials and academic courses, will describe an updated taxonomy on multimodal machine learning synthesizing its core technical challenges and major directions for future research. | null | null | 10.18653/v1/2022.naacl-tutorials.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,199 |
inproceedings | zhang-etal-2022-contrastive-data | Contrastive Data and Learning for Natural Language Processing | Ballesteros, Miguel and Tsvetkov, Yulia and Alm, Cecilia O. | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-tutorials.6/ | Zhang, Rui and Ji, Yangfeng and Zhang, Yue and Passonneau, Rebecca J. | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorial Abstracts | 39--47 | Current NLP models heavily rely on effective representation learning algorithms. Contrastive learning is one such technique to learn an embedding space such that similar data sample pairs have close representations while dissimilar samples stay far apart from each other. It can be used in supervised or unsupervised settings using different loss functions to produce task-specific or general-purpose representations. While it originally enabled success in vision tasks, recent years have seen a growing number of publications in contrastive NLP. This line of work not only delivers promising performance improvements in various NLP tasks, but also provides desired characteristics such as task-agnostic sentence representation, faithful text generation, data-efficient learning in zero-shot and few-shot settings, interpretability and explainability. In this tutorial, we aim to provide a gentle introduction to the fundamentals of contrastive learning approaches and the theory behind them. We then survey the benefits and the best practices of contrastive learning for various downstream NLP applications including Text Classification, Question Answering, Summarization, Text Generation, Interpretability and Explainability, Commonsense Knowledge and Reasoning, and Vision-and-Language. This tutorial intends to help researchers in the NLP and computational linguistics community to understand this emerging topic and promote future research directions of using contrastive learning for NLP applications. | null | null | 10.18653/v1/2022.naacl-tutorials.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,200
inproceedings | kachuee-etal-2022-scalable | Scalable and Robust Self-Learning for Skill Routing in Large-Scale Conversational {AI} Systems | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.1/ | Kachuee, Mohammad and Nam, Jinseok and Ahuja, Sarthak and Won, Jin-Myung and Lee, Sungjin | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 1--8 | Skill routing is an important component in large-scale conversational systems. In contrast to traditional rule-based skill routing, state-of-the-art systems use a model-based approach to enable natural conversations. To provide supervision signal required to train such models, ideas such as human annotation, replication of a rule-based system, relabeling based on user paraphrases, and bandit-based learning were suggested. However, these approaches: (a) do not scale in terms of the number of skills and skill on-boarding, (b) require a very costly expert annotation/rule-design, (c) introduce risks in the user experience with each model update. In this paper, we present a scalable self-learning approach to explore routing alternatives without causing abrupt policy changes that break the user experience, learn from the user interaction, and incrementally improve the routing via frequent model refreshes. To enable such robust frequent model updates, we suggest a simple and effective approach that ensures controlled policy updates for individual domains, followed by an off-policy evaluation for making deployment decisions without any need for lengthy A/B experimentation. We conduct various offline and online A/B experiments on a commercial large-scale conversational system to demonstrate the effectiveness of the proposed method in real-world production settings. | null | null | 10.18653/v1/2022.naacl-industry.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,202 |
inproceedings | wei-etal-2022-creater | {CREATER}: {CTR}-driven Advertising Text Generation with Controlled Pre-Training and Contrastive Fine-Tuning | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.2/ | Wei, Penghui and Yang, Xuanhua and Liu, ShaoGuo and Wang, Liang and Zheng, Bo | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 9--17 | This paper focuses on automatically generating the text of an ad, and the goal is that the generated text can capture user interest for achieving a higher click-through rate (CTR). We propose CREATER, a CTR-driven advertising text generation approach, to generate ad texts based on high-quality user reviews. To incorporate the CTR objective, our model learns from online A/B test data with contrastive learning, which encourages the model to generate ad texts that obtain higher CTR. To make use of large-scale unpaired reviews, we design a customized self-supervised objective reducing the gap between pre-training and fine-tuning. Experiments on industrial datasets show that CREATER significantly outperforms current approaches. It has been deployed online in a leading advertising platform and brings an uplift on core online metrics. | null | null | 10.18653/v1/2022.naacl-industry.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,203
inproceedings | uthus-etal-2022-augmenting | Augmenting Poetry Composition with {V}erse by {V}erse | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.3/ | Uthus, David and Voitovich, Maria and Mical, R.J. | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 18--26 | We describe Verse by Verse, our experiment in augmenting the creative process of writing poetry with an AI. We have created a group of AI poets, styled after various American classic poets, that can offer generated lines of verse as suggestions while a user is composing a poem. In this paper, we describe the underlying system that offers these suggestions. This includes a generative model, tasked with generating offline a large corpus of lines of verse that are then stored in an index, and a dual-encoder model tasked with recommending the next possible set of verses from our index given the previous line of verse. | null | null | 10.18653/v1/2022.naacl-industry.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,204
inproceedings | petegrosso-etal-2022-ab | {AB}/{BA} analysis: A framework for estimating keyword spotting recall improvement while maintaining audio privacy | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.4/ | Petegrosso, Raphael and Baderdinnni, VasistaKrishna and Senechal, Thibaud and Bullough, Benjamin | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 27--36 | Evaluation of keyword spotting (KWS) systems that detect keywords in speech is a challenging task under realistic privacy constraints. The KWS is designed to only collect data when the keyword is present, limiting the availability of hard samples that may contain false negatives, and preventing direct estimation of model recall from production data. Alternatively, complementary data collected from other sources may not be fully representative of the real application. In this work, we propose an evaluation technique which we call AB/BA analysis. Our framework evaluates a candidate KWS model B against a baseline model A, using cross-dataset offline decoding for relative recall estimation, without requiring negative examples. Moreover, we propose a formulation with assumptions that allow estimation of relative false positive rate between models with low variance even when the number of false positives is small. Finally, we propose to leverage machine-generated soft labels, in a technique we call Semi-Supervised AB/BA analysis, that improves the analysis time, privacy, and cost. Experiments with both simulation and real data show that AB/BA analysis is successful at measuring recall improvement in conjunction with the trade-off in relative false positive rate. | null | null | 10.18653/v1/2022.naacl-industry.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,205 |
inproceedings | gaspers-etal-2022-temporal | Temporal Generalization for Spoken Language Understanding | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.5/ | Gaspers, Judith and Kumar, Anoop and Ver Steeg, Greg and Galstyan, Aram | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 37--44 | Spoken Language Understanding (SLU) models in industry applications are usually trained offline on historic data, but have to perform well on incoming user requests after deployment. Since the application data is not available at training time, this is formally similar to the domain generalization problem, where domains correspond to different temporal segments of the data, and the goal is to build a model that performs well on unseen domains, e.g., upcoming data. In this paper, we explore different strategies for achieving good temporal generalization, including instance weighting, temporal fine-tuning, learning temporal features and building a temporally-invariant model. Our results on data of large-scale SLU systems show that temporal information can be leveraged to improve temporal generalization for SLU models. | null | null | 10.18653/v1/2022.naacl-industry.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,206 |
inproceedings | asi-etal-2022-end | An End-to-End Dialogue Summarization System for Sales Calls | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.6/ | Asi, Abedelkadir and Wang, Song and Eisenstadt, Roy and Geckt, Dean and Kuper, Yarin and Mao, Yi and Ronen, Royi | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 45--53 | Summarizing sales calls is a routine task performed manually by salespeople. We present a production system which combines generative models fine-tuned for customer-agent setting, with a human-in-the-loop user experience for an interactive summary curation process. We address challenging aspects of dialogue summarization task in a real-world setting including long input dialogues, content validation, lack of labeled data and quality evaluation. We show how GPT-3 can be leveraged as an offline data labeler to handle training data scarcity and accommodate privacy constraints in an industrial setting. Experiments show significant improvements by our models in tackling the summarization and content validation tasks on public datasets. | null | null | 10.18653/v1/2022.naacl-industry.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,207 |
inproceedings | kumar-etal-2022-controlled | Controlled Data Generation via Insertion Operations for {NLU} | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.7/ | Kumar, Manoj and Merhav, Yuval and Khan, Haidar and Gupta, Rahul and Rumshisky, Anna and Hamza, Wael | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 54--61 | Use of synthetic data is rapidly emerging as a realistic alternative to manually annotating live traffic for industry-scale model building. Manual data annotation is slow, expensive and not preferred for meeting customer privacy expectations. Further, commercial natural language applications are required to support continuously evolving features as well as newly added experiences. To address these requirements, we propose a targeted synthetic data generation technique by inserting tokens into a given semantic signature. The generated data are used as additional training samples in the tasks of intent classification and named entity recognition. We evaluate on a real-world voice assistant dataset, and using only 33{\%} of the available training set, we achieve the same accuracy as training with all available data. Further, we analyze the effects of data generation across varied real-world applications and propose heuristics that improve the task performance further. | null | null | 10.18653/v1/2022.naacl-industry.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,208 |
inproceedings | li-etal-2022-easy | Easy and Efficient Transformer: Scalable Inference Solution For Large {NLP} Model | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.8/ | Li, Gongzheng and Xi, Yadong and Ding, Jingzhen and Wang, Duan and Luo, Ziyang and Zhang, Rongsheng and Liu, Bai and Fan, Changjie and Mao, Xiaoxi and Zhao, Zeng | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 62--68 | Recently, large-scale transformer-based models have been proven to be effective over various tasks across many domains. Nevertheless, applying them in industrial production requires tedious and heavy work to reduce inference costs. To fill such a gap, we introduce a scalable inference solution: \textbf{Easy and Efficient Transformer (EET)}, including a series of transformer inference optimizations at the algorithm and implementation levels. First, we design highly optimized kernels for long inputs and large hidden sizes. Second, we propose a flexible CUDA memory manager to reduce the memory footprint when deploying a large model. Compared with the state-of-the-art transformer inference library (Faster Transformer v4.0), EET can achieve an average of 1.40-4.20x speedup on the transformer decoder layer with an A100 GPU. | null | null | 10.18653/v1/2022.naacl-industry.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,209
inproceedings | murakami-etal-2022-aspect | Aspect-based Analysis of Advertising Appeals for Search Engine Advertising | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.9/ | Murakami, Soichiro and Zhang, Peinan and Hoshino, Sho and Kamigaito, Hidetaka and Takamura, Hiroya and Okumura, Manabu | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 69--78 | Writing an ad text that attracts people and persuades them to click or act is essential for the success of search engine advertising. Therefore, ad creators must consider various aspects of advertising appeals (A$^3$) such as the price, product features, and quality. However, products and services exhibit unique effective A$^3$ for different industries. In this work, we focus on exploring the effective A$^3$ for different industries with the aim of assisting the ad creation process. To this end, we created a dataset of advertising appeals and used an existing model that detects various aspects for ad texts. Our experiments demonstrated through correlation analysis that different industries have their own effective A$^3$ and that the identification of the A$^3$ contributes to the estimation of advertising performance. | null | null | 10.18653/v1/2022.naacl-industry.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,210
inproceedings | zhao-etal-2022-self | Self-supervised Product Title Rewrite for Product Listing Ads | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.10/ | Zhao, Xue and Liu, Dayiheng and Ding, Junwei and Yao, Liang and Yan, Mahone and Wang, Huibo and Yao, Wenqing | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 79--85 | Product Listing Ads (PLAs) are the primary online advertisements that merchants pay for to attract more customers. However, merchants tend to stack various attributes into the title and neglect fluency and information priority. These seller-created titles are not suitable for PLAs as they fail to highlight the core information in the visible part of PLA titles. In this work, we present a title rewrite solution. Specifically, we train a self-supervised language model to generate high-quality titles in terms of fluency and information priority. Extensive offline tests and a real-world online test have demonstrated that our solution is effective in reducing the cost and gaining more profit, as it lowers our CPC and CPB while substantially improving CTR in the online test. | null | null | 10.18653/v1/2022.naacl-industry.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,211
inproceedings | leung-tan-2022-efficient | Efficient Semi-supervised Consistency Training for Natural Language Understanding | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.11/ | Leung, George and Tan, Joshua | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 86--93 | Manually labeled training data is expensive, noisy, and often scarce, such as when developing new features or localizing existing features for a new region. In cases where labeled data is limited but unlabeled data is abundant, semi-supervised learning methods such as consistency training can be used to improve model performance, by training models to output consistent predictions between original and augmented versions of unlabeled data. In this work, we explore different data augmentation methods for consistency training (CT) on Natural Language Understanding (NLU) domain classification (DC) in the limited labeled data regime. We explore three types of augmentation techniques (human paraphrasing, back-translation, and dropout) for unlabeled data and train DC models to jointly minimize both the supervised loss and the consistency loss on unlabeled data. Our results demonstrate that DC models trained with CT methods and dropout-based augmentation on only 0.1{\%} (2,998 instances) of labeled data with the remainder as unlabeled can achieve a top-1 relative accuracy reduction of 12.25{\%} compared to a fully supervised model trained with 100{\%} of labeled data, outperforming fully supervised models trained on 10x that amount of labeled data. The dropout-based augmentation achieves similar performance compared to back-translation based augmentation with far fewer computational resources. This paves the way for applications of using large-scale unlabeled data for semi-supervised learning in production NLU systems. | null | null | 10.18653/v1/2022.naacl-industry.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,212
inproceedings | sircar-etal-2022-distantly | Distantly Supervised Aspect Clustering And Naming For {E}-Commerce Reviews | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.12/ | Sircar, Prateek and Chakrabarti, Aniket and Gupta, Deepak and Majumdar, Anirban | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 94--102 | Product aspect extraction from reviews is a critical task for e-commerce services to understand customer preferences and pain points. While aspect phrases extraction and sentiment analysis have received a lot of attention, clustering of aspect phrases and assigning human readable names to clusters in e-commerce reviews is an extremely important and challenging problem due to the scale of the reviews that makes human review infeasible. In this paper, we propose fully automated methods for clustering aspect words and generating human readable names for the clusters without any manually labeled data. We train transformer based sentence embeddings that are aware of unique e-commerce language characteristics (eg. incomplete sentences, spelling and grammar errors, vernacular etc.). We also train transformer based sequence to sequence models to generate human readable aspect names from clusters. Both the models are trained using heuristic based distant supervision. Additionally, the models are used to improve each other. Extensive empirical testing showed that the clustering model improves the Silhouette Score by 64{\%} when compared to the state-of-the-art baseline and the aspect naming model achieves a high ROUGE-L score of 0.79. | null | null | 10.18653/v1/2022.naacl-industry.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,213 |
inproceedings | grishina-sorokin-2022-local | Local-to-global learning for iterative training of production {SLU} models on new features | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.13/ | Grishina, Yulia and Sorokin, Daniil | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 103--111 | In production SLU systems, new training data becomes available with time so that ML models need to be updated on a regular basis. Specifically, releasing new features adds new classes of data while the old data remains constant. However, retraining the full model each time from scratch is computationally expensive. To address this problem, we propose to consider production releases from the curriculum learning perspective and to adapt the local-to-global learning (LGL) schedule (Cheng et al., 2019) for a statistical model that starts with fewer output classes and adds more classes with each iteration. We report experiments for the tasks of intent classification and slot filling in the context of a production voice-assistant. First, we apply the original LGL schedule on our data and then adapt LGL to the production setting where the full data is not available at initial training iterations. We demonstrate that our method improves model error rates by 7.3{\%} and saves up to 25{\%} training time for individual iterations. | null | null | 10.18653/v1/2022.naacl-industry.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,214
inproceedings | li-etal-2022-culg | {CULG}: Commercial Universal Language Generation | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.14/ | Li, Haonan and Huang, Yameng and Gong, Yeyun and Jiao, Jian and Zhang, Ruofei and Baldwin, Timothy and Duan, Nan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 112--120 | Pre-trained language models (PLMs) have dramatically improved performance for many natural language processing (NLP) tasks in domains such as finance and healthcare. However, the application of PLMs in the domain of commerce, especially marketing and advertising, remains less studied. In this work, we adapt pre-training methods to the domain of commerce, by proposing CULG, a large-scale commercial universal language generation model which is pre-trained on a corpus drawn from 10 markets across 7 languages. We propose 4 commercial generation tasks and a two-stage training strategy for pre-training, and demonstrate that the proposed strategy yields performance improvements on three generation tasks as compared to single-stage pre-training. Extensive experiments show that our model outperforms other models by a large margin on commercial generation tasks, and we conclude with a discussion on additional applications over other markets, languages, and tasks. | null | null | 10.18653/v1/2022.naacl-industry.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,215 |
inproceedings | gueudre-jose-2022-constraining | Constraining word alignments with posterior regularization for label transfer | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.15/ | Jose, Kevin and Gueudre, Thomas | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 121--129 | Unsupervised word alignments offer a lightweight and interpretable method to transfer labels from high- to low-resource languages, as long as semantically related words have the same label across languages. But such an assumption is often not true in industrial NLP pipelines, where multilingual annotation guidelines are complex and deviate from semantic consistency due to various factors (such as annotation difficulty, conflicting ontology, upcoming feature launches, etc.). We address this difficulty by constraining the alignment models to remain consistent with both source and target annotation guidelines, leveraging posterior regularization and labeled examples. We illustrate the overall approach using IBM 2 (fast{\_}align) as a base model, and report results on both internal and external annotated datasets. We measure consistent accuracy improvements on the MultiATIS++ dataset over AWESoME, a popular transformer-based alignment model, in the label projection task ($+2.7\%$ at word-level and $+15\%$ at sentence-level), and show how even a small amount of target language annotations helps substantially. | null | null | 10.18653/v1/2022.naacl-industry.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,216
inproceedings | sehanobish-etal-2022-explaining | Explaining the Effectiveness of Multi-Task Learning for Efficient Knowledge Extraction from Spine {MRI} Reports | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.16/ | Sehanobish, Arijit and Sandora, McCullen and Abraham, Nabila and Pawar, Jayashri and Torres, Danielle and Das, Anasuya and Becker, Murray and Herzog, Richard and Odry, Benjamin and Vianu, Ron | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 130--140 | Pretrained Transformer-based models finetuned on domain-specific corpora have changed the landscape of NLP. However, training or fine-tuning these models for individual tasks can be time-consuming and resource-intensive. Thus, a lot of current research is focused on using transformers for multi-task learning (Raffel et al., 2020) and how to group the tasks to help a multi-task model to learn effective representations that can be shared across tasks (Standley et al., 2020; Fifty et al., 2021). In this work, we show that a single multi-tasking model can match the performance of task-specific models when the task-specific models show similar representations across all of their hidden layers and their gradients are aligned, i.e. their gradients follow the same direction. We hypothesize that the above observations explain the effectiveness of multi-task learning. We validate our observations on our internal radiologist-annotated datasets on the cervical and lumbar spine. Our method is simple and intuitive, and can be used in a wide range of NLP problems. | null | null | 10.18653/v1/2022.naacl-industry.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,217
inproceedings | khaziev-etal-2022-fpi | {FPI}: Failure Point Isolation in Large-scale Conversational Assistants | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.17/ | Khaziev, Rinat and Shahid, Usman and R{\"o}ding, Tobias and Chada, Rakesh and Kapanci, Emir and Natarajan, Pradeep | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 141--148 | Large-scale conversational assistants such as Cortana, Alexa, Google Assistant and Siri process requests through a series of modules for wake word detection, speech recognition, language understanding and response generation. An error in one of these modules can cascade through the system. Given the large traffic volumes in these assistants, it is infeasible to manually analyze the data, identify requests with processing errors and isolate the source of error. We present a machine learning system to address this challenge. First, we embed the incoming request and context, such as system response and subsequent turns, using pre-trained transformer models. Then, we combine these embeddings with encodings of additional metadata features (such as confidence scores from different modules in the online system) using a {\textquotedblleft}mixing-encoder{\textquotedblright} to output the failure point predictions. Our system obtains 92.2{\%} of human performance on this task while scaling to analyze the entire traffic in 8 different languages of a large-scale conversational assistant. We present detailed ablation studies analyzing the impact of different modeling choices. | null | null | 10.18653/v1/2022.naacl-industry.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,218
inproceedings | lu-etal-2022-asynchronous | Asynchronous Convergence in Multi-Task Learning via Knowledge Distillation from Converged Tasks | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.18/ | Lu, Weiyi and Rajagopalan, Sunny and Nigam, Priyanka and Singh, Jaspreet and Sun, Xiaodi and Xu, Yi and Zeng, Belinda and Chilimbi, Trishul | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 149--159 | Multi-task learning (MTL) aims to solve multiple tasks jointly by sharing a base representation among them. This can lead to more efficient learning and better generalization, as compared to learning each task individually. However, one issue that often arises in MTL is the convergence speed between tasks varies due to differences in task difficulty, so it can be a challenge to simultaneously achieve the best performance on all tasks with a single model checkpoint. Various techniques have been proposed to address discrepancies in task convergence rate, including weighting the per-task losses and modifying task gradients. In this work, we propose a novel approach that avoids the problem of requiring all tasks to converge at the same rate, but rather allows for {\textquotedblleft}asynchronous{\textquotedblright} convergence among the tasks where each task can converge on its own schedule. As our main contribution, we monitor per-task validation metrics and switch to a knowledge distillation loss once a task has converged instead of continuing to train on the true labels. This prevents the model from overfitting on converged tasks while it learns the remaining tasks. We evaluate the proposed method in two 5-task MTL setups consisting of internal e-commerce datasets. The results show that our method consistently outperforms existing loss weighting and gradient balancing approaches, achieving average improvements of 0.9{\%} and 1.5{\%} over the best performing baseline model in the two setups, respectively. | null | null | 10.18653/v1/2022.naacl-industry.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,219 |
inproceedings | joshi-etal-2022-augmenting | Augmenting Training Data for Massive Semantic Matching Models in Low-Traffic {E}-commerce Stores | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.19/ | Joshi, Ashutosh and Vishwanath, Shankar and Teo, Choon and Petricek, Vaclav and Vishwanathan, Vishy and Bhagat, Rahul and May, Jonathan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 160--167 | Extreme multi-label classification (XMC) systems have been successfully applied in e-commerce (Shen et al., 2020; Dahiya et al., 2021) for retrieving products based on customer behavior. Such systems require large amounts of customer behavior data (e.g. queries, clicks, purchases) for training. However, behavioral data is limited in low-traffic e-commerce stores, impacting performance of these systems. In this paper, we present a technique that augments behavioral training data via query reformulation. We use the Aggregated Label eXtreme Multi-label Classification (AL-XMC) system (Shen et al., 2020) as an example semantic matching model and show via crowd-sourced human judgments that, when the training data is augmented through query reformulations, the quality of AL-XMC improves over a baseline that does not use query reformulation. We also show in online A/B tests that our method significantly improves business metrics for the AL-XMC model. | null | null | 10.18653/v1/2022.naacl-industry.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,220 |
inproceedings | biswas-etal-2022-retrieval | Retrieval Based Response Letter Generation For a Customer Care Setting | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.20/ | Biswas, Biplob and Cui, Renhao and Ramnath, Rajiv | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 168--175 | Letter-like communications (such as email) are a major means of customer relationship management within customer-facing organizations. These communications are initiated on a channel by requests from customers and then responded to by the organization on the same channel. For decades, the job has almost entirely been conducted by human agents who attempt to provide the most appropriate reaction to the request. Rules have been made to standardize the overall customer service process and make sure the customers receive professional responses. Recent progress in natural language processing has made it possible to automate response generation. However, the diversity and open nature of customer queries and the lack of structured knowledge bases make this task even more challenging than typical task-oriented language generation tasks. Keeping those obstacles in mind, we propose a deep-learning based response letter generation framework that attempts to retrieve knowledge from historical responses and utilize it to generate an appropriate reply. Our model uses data augmentation to address the insufficiency of query-response pairs and employs a ranking mechanism to choose the best response from multiple potential options. We show that our technique outperforms the baselines by significant margins while producing consistent and informative responses. | null | null | 10.18653/v1/2022.naacl-industry.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,221 |
inproceedings | ziletti-etal-2022-medical | {M}edical Coding with Biomedical Transformer Ensembles and Zero/Few-shot Learning | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.21/ | Ziletti, Angelo and Akbik, Alan and Berns, Christoph and Herold, Thomas and Legler, Marion and Viell, Martina | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 176--187 | Medical coding (MC) is an essential prerequisite for reliable data retrieval and reporting. Given a free-text \textit{reported term} (RT) such as {\textquotedblleft}pain of right thigh to the knee{\textquotedblright}, the task is to identify the matching \textit{lowest-level term} (LLT) {--}in this case {\textquotedblleft}unilateral leg pain{\textquotedblright}{--} from a very large and continuously growing repository of standardized medical terms. However, automating this task is challenging due to a large number of LLT codes (as of writing over $80\,000$), limited availability of training data for long tail/emerging classes, and the general high accuracy demands of the medical domain. With this paper, we introduce the MC task, discuss its challenges, and present a novel approach called xTARS that combines traditional BERT-based classification with a recent zero/few-shot learning approach (TARS). We present extensive experiments that show that our combined approach outperforms strong baselines, especially in the few-shot regime. The approach is developed and deployed at Bayer, live since November 2021. As we believe our approach is potentially promising beyond MC, and to ensure reproducibility, we release the code to the research community. | null | null | 10.18653/v1/2022.naacl-industry.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,222
inproceedings | arnold-etal-2022-knowledge | Knowledge extraction from aeronautical messages ({NOTAM}s) with self-supervised language models for aircraft pilots | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.22/ | Arnold, Alexandre and Ernez, Fares and Kobus, Catherine and Martin, Marion-C{\'e}cile | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 188--196 | During their pre-flight briefings, aircraft pilots must analyse a long list of NOTAMs (NOtice To AirMen) indicating potential hazards along the flight route, sometimes running to many pages for long-haul flights. NOTAM free-text fields typically have a very special phrasing, with lots of acronyms and domain-specific vocabulary, which makes it differ significantly from standard English. In this paper, we pretrain language models derived from BERT on circa 1 million unlabeled NOTAMs and reuse the learnt representations on three downstream tasks valuable for pilots: criticality prediction, named entity recognition and translation into a structured language called Airlang. This self-supervised approach, where smaller amounts of labeled data are enough for task-specific fine-tuning, is well suited in the aeronautical context since expert annotations are expensive and time-consuming. We present evaluation scores across the tasks showing a high potential for an operational usability of such models (by pilots, airlines or service providers), which is a first to the best of our knowledge. | null | null | 10.18653/v1/2022.naacl-industry.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,223
inproceedings | chen-etal-2022-intent | Intent Discovery for Enterprise Virtual Assistants: Applications of Utterance Embedding and Clustering to Intent Mining | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.23/ | Chen, Minhua and Jayakumar, Badrinath and Johnston, Michael and Mahmoodi, S. Eman and Pressel, Daniel | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 197--208 | A key challenge in the creation and refinement of virtual assistants is the ability to mine unlabeled utterance data to discover common intents. We develop an approach to this problem that combines large-scale pre-training and multi-task learning to derive a semantic embedding that can be leveraged to identify clusters of utterances that correspond to unhandled intents. An utterance encoder is first trained with a language modeling objective and subsequently adapted to predict intent labels from a large collection of cross-domain enterprise virtual assistant data using a multi-task cosine softmax loss. Experimental evaluation shows significant advantages for this multi-step pre-training approach, with large gains in downstream clustering accuracy on new applications compared to standard sentence embedding approaches. The approach has been incorporated into an interactive discovery tool that enables visualization and exploration of intents by system analysts and builders. | null | null | 10.18653/v1/2022.naacl-industry.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,224 |
inproceedings | ayoola-etal-2022-refined | {R}e{F}in{ED}: An Efficient Zero-shot-capable Approach to End-to-End Entity Linking | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.24/ | Ayoola, Tom and Tyagi, Shubhi and Fisher, Joseph and Christodoulopoulos, Christos and Pierleoni, Andrea | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 209--220 | We introduce ReFinED, an efficient end-to-end entity linking model which uses fine-grained entity types and entity descriptions to perform linking. The model performs mention detection, fine-grained entity typing, and entity disambiguation for all mentions within a document in a single forward pass, making it more than 60 times faster than competitive existing approaches. ReFinED also surpasses state-of-the-art performance on standard entity linking datasets by an average of 3.7 F1. The model is capable of generalising to large-scale knowledge bases such as Wikidata (which has 15 times more entities than Wikipedia) and of zero-shot entity linking. The combination of speed, accuracy and scale makes ReFinED an effective and cost-efficient system for extracting entities from web-scale datasets, for which the model has been successfully deployed. | null | null | 10.18653/v1/2022.naacl-industry.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,225 |
inproceedings | pressel-etal-2022-lightweight | Lightweight Transformers for Conversational {AI} | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.25/ | Pressel, Daniel and Liu, Wenshuo and Johnston, Michael and Chen, Minhua | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 221--229 | To understand how training on conversational language impacts performance of pre-trained models on downstream dialogue tasks, we build compact Transformer-based Language Models from scratch on several large corpora of conversational data. We compare the performance and characteristics of these models against BERT and other strong baselines on dialogue probing tasks. Commercial dialogue systems typically require a small footprint and fast execution time, but recent trends are in the other direction, with an ever-increasing number of parameters, resulting in difficulties in model deployment. We focus instead on training fast, lightweight models that excel at natural language understanding (NLU) and can replace existing lower-capacity conversational AI models with similar size and speed. In the process, we develop a simple but unique curriculum-based approach that moves from general-purpose to dialogue-targeted both in terms of data and objective. Our resultant models have around 1/3 the number of parameters of BERT-base and produce better representations for a wide array of intent detection datasets using linear and Mutual-Information probing techniques. Additionally, the models can be easily fine-tuned on a single consumer GPU card and deployed in near real-time production environments. | null | null | 10.18653/v1/2022.naacl-industry.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,226 |
inproceedings | shrimal-etal-2022-ner | {NER-MQMRC}: {F}ormulating Named Entity Recognition as Multi Question Machine Reading Comprehension | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.26/ | Shrimal, Anubhav and Jain, Avi and Mehta, Kartik and Yenigalla, Promod | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 230--238 | NER has traditionally been formulated as a sequence labeling task. However, there has been a recent trend in posing NER as a machine reading comprehension task (Wang et al., 2020; Mengge et al., 2020), where the entity name (or other information) is considered as a question, the text as the context and the entity value in the text as the answer snippet. These works consider MRC based on a single question (entity) at a time. We propose posing NER as a multi-question MRC task, where multiple questions (one question per entity) are considered at the same time for a single text. We propose a novel BERT-based multi-question MRC (NER-MQMRC) architecture for this formulation. The NER-MQMRC architecture considers all entities as input to BERT for learning token embeddings with self-attention and leverages BERT-based entity representation for further improving these token embeddings for the NER task. Evaluation on three NER datasets shows that our proposed architecture leads to on average 2.5 times faster training and 2.3 times faster inference compared to NER-SQMRC framework based models by considering all entities together in a single pass. Further, we show that our model performance does not degrade compared to single-question based MRC (NER-SQMRC) (Devlin et al., 2019), leading to F1 gains of +0.41{\%}, +0.32{\%} and +0.27{\%} for the AE-Pub, Ecommerce5PT and Twitter datasets respectively. We propose this architecture primarily to solve large-scale e-commerce attribute (or entity) extraction from unstructured text, on the order of 50k+ attributes to be extracted, in a scalable production environment with high performance and optimised training and inference runtimes. | null | null | 10.18653/v1/2022.naacl-industry.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,227
inproceedings | bhattacharjee-etal-2022-users | What Do Users Care About? Detecting Actionable Insights from User Feedback | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.27/ | Bhattacharjee, Kasturi and Gangadharaiah, Rashmi and McKeown, Kathleen and Roth, Dan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 239--246 | Users often leave feedback on a myriad of aspects of a product which, if leveraged successfully, can help yield useful insights that can lead to further improvements down the line. Detecting actionable insights can be challenging owing to large amounts of data as well as the absence of labels in real-world scenarios. In this work, we present an aggregation and graph-based ranking strategy for unsupervised detection of these insights from real-world, noisy, user-generated feedback. Our proposed approach significantly outperforms strong baselines on two real-world user feedback datasets and one academic dataset. | null | null | 10.18653/v1/2022.naacl-industry.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,228 |
inproceedings | kulkarni-etal-2022-ctm | {CTM} - A Model for Large-Scale Multi-View Tweet Topic Classification | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.28/ | Kulkarni, Vivek and Leung, Kenny and Haghighi, Aria | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 247--258 | Automatically associating social media posts with topics is an important prerequisite for effective search and recommendation on many social media platforms. However, topic classification of such posts is quite challenging because of (a) a large topic space (b) short text with weak topical cues, and (c) multiple topic associations per post. In contrast to most prior work which only focuses on post-classification into a small number of topics ($10-20$), we consider the task of large-scale topic classification in the context of Twitter where the topic space is 10 times larger with potentially multiple topic associations per Tweet. We address the challenges above and propose a novel neural model, that (a) supports a large topic space of 300 topics (b) takes a holistic approach to tweet content modeling {--} leveraging multi-modal content, author context, and deeper semantic cues in the Tweet. Our method offers an effective way to classify Tweets into topics at scale by yielding superior performance to other approaches (a relative lift of $\mathbf{20}\%$ in median average precision score) and has been successfully deployed in production at Twitter. | null | null | 10.18653/v1/2022.naacl-industry.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,229 |
inproceedings | khasanova-etal-2022-developing | Developing a Production System for {P}urpose of {C}all Detection in Business Phone Conversations | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.29/ | Khasanova, Elena and Hiranandani, Pooja and Gardiner, Shayna and Chen, Cheng and Corston-Oliver, Simon and Fu, Xue-Yong | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 259--267 | For agents at a contact centre receiving calls, the most important piece of information is the reason for a given call. An agent cannot provide support on a call if they do not know why a customer is calling. In this paper we describe our implementation of a commercial system to detect Purpose of Call statements in English business call transcripts in real time. We present a detailed analysis of types of Purpose of Call statements and language patterns related to them, discuss an approach to collect rich training data by bootstrapping from a set of rules to a neural model, and describe a hybrid model which consists of a transformer-based classifier and a set of rules by leveraging insights from the analysis of call transcripts. The model achieved 88.6 F1 on average in various types of business calls when tested on real life data and has low inference time. We reflect on the challenges and design decisions when developing and deploying the system. | null | null | 10.18653/v1/2022.naacl-industry.29 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,230 |
inproceedings | bitton-etal-2022-adversarial | Adversarial Text Normalization | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.30/ | Bitton, Joanna and Pavlova, Maya and Evtimov, Ivan | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 268--279 | Text-based adversarial attacks are becoming more commonplace and accessible to general internet users. As these attacks proliferate, the need to address the gap in model robustness becomes imminent. While retraining on adversarial data may increase performance, there remains an additional class of character-level attacks on which these models falter. Additionally, the process to retrain a model is time and resource intensive, creating a need for a lightweight, reusable defense. In this work, we propose the Adversarial Text Normalizer, a novel method that restores baseline performance on attacked content with low computational overhead. We evaluate the efficacy of the normalizer on two problem areas prone to adversarial attacks, i.e. Hate Speech and Natural Language Inference. We find that text normalization provides a task-agnostic defense against character-level attacks that can be implemented supplementary to adversarial retraining solutions, which are more suited for semantic alterations. | null | null | 10.18653/v1/2022.naacl-industry.30 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,231 |
inproceedings | mitra-etal-2022-constraint | Constraint-based Multi-hop Question Answering with Knowledge Graph | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.31/ | Mitra, Sayantan and Ramnani, Roshni and Sengupta, Shubhashis | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 280--288 | The objective of a Question-Answering system over Knowledge Graph (KGQA) is to respond to natural language queries presented over the KG. A complex question answering system typically addresses one of the two categories of complexity: questions with constraints and questions involving multiple hops of relations. Most of the previous works have addressed these complexities separately. Multi-hop KGQA necessitates reasoning across numerous edges of the KG in order to arrive at the correct answer. Because KGs are frequently sparse, multi-hop KGQA presents extra complications. Recent works have developed KG embedding approaches to reduce KG sparsity by performing missing link prediction. In this paper, we address multi-hop constraint-based queries using KG embeddings to generate more flexible query graphs. Empirical results indicate that the proposed methodology produces state-of-the-art outcomes on three KGQA datasets. | null | null | 10.18653/v1/2022.naacl-industry.31 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,232
inproceedings | kim-etal-2022-fast | Fast Bilingual Grapheme-To-Phoneme Conversion | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.32/ | Kim, Hwa-Yeon and Kim, Jong-Hwan and Kim, Jae-Min | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 289--296 | Autoregressive transformer (ART)-based grapheme-to-phoneme (G2P) models have been proposed for bi/multilingual text-to-speech systems. Although they have achieved great success, they suffer from high inference latency in real-time industrial applications, especially when processing long sentences. In this paper, we propose a fast and high-performance bilingual G2P model. For fast and exact decoding, we used a non-autoregressive structured transformer-based architecture and data augmentation for predicting output length. Our model achieved better performance than that of the previous autoregressive model and about 2700{\%} faster inference speed. | null | null | 10.18653/v1/2022.naacl-industry.32 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,233
inproceedings | shimorina-etal-2022-knowledge | Knowledge Extraction From Texts Based on {W}ikidata | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.33/ | Shimorina, Anastasia and Heinecke, Johannes and Herledan, Fr{\'e}d{\'e}ric | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 297--304 | This paper presents an effort within our company of developing a knowledge extraction pipeline for English, which can be further used for constructing an enterprise-specific knowledge base. We present a system consisting of entity detection and linking, coreference resolution, and relation extraction based on the Wikidata schema. We highlight existing challenges of knowledge extraction by evaluating the deployed pipeline on real-world data. We also make available a database, which can serve as a new resource for sentential relation extraction, and we underline the importance of having balanced data for training classification models. | null | null | 10.18653/v1/2022.naacl-industry.33 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,234
inproceedings | katsis-etal-2022-ait | {AIT-QA}: {Q}uestion Answering Dataset over Complex Tables in the Airline Industry | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.34/ | Katsis, Yannis and Chemmengath, Saneem and Kumar, Vishwajeet and Bharadwaj, Samarth and Canim, Mustafa and Glass, Michael and Gliozzo, Alfio and Pan, Feifei and Sen, Jaydeep and Sankaranarayanan, Karthik and Chakrabarti, Soumen | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 305--314 | Table Question Answering (Table QA) systems have been shown to be highly accurate when trained and tested on open-domain datasets built on top of Wikipedia tables. However, it is not clear whether their performance remains the same when applied to domain-specific scientific and business documents, encountered in industrial settings, which exhibit some unique characteristics: (a) they contain tables with a much more complex layout than Wikipedia tables (including hierarchical row and column headers), (b) they contain domain-specific terms, and (c) they are typically not accompanied by domain-specific labeled data that can be used to train Table QA models. To understand the performance of Table QA approaches in this setting, we introduce AIT-QA, a domain-specific Table QA test dataset. While focusing on the airline industry, AIT-QA reflects the challenges, outlined above, that domain-specific documents pose to Table QA. In this work, we describe the creation of the dataset and report zero-shot experimental results of three SOTA Table QA methods. The results clearly expose the limitations of current methods, with a best accuracy of just 51.8{\%}. We also present pragmatic table pre-processing steps to pivot and project complex tables into a layout suitable for the SOTA Table QA models. Finally, we provide data-driven insights on how different aspects of this setting (including hierarchical headers, domain-specific terminology, and paraphrasing) affect Table QA methods, in order to help the community develop improved methods for domain-specific Table QA. | null | null | 10.18653/v1/2022.naacl-industry.34 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,235
inproceedings | zhu-etal-2022-parameter | Parameter-efficient Continual Learning Framework in Industrial Real-time Text Classification System | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.35/ | Zhu, Tao and Zhao, Zhe and Liu, Weijie and Liu, Jiachi and Chen, Yiren and Mao, Weiquan and Liu, Haoyan and Ding, Kunbo and Li, Yudong and Yang, Xuefeng | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 315--323 | Catastrophic forgetting is a challenge for model deployment in industrial real-time systems, which requires the model to quickly master a new task without forgetting the old one. Continual learning aims to solve this problem; however, it usually updates all the model parameters, resulting in extensive training times and the inability to deploy quickly. To address this challenge, we propose a parameter-efficient continual learning framework, in which efficient parameters are selected through an offline parameter selection strategy and then trained using an online regularization method. In our framework, only a few parameters need to be updated, which not only alleviates catastrophic forgetting, but also allows the model to be saved with the changed parameters instead of all parameters. Extensive experiments are conducted to examine the effectiveness of our proposal. We believe this paper will provide useful insights and experiences on developing deep learning-based online real-time systems. | null | null | 10.18653/v1/2022.naacl-industry.35 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,236 |
inproceedings | ponnusamy-etal-2022-self | Self-Aware Feedback-Based Self-Learning in Large-Scale Conversational {AI} | Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan | jul | 2022 | Hybrid: Seattle, Washington + Online | Association for Computational Linguistics | https://aclanthology.org/2022.naacl-industry.36/ | Ponnusamy, Pragaash and Mathialagan, Clint Solomon and Aguilar, Gustavo and Ma, Chengyuan and Guo, Chenlei | Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track | 324--333 | Self-learning paradigms in large-scale conversational AI agents tend to leverage user feedback in bridging between what they say and what they mean. However, such learning, particularly in Markov-based query rewriting systems, falls far short of addressing the impact of these models on future training, where successive feedback is inevitably contingent on the rewrite itself, especially in a continually updating environment. In this paper, we explore the consequences of this inherent lack of self-awareness towards impairing the model performance, ultimately resulting in both Type I and II errors over time. To that end, we propose augmenting the Markov Graph construction with a superposition-based adjacency matrix. Here, our method leverages an induced stochasticity to reactively learn a locally-adaptive decision boundary based on the performance of the individual rewrites in a bi-variate beta setting. We also surface a data augmentation strategy that leverages template-based generation in abridging complex conversation hierarchies of dialogs so as to simplify the learning process. All in all, we demonstrate that our self-aware model improves the overall PR-AUC by 27.45{\%}, achieves a relative defect reduction of up to 31.22{\%}, and is able to adapt quicker to changes in global preferences across a large number of customers. | null | null | 10.18653/v1/2022.naacl-industry.36 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 24,237