Dataset schema (one row per bibliography entry; column, type, and summary statistics):

    entry_type           string, 4 classes
    citation_key         string, length 10 to 110
    title                string, length 6 to 276
    editor               string, 723 classes
    month                string, 69 classes
    year                 string date, 1963-01-01 to 2022-01-01
    address              string, 202 classes
    publisher            string, 41 classes
    url                  string, length 34 to 62
    author               string, length 6 to 2.07k
    booktitle            string, 861 classes
    pages                string, length 1 to 12
    abstract             string, length 302 to 2.4k
    journal              string, 5 classes
    volume               string, 24 classes
    doi                  string, length 20 to 39
    n                    string, 3 classes
    wer                  string, 1 class
    uas                  null
    language             string, 3 classes
    isbn                 string, 34 classes
    recall               null
    number               string, 8 classes
    a                    null
    b                    null
    c                    null
    k                    null
    f1                   string, 4 classes
    r                    string, 2 classes
    mci                  string, 1 class
    p                    string, 2 classes
    sd                   string, 1 class
    female               string, 0 classes
    m                    string, 0 classes
    food                 string, 1 class
    f                    string, 1 class
    note                 string, 20 classes
    __index_level_0__    int64, 22k to 106k
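
The columns map one-to-one onto ACL Anthology BibTeX fields, plus a handful of sparsely populated metric-like columns (wer, uas, f1, recall, and so on) and a pandas index carried along as __index_level_0__. Below is a minimal sketch of loading a dataset with this schema via the Hugging Face `datasets` library and dropping the all-null columns before analysis; the repository id is a placeholder, since this dump does not name the actual dataset.

```python
# Minimal sketch: load the dataset and remove columns that are entirely null.
from collections import Counter

from datasets import load_dataset

# Hypothetical repository id -- the real dataset id is not given in this dump.
ds = load_dataset("someuser/acl-anthology-bib", split="train")

# Columns such as `uas`, `recall`, `a`, `b`, `c`, and `k` are null in every
# row shown here; detect and drop any column with no non-null values.
null_cols = [col for col in ds.column_names
             if all(value is None for value in ds[col])]
ds = ds.remove_columns(null_cols)

# Example query: the five most frequent venues (`booktitle` has 861 classes).
print(Counter(ds["booktitle"]).most_common(5))
```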

% __index_level_0__: 27,992
@inproceedings{andrews-etal-2022-stopes,
    title = "stopes - Modular Machine Translation Pipelines",
    author = "Andrews, Pierre and Wenzek, Guillaume and Heffernan, Kevin and {\c{C}}elebi, Onur and Sun, Anna and Kamran, Ammar and Guo, Yingzhe and Mourachko, Alexandre and Schwenk, Holger and Fan, Angela",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.26/",
    doi = "10.18653/v1/2022.emnlp-demos.26",
    pages = "258--265",
    abstract = "Neural machine translation, like other natural language deep learning applications, is hungry for data. As research evolves, the data pipelines supporting that research evolve too, oftentimes re-implementing the same core components. Despite the potential of modular codebases, researchers have but little time to put code structure and reusability first. Unfortunately, this makes it very hard to publish clean, reproducible code to benefit a wider audience. In this paper, we motivate and describe stopes, a framework that addresses these issues while empowering scalability and versatility for research use cases. This library was a key enabler of the No Language Left Behind project, establishing new state-of-the-art performance for a multilingual machine translation model covering 200 languages. stopes and the pipelines described are released under the MIT license at \url{https://github.com/facebookresearch/stopes}.",
}
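
Each row of the dataset serializes to an entry like the one above: null columns are dropped and the row index becomes a leading comment. A minimal sketch of that conversion, assuming rows arrive as plain dicts keyed by the schema's column names (not part of any published tooling; it is simplified in that every field is brace-wrapped, whereas Anthology BibTeX leaves month macros such as `dec` bare):

```python
# Minimal sketch: serialize one dataset row into an Anthology-style BibTeX entry.
def row_to_bibtex(row: dict) -> str:
    special = {"entry_type", "citation_key", "__index_level_0__"}
    lines = [
        f"% __index_level_0__: {row['__index_level_0__']:,}",
        f"@{row['entry_type']}{{{row['citation_key']},",
    ]
    # Null columns (None values) are simply dropped from the entry.
    lines += [
        f"    {key} = {{{value}}},"
        for key, value in row.items()
        if key not in special and value is not None
    ]
    lines.append("}")
    return "\n".join(lines)
```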

% __index_level_0__: 27,993
@inproceedings{gehrmann-etal-2022-gemv2,
    title = "{GEM}v2: Multilingual {NLG} Benchmarking in a Single Line of Code",
    author = "Gehrmann, Sebastian and Bhattacharjee, Abhik and Mahendiran, Abinaya and Wang, Alex and Papangelis, Alexandros and Madaan, Aman and Mcmillan-major, Angelina and Shvets, Anna and Upadhyay, Ashish and Bohnet, Bernd and Yao, Bingsheng and Wilie, Bryan and Bhagavatula, Chandra and You, Chaobin and Thomson, Craig and Garbacea, Cristina and Wang, Dakuo and Deutsch, Daniel and Xiong, Deyi and Jin, Di and Gkatzia, Dimitra and Radev, Dragomir and Clark, Elizabeth and Durmus, Esin and Ladhak, Faisal and Ginter, Filip and Winata, Genta Indra and Strobelt, Hendrik and Hayashi, Hiroaki and Novikova, Jekaterina and Kanerva, Jenna and Chim, Jenny and Zhou, Jiawei and Clive, Jordan and Maynez, Joshua and Sedoc, Jo{\~a}o and Juraska, Juraj and Dhole, Kaustubh and Chandu, Khyathi Raghavi and Beltrachini, Laura Perez and Ribeiro, Leonardo F. R. and Tunstall, Lewis and Zhang, Li and Pushkarna, Mahim and Creutz, Mathias and White, Michael and Kale, Mihir Sanjay and Eddine, Moussa Kamal and Daheim, Nico and Subramani, Nishant and Dusek, Ondrej and Liang, Paul Pu and Ammanamanchi, Pawan Sasanka and Zhu, Qi and Puduppully, Ratish and Kriz, Reno and Shahriyar, Rifat and Cardenas, Ronald and Mahamood, Saad and Osei, Salomey and Cahyawijaya, Samuel and {\v{S}}tajner, Sanja and Montella, Sebastien and Jolly, Shailza and Mille, Simon and Hasan, Tahmid and Shen, Tianhao and Adewumi, Tosin and Raunak, Vikas and Raheja, Vipul and Nikolaev, Vitaly and Tsai, Vivian and Jernite, Yacine and Xu, Ying and Sang, Yisi and Liu, Yixin and Hou, Yufang",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.27/",
    doi = "10.18653/v1/2022.emnlp-demos.27",
    pages = "266--281",
    abstract = "Evaluations in machine learning rarely use the latest metrics, datasets, or human evaluation in favor of remaining compatible with prior work. The compatibility, often facilitated through leaderboards, thus leads to outdated but standardized evaluation practices. We posit that the standardization is taking place in the wrong spot. Evaluation infrastructure should enable researchers to use the latest methods, and what should be standardized instead is how to incorporate these new evaluation advances. We introduce GEMv2, the new version of the Generation, Evaluation, and Metrics Benchmark, which uses a modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages and ongoing online evaluation for all datasets, and our interactive tools make it easier to add new datasets to the living benchmark.",
}

% __index_level_0__: 27,994
@inproceedings{chowdhury-etal-2022-kgi,
    title = "{KGI}: An Integrated Framework for Knowledge Intensive Language Tasks",
    author = "Chowdhury, Md Faisal Mahbub and Glass, Michael and Rossiello, Gaetano and Gliozzo, Alfio and Mihindukulasooriya, Nandana",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.28/",
    doi = "10.18653/v1/2022.emnlp-demos.28",
    pages = "282--288",
    abstract = "In this paper, we present a system to showcase the capabilities of the latest state-of-the-art retrieval augmented generation models trained on knowledge-intensive language tasks, such as slot filling, open domain question answering, dialogue, and fact-checking. Moreover, given a user query, we show how the outputs from these different models can be combined to cross-examine each other. In particular, we show how accuracy in dialogue can be improved using the question answering model. We are also releasing all models used in the demo as a contribution of this paper. A short video demonstrating the system is available at \url{https://ibm.box.com/v/emnlp2022-demos}.",
}

% __index_level_0__: 27,995
@inproceedings{bianchi-etal-2022-twitter,
    title = "{T}witter-Demographer: A Flow-based Tool to Enrich {T}witter Data",
    author = "Bianchi, Federico and Cutrona, Vincenzo and Hovy, Dirk",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.29/",
    doi = "10.18653/v1/2022.emnlp-demos.29",
    pages = "289--297",
    abstract = "Twitter data have become essential to Natural Language Processing (NLP) and social science research, driving various scientific discoveries in recent years. However, the textual data alone are often not enough to conduct studies: social scientists in particular need more variables to perform their analysis and control for various factors. How we augment this information, such as users' location, age, or tweet sentiment, has ramifications for anonymity and reproducibility, and requires dedicated effort. This paper describes Twitter-Demographer, a simple, flow-based tool to enrich Twitter data with additional information about tweets and users. Twitter-Demographer is aimed at NLP practitioners, psycho-linguists, and (computational) social scientists who want to enrich their datasets with aggregated information, facilitating reproducibility, and providing algorithmic privacy-by-design measures for pseudo-anonymity. We discuss our design choices, inspired by the flow-based programming paradigm, to use black-box components that can easily be chained together and extended. We also analyze the ethical issues related to the use of this tool, and the built-in measures to facilitate pseudo-anonymity.",
}

% __index_level_0__: 27,996
@inproceedings{gauthier-melancon-etal-2022-azimuth,
    title = "Azimuth: Systematic Error Analysis for Text Classification",
    author = "Gauthier-melancon, Gabrielle and Marquez Ayala, Orlando and Brin, Lindsay and Tyler, Chris and Branchaud-charron, Frederic and Marinier, Joseph and Grande, Karine and Le, Di",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.30/",
    doi = "10.18653/v1/2022.emnlp-demos.30",
    pages = "298--310",
    abstract = "We present Azimuth, an open-source and easy-to-use tool to perform error analysis for text classification. Compared to other stages of the ML development cycle, such as model training and hyper-parameter tuning, the process and tooling for the error analysis stage are less mature. However, this stage is critical for the development of reliable and trustworthy AI systems. To make error analysis more systematic, we propose an approach comprising dataset analysis and model quality assessment, which Azimuth facilitates. We aim to help AI practitioners discover and address areas where the model does not generalize by leveraging and integrating a range of ML techniques, such as saliency maps, similarity, uncertainty, and behavioral analyses, all in one tool. Our code and documentation are available at github.com/servicenow/azimuth.",
}

% __index_level_0__: 27,997
@inproceedings{bai-etal-2022-synkb,
    title = "{S}yn{KB}: Semantic Search for Synthetic Procedures",
    author = "Bai, Fan and Ritter, Alan and Madrid, Peter and Freitag, Dayne and Niekrasz, John",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.31/",
    doi = "10.18653/v1/2022.emnlp-demos.31",
    pages = "311--318",
    abstract = "In this paper we present SynKB, an open-source, automatically extracted knowledge base of chemical synthesis protocols. Similar to proprietary chemistry databases such as Reaxys, SynKB allows chemists to retrieve structured knowledge about synthetic procedures. By taking advantage of recent advances in natural language processing for procedural texts, SynKB supports more flexible queries about reaction conditions, and thus has the potential to help chemists search the literature for conditions used in relevant reactions as they design new synthetic routes. Using customized Transformer models to automatically extract information from 6 million synthesis procedures described in U.S. and EU patents, we show that for many queries, SynKB has higher recall than Reaxys, while maintaining high precision. We plan to make SynKB available as an open-source tool; in contrast, proprietary chemistry databases require costly subscriptions.",
}

% __index_level_0__: 27,998
@inproceedings{obeid-etal-2022-camelira,
    title = "Camelira: An {A}rabic Multi-Dialect Morphological Disambiguator",
    author = "Obeid, Ossama and Inoue, Go and Habash, Nizar",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.32/",
    doi = "10.18653/v1/2022.emnlp-demos.32",
    pages = "319--326",
    abstract = "We present Camelira, a web-based Arabic multi-dialect morphological disambiguation tool that covers four major variants of Arabic: Modern Standard Arabic, Egyptian, Gulf, and Levantine. Camelira offers a user-friendly web interface that allows researchers and language learners to explore various linguistic information, such as part-of-speech, morphological features, and lemmas. Our system also provides an option to automatically choose an appropriate dialect-specific disambiguator based on the prediction of a dialect identification component. Camelira is publicly accessible at \url{http://camelira.camel-lab.com}.",
}

% __index_level_0__: 27,999
@inproceedings{pei-etal-2022-potato,
    title = "{POTATO}: The Portable Text Annotation Tool",
    author = "Pei, Jiaxin and Ananthasubramaniam, Aparna and Wang, Xingyao and Zhou, Naitian and Dedeloudis, Apostolos and Sargent, Jackson and Jurgens, David",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.33/",
    doi = "10.18653/v1/2022.emnlp-demos.33",
    pages = "327--337",
    abstract = "We present POTATO, the Portable text annotation tool, a free, fully open-sourced annotation system that 1) supports labeling many types of text and multimodal data; 2) offers easy-to-configure features to maximize the productivity of both deployers and annotators (convenient templates for common ML/NLP tasks, active learning, keypress shortcuts, keyword highlights, tooltips); and 3) supports a high degree of customization (editable UI, inserting pre-screening questions, attention and qualification tests). Experiments over two annotation tasks suggest that POTATO improves labeling speed through its specially-designed productivity features, especially for long documents and complex tasks. POTATO is available at \url{https://github.com/davidjurgens/potato} and will continue to be updated.",
}

% __index_level_0__: 28,000
@inproceedings{widjaja-etal-2022-kgxboard,
    title = "{KG}x{B}oard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models",
    author = "Widjaja, Haris and Gashteovski, Kiril and Ben Rim, Wiem and Liu, Pengfei and Malon, Christopher and Ruffinelli, Daniel and Lawrence, Carolin and Neubig, Graham",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.34/",
    doi = "10.18653/v1/2022.emnlp-demos.34",
    pages = "338--350",
    abstract = "Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples. To augment KGs with new knowledge, researchers proposed models for KG Completion (KGC) tasks such as link prediction, i.e., answering (h, p, ?) or (?, p, t) queries. Such models are usually evaluated with averaged metrics on a held-out test set. While useful for tracking progress, averaged single-score metrics cannot reveal what exactly a model has learned {---} or failed to learn. To address this issue, we propose KGxBoard: an interactive framework for performing fine-grained evaluation on meaningful subsets of the data, each of which tests individual and interpretable capabilities of a KGC model. In our experiments, we highlight the findings that we discovered with the use of KGxBoard, which would have been impossible to detect with standard averaged single-score metrics.",
}

% __index_level_0__: 28,001
@inproceedings{goyal-etal-2022-falte,
    title = "{FALTE}: A Toolkit for Fine-grained Annotation for Long Text Evaluation",
    author = "Goyal, Tanya and Li, Junyi Jessy and Durrett, Greg",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.35/",
    doi = "10.18653/v1/2022.emnlp-demos.35",
    pages = "351--358",
    abstract = "A growing swath of NLP research is tackling problems related to generating long text, including tasks such as open-ended story generation, summarization, dialogue, and more. However, we currently lack appropriate tools to evaluate these long outputs of generation models: classic automatic metrics such as ROUGE have been shown to perform poorly, and newer learned metrics do not necessarily work well for all tasks and domains of text. Human rating and error analysis remain a crucial component for any evaluation of long text generation. In this paper, we introduce FALTE, a web-based annotation toolkit designed to address this shortcoming. Our tool allows researchers to collect fine-grained judgments of text quality from crowdworkers using an error taxonomy specific to the downstream task. Using the task interface, annotators can select and assign error labels to text span selections in an incremental paragraph-level annotation workflow. The latter functionality is designed to simplify the document-level task into smaller units and reduce cognitive load on the annotators. Our tool has previously been used to run a large-scale annotation study that evaluates the coherence of long generated summaries, demonstrating its utility.",
}

% __index_level_0__: 28,002
@inproceedings{rajani-etal-2022-seal,
    title = "{SEAL}: Interactive Tool for Systematic Error Analysis and Labeling",
    author = "Rajani, Nazneen and Liang, Weixin and Chen, Lingjiao and Mitchell, Margaret and Zou, James",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.36/",
    doi = "10.18653/v1/2022.emnlp-demos.36",
    pages = "359--370",
    abstract = "With the advent of Transformers, large language models (LLMs) have saturated well-known NLP benchmarks and leaderboards with high aggregate performance. However, these models often fail systematically on tail data or rare groups not obvious in aggregate evaluation. Identifying such problematic data groups is even more challenging when there are no explicit labels (e.g., ethnicity, gender, etc.), and further compounded for NLP datasets due to the lack of visual features to characterize failure modes (e.g., Asian males, animals indoors, waterbirds on land, etc.). This paper introduces an interactive Systematic Error Analysis and Labeling (SEAL) tool that uses a two-step approach to first identify high-error slices of data and then, in the second step, introduce methods to give human-understandable semantics to those underperforming slices. We explore a variety of methods for coming up with coherent semantics for the error groups using language models for semantic labeling and a text-to-image model for generating visual features. SEAL is available at \url{https://huggingface.co/spaces/nazneen/seal}.",
}

% __index_level_0__: 28,003
@inproceedings{pacheco-etal-2022-hands,
    title = "Hands-On Interactive Neuro-Symbolic {NLP} with {DR}ai{L}",
    author = "Pacheco, Maria Leonor and Roy, Shamik and Goldwasser, Dan",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.37/",
    doi = "10.18653/v1/2022.emnlp-demos.37",
    pages = "371--378",
    abstract = "We recently introduced DRaiL, a declarative neural-symbolic modeling framework designed to support a wide variety of NLP scenarios. In this paper, we enhance DRaiL with an easy-to-use Python interface, equipped with methods to define, modify and augment DRaiL models interactively, as well as with methods to debug and visualize the predictions made. We demonstrate this interface with a challenging NLP task: predicting sentence and entity level moral sentiment in political tweets.",
}

% __index_level_0__: 28,004
@inproceedings{wieting-etal-2022-paraphrastic,
    title = "Paraphrastic Representations at Scale",
    author = "Wieting, John and Gimpel, Kevin and Neubig, Graham and Berg-kirkpatrick, Taylor",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.38/",
    doi = "10.18653/v1/2022.emnlp-demos.38",
    pages = "379--388",
    abstract = "We present a system that allows users to train their own state-of-the-art paraphrastic sentence representations in a variety of languages. We release trained models for English, Arabic, German, Spanish, French, Russian, Turkish, and Chinese. We train these models on large amounts of data, achieving significantly improved performance over our original papers on a suite of monolingual semantic similarity, cross-lingual semantic similarity, and bitext mining tasks. Moreover, the resulting models surpass all prior work on efficient unsupervised semantic textual similarity, even significantly outperforming supervised BERT-based models like Sentence-BERT (Reimers and Gurevych, 2019). Most importantly, our models are orders of magnitude faster than other strong similarity models and can be used on CPU with little difference in inference speed (even improved speed over GPU when using more CPU cores), making these models an attractive choice for users without access to GPUs or for use on embedded devices. Finally, we add significantly increased functionality to the code bases for training paraphrastic sentence models, easing their use for both inference and for training them for any desired language with parallel data. We also include code to automatically download and preprocess training data.",
}

% __index_level_0__: 28,005
@inproceedings{razeghi-etal-2022-snoopy,
    title = "Snoopy: An Online Interface for Exploring the Effect of Pretraining Term Frequencies on Few-Shot {LM} Performance",
    author = "Razeghi, Yasaman and Mekala, Raja Sekhar Reddy and Logan Iv, Robert L and Gardner, Matt and Singh, Sameer",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.39/",
    doi = "10.18653/v1/2022.emnlp-demos.39",
    pages = "389--395",
    abstract = "Current evaluation schemes for large language models often fail to consider the impact of the overlap between pretraining corpus and test data on model performance statistics. Snoopy is an online interface that allows researchers to study this impact in few-shot learning settings. Our demo provides term frequency statistics for the Pile, which is an 800 GB corpus, accompanied by the precomputed performance of EleutherAI/GPT models on more than 20 NLP benchmarks, including numerical, commonsense reasoning, natural language understanding, and question-answering tasks. Snoopy allows a user to interactively align specific terms in test instances with their frequency in the Pile, enabling exploratory analysis of how term frequency is related to model accuracy, a relationship that is hard to discover through automated means. A user can look at correlations over various model sizes and numbers of in-context examples and visualize the result across multiple (potentially aggregated) datasets. Using Snoopy, we show that a researcher can quickly replicate prior analyses for numerical tasks, while simultaneously allowing for much more expansive exploration that was previously challenging. Snoopy is available at \url{https://nlp.ics.uci.edu/snoopy}.",
}

% __index_level_0__: 28,006
@inproceedings{zhang-etal-2022-bmcook,
    title = "{BMC}ook: A Task-agnostic Compression Toolkit for Big Models",
    author = "Zhang, Zhengyan and Gong, Baitao and Chen, Yingfa and Han, Xu and Zeng, Guoyang and Zhao, Weilin and Chen, Yanxu and Liu, Zhiyuan and Sun, Maosong",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.40/",
    doi = "10.18653/v1/2022.emnlp-demos.40",
    pages = "396--405",
    abstract = "Recently, pre-trained language models (PLMs) have achieved great success on various NLP tasks and have shown a trend of exponential growth in model size. To alleviate the unaffordable computational costs brought by the size growth, model compression has been widely explored. Existing efforts have achieved promising results in compressing medium-sized models for specific tasks, while task-agnostic compression for big models with billions of parameters is rarely studied. Task-agnostic compression can provide an efficient and versatile big model for both prompting and delta tuning, leading to a more general impact than task-specific compression. Hence, we introduce a task-agnostic compression toolkit BMCook for big models. In BMCook, we implement four representative compression methods, including quantization, pruning, distillation, and MoEfication. Developers can easily combine these methods towards better efficiency. To evaluate BMCook, we apply it to compress T5-3B (a PLM with 3 billion parameters). We achieve nearly 12x efficiency improvement while maintaining over 97{\%} of the original T5-3B performance on three typical NLP benchmarks. Moreover, the final compressed model also significantly outperforms T5-base (a PLM with 220 million parameters), which has a similar computational cost. BMCook is publicly available at \url{https://github.com/OpenBMB/BMCook}.",
}

% __index_level_0__: 28,007
@inproceedings{tsvigun-etal-2022-altoolbox,
    title = "{ALT}oolbox: A Set of Tools for Active Learning Annotation of Natural Language Texts",
    author = "Tsvigun, Akim and Sanochkin, Leonid and Larionov, Daniil and Kuzmin, Gleb and Vazhentsev, Artem and Lazichny, Ivan and Khromov, Nikita and Kireev, Danil and Rubashevskii, Aleksandr and Shahmatova, Olga and Dylov, Dmitry V. and Galitskiy, Igor and Shelmanov, Artem",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.41/",
    doi = "10.18653/v1/2022.emnlp-demos.41",
    pages = "406--434",
    abstract = "We present ALToolbox {--} an open-source framework for active learning (AL) annotation in natural language processing. Currently, the framework supports text classification, sequence tagging, and seq2seq tasks. Besides state-of-the-art query strategies, ALToolbox provides a set of tools that help to reduce computational overhead and duration of AL iterations and increase annotated data reusability. The framework aims to support data scientists and researchers by providing an easy-to-deploy GUI annotation tool directly in the Jupyter IDE and an extensible benchmark for novel AL methods. A small demonstration of ALToolbox's capabilities is available online. The code of the framework is published under the MIT license.",
}

% __index_level_0__: 28,008
@inproceedings{tang-etal-2022-textbox,
    title = "{T}ext{B}ox 2.0: A Text Generation Library with Pre-trained Language Models",
    author = "Tang, Tianyi and Li, Junyi and Chen, Zhipeng and Hu, Yiwen and Yu, Zhuohao and Dai, Wenxun and Zhao, Wayne Xin and Nie, Jian-yun and Wen, Ji-rong",
    editor = "Che, Wanxiang and Shutova, Ekaterina",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-demos.42/",
    doi = "10.18653/v1/2022.emnlp-demos.42",
    pages = "435--444",
    abstract = "To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers 13 common text generation tasks and their corresponding 83 datasets and further incorporates 45 PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. We also implement 4 efficient training strategies and provide 4 generation objectives for pre-training new PLMs from scratch. To be unified, we design the interfaces to support the entire research pipeline (from data loading to training and evaluation), ensuring that each step can be fulfilled in a unified way. Despite the rich functionality, it is easy to use our library, either through the friendly Python API or command line. To validate the effectiveness of our library, we conduct extensive experiments and exemplify four types of research scenarios. The project is released at the link: \url{https://github.com/RUCAIBox/TextBox#2.0}.",
}

% __index_level_0__: 28,010
@inproceedings{fusco-etal-2022-unsupervised,
    title = "Unsupervised Term Extraction for Highly Technical Domains",
    author = "Fusco, Francesco and Staar, Peter and Antognini, Diego",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.1/",
    doi = "10.18653/v1/2022.emnlp-industry.1",
    pages = "1--8",
    abstract = "Term extraction is an information extraction task at the root of knowledge discovery platforms. Developing term extractors that are able to generalize across very diverse and potentially highly technical domains is challenging, as annotations for domains requiring in-depth expertise are scarce and expensive to obtain. In this paper, we describe the term extraction subsystem of a commercial knowledge discovery platform that targets highly technical fields such as pharma, medical, and material science. To be able to generalize across domains, we introduce a fully unsupervised annotator (UA). It extracts terms by combining novel morphological signals from sub-word tokenization with term-to-topic and intra-term similarity metrics, computed using general-domain pre-trained sentence-encoders. The annotator is used to implement a weakly-supervised setup, where transformer models are fine-tuned (or pre-trained) over the training data generated by running the UA over large unlabeled corpora. Our experiments demonstrate that our setup can improve the predictive performance while decreasing the inference latency on both CPUs and GPUs. Our annotators provide a very competitive baseline for all the cases where annotations are not available.",
}

% __index_level_0__: 28,011
@inproceedings{sun-etal-2022-dynamar,
    title = "{D}yna{M}a{R}: Dynamic Prompt with Mask Token Representation",
    author = "Sun, Xiaodi and Rajagopalan, Sunny and Nigam, Priyanka and Lu, Weiyi and Xu, Yi and Keivanloo, Iman and Zeng, Belinda and Chilimbi, Trishul",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.2/",
    doi = "10.18653/v1/2022.emnlp-industry.2",
    pages = "9--17",
    abstract = "Recent research has shown that large language models pretrained using unsupervised approaches can achieve significant performance improvement on many downstream tasks. Typically, when adapting these language models to downstream tasks, like a classification or regression task, we employ a fine-tuning paradigm in which the sentence representation from the language model is input to a task-specific head; the model is then fine-tuned end-to-end. However, with the emergence of models like GPT-3, prompt-based fine-tuning has been proven to be a successful approach for few-shot tasks. Inspired by this work, we study discrete prompt technologies in practice. There are two issues that arise with the standard prompt approach. First, it can overfit on the prompt template. Second, it requires manual effort to formulate the downstream task as a language model problem. In this paper, we propose an improvement to prompt-based fine-tuning that addresses these two issues. We refer to our approach as DynaMaR {--} Dynamic Prompt with Mask Token Representation. Results show that DynaMaR can achieve an average improvement of 10{\%} in few-shot settings and an improvement of 3.7{\%} in data-rich settings over the standard fine-tuning approach on four e-commerce applications.",
}

% __index_level_0__: 28,012
@inproceedings{soltan-etal-2022-hybrid,
    title = "A Hybrid Approach to Cross-lingual Product Review Summarization",
    author = "Soltan, Saleh and Soto, Victor and Tran, Ke and Hamza, Wael",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.3/",
    doi = "10.18653/v1/2022.emnlp-industry.3",
    pages = "18--28",
    abstract = "We present a hybrid approach for product review summarization which consists of: (i) an unsupervised extractive step to extract the most important sentences out of all the reviews, and (ii) a supervised abstractive step to summarize the extracted sentences into a coherent short summary. This approach allows us to develop an efficient cross-lingual abstractive summarizer that can generate summaries in any language, given the extracted sentences out of thousands of reviews in a source language. In order to train and test the abstractive model, we create the Cross-lingual Amazon Reviews Summarization (CARS) dataset which provides English summaries for training, and English, French, Italian, Arabic, and Hindi summaries for testing based on selected English reviews. We show that the summaries generated by our model are as good as human written summaries in coherence, informativeness, non-redundancy, and fluency.",
}

% __index_level_0__: 28,013
@inproceedings{ramamonjison-etal-2022-augmenting,
    title = "Augmenting Operations Research with Auto-Formulation of Optimization Models From Problem Descriptions",
    author = "Ramamonjison, Rindra and Li, Haley and Yu, Timothy and He, Shiqi and Rengan, Vishnu and Banitalebi-dehkordi, Amin and Zhou, Zirui and Zhang, Yong",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.4/",
    doi = "10.18653/v1/2022.emnlp-industry.4",
    pages = "29--62",
    abstract = "We describe an augmented intelligence system for simplifying and enhancing the modeling experience for operations research. Using this system, the user receives a suggested formulation of an optimization problem based on its description. To facilitate this process, we build an intuitive user interface system that enables the users to validate and edit the suggestions. We investigate controlled generation techniques to obtain an automatic suggestion of formulation. Then, we evaluate their effectiveness with a newly created dataset of linear programming problems drawn from various application domains.",
}

% __index_level_0__: 28,014
@inproceedings{liu-etal-2022-knowledge,
    title = "Knowledge Distillation based Contextual Relevance Matching for {E}-commerce Product Search",
    author = "Liu, Ziyang and Wang, Chaokun and Feng, Hao and Wu, Lingfei and Yang, Liqun",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.5/",
    doi = "10.18653/v1/2022.emnlp-industry.5",
    pages = "63--76",
    abstract = "Online relevance matching is an essential task of e-commerce product search to boost the utility of search engines and ensure a smooth user experience. Previous work adopts either classical relevance matching models or Transformer-style models to address it. However, they ignore the inherent bipartite graph structures that are ubiquitous in e-commerce product search logs and are too inefficient to deploy online. In this paper, we design an efficient knowledge distillation framework for e-commerce relevance matching to integrate the respective advantages of Transformer-style models and classical relevance matching models. For the core student model of the framework in particular, we propose a novel method using k-order relevance modeling. The experimental results on large-scale real-world data (the size is 6 174 million) show that the proposed method significantly improves the prediction accuracy in terms of human relevance judgment. We deploy our method to JD.com online search platform. The A/B testing results show that our method significantly improves most business metrics under price sort mode and default sort mode.",
}

% __index_level_0__: 28,015
@inproceedings{purpura-etal-2022-accelerating,
    title = "Accelerating the Discovery of Semantic Associations from Medical Literature: Mining Relations Between Diseases and Symptoms",
    author = "Purpura, Alberto and Bonin, Francesca and Bettencourt-silva, Joao",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.6/",
    doi = "10.18653/v1/2022.emnlp-industry.6",
    pages = "77--89",
    abstract = "Medical literature is a vast and constantly expanding source of information about diseases, their diagnoses and treatments. One of the ways to extract insights from this type of data is through mining association rules between such entities. However, existing solutions do not take into account the semantics of sentences from which entity co-occurrences are extracted. We propose a scalable solution for the automated discovery of semantic associations between different entities such as diseases and their symptoms. Our approach employs the UMLS semantic network and a binary relation classification model trained with distant supervision to validate and help rank the most likely entity association pairs extracted with frequency-based association rule mining algorithms. We evaluate the proposed system on the task of extracting disease-symptom associations from a collection of over 14M PubMed abstracts and validate our results against a publicly available known list of disease-symptom pairs.",
}

% __index_level_0__: 28,016
@inproceedings{uma-naresh-etal-2022-pentatron,
    title = "{PENTATRON}: {PE}rsonalized co{NT}ext-Aware Transformer for Retrieval-based c{O}nversational u{N}derstanding",
    author = "Uma Naresh, Niranjan and Jiang, Ziyan and Ankit, Ankit and Lee, Sungjin and Hao, Jie and Fan, Xing and Guo, Chenlei",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.7/",
    doi = "10.18653/v1/2022.emnlp-industry.7",
    pages = "90--98",
    abstract = "Conversational understanding is an integral part of modern intelligent devices. In a large fraction of the global traffic from customers using smart digital assistants, frictions in dialogues may be attributed to incorrect understanding of the entities in a customer's query due to factors including ambiguous mentions, mispronunciation, background noise and faulty on-device signal processing. Such errors are compounded by two common deficiencies from intelligent devices namely, (1) the device not being tailored to individual customers, and (2) the device responses being unaware of the context in the conversation session. Viewing this problem via the lens of retrieval-based search engines, we build and evaluate a scalable entity correction system, PENTATRON. The system leverages a parametric transformer-based language model to learn patterns from in-session customer-device interactions coupled with a non-parametric personalized entity index to compute the correct query, which aids downstream components in reasoning about the best response. In addition to establishing baselines and demonstrating the value of personalized and context-aware systems, we use multitasking to learn the domain of the correct entity. We also investigate the utility of language model prompts. Through extensive experiments, we show a significant upward movement of the key metric (Exact Match) by up to 500.97{\%} (relative to the baseline).",
}

% __index_level_0__: 28,017
@inproceedings{zhang-misra-2022-machine,
    title = "Machine translation impact in {E}-commerce multilingual search",
    author = "Zhang, Bryan and Misra, Amita",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.8/",
    doi = "10.18653/v1/2022.emnlp-industry.8",
    pages = "99--109",
    abstract = "Previous work suggests that performance of cross-lingual information retrieval correlates highly with the quality of Machine Translation. However, there may be a threshold beyond which improving query translation quality yields little or no benefit to further improve the retrieval performance. This threshold may depend upon multiple factors including the source and target languages, the existing MT system quality and the search pipeline. In order to identify the benefit of improving an MT system for a given search pipeline, we investigate the sensitivity of retrieval quality to the presence of different levels of MT quality using experimental datasets collected from actual traffic. We systematically improve the quality of our MT systems on language pairs as measured by MT evaluation metrics including BLEU and chrF to determine their impact on search precision metrics and extract signals that help to guide the improvement strategies. Using this information we develop techniques to compare query translations for multiple language pairs and identify the most promising language pairs to invest in and improve.",
}

% __index_level_0__: 28,018
@inproceedings{ding-etal-2022-ask,
    title = "Ask-and-Verify: Span Candidate Generation and Verification for Attribute Value Extraction",
    author = "Ding, Yifan and Liang, Yan and Zalmout, Nasser and Li, Xian and Grant, Christan and Weninger, Tim",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.9/",
    doi = "10.18653/v1/2022.emnlp-industry.9",
    pages = "110--110",
    abstract = "The product attribute value extraction (AVE) task aims to capture key factual information from product profiles, and is useful for several downstream applications in e-Commerce platforms. Previous contributions usually formulate this task using sequence labeling or reading comprehension architectures. However, sequence labeling models tend to be conservative in their predictions resulting in a high false negative rate. Existing reading comprehension formulations, on the other hand, can over-generate attribute values which hinders precision. In the present work we address these limitations with a new end-to-end pipeline framework called Ask-and-Verify. Given a product and an attribute query, the Ask step detects the top-K span candidates (i.e. possible attribute values) from the product profiles, then the Verify step filters out false positive candidates. We evaluate the Ask-and-Verify model on Amazon's product pages and the AliExpress public dataset, and present a comparative analysis as well as a detailed ablation study. Despite its simplicity, we show that Ask-and-Verify outperforms recent state-of-the-art models by up to 3.1{\%} absolute F1, while also scaling to thousands of attributes.",
}

% __index_level_0__: 28,019
@inproceedings{savkov-etal-2022-consultation,
    title = "Consultation Checklists: Standardising the Human Evaluation of Medical Note Generation",
    author = "Savkov, Aleksandar and Moramarco, Francesco and Papadopoulos Korfiatis, Alex and Perera, Mark and Belz, Anya and Reiter, Ehud",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.10/",
    doi = "10.18653/v1/2022.emnlp-industry.10",
    pages = "111--120",
    abstract = "Evaluating automatically generated text is generally hard due to the inherently subjective nature of many aspects of the output quality. This difficulty is compounded in automatic consultation note generation by differing opinions between medical experts both about which patient statements should be included in generated notes and about their respective importance in arriving at a diagnosis. Previous real-world evaluations of note-generation systems saw substantial disagreement between expert evaluators. In this paper we propose a protocol that aims to increase objectivity by grounding evaluations in Consultation Checklists, which are created in a preliminary step and then used as a common point of reference during quality assessment. We observed good levels of inter-annotator agreement in a first evaluation study using the protocol; further, using Consultation Checklists produced in the study as reference for automatic metrics such as ROUGE or BERTScore improves their correlation with human judgements compared to using the original human note.",
}

% __index_level_0__: 28,020
@inproceedings{do-etal-2022-towards,
    title = "Towards Need-Based Spoken Language Understanding Model Updates: What Have We Learned?",
    author = "Do, Quynh and Gaspers, Judith and Sorokin, Daniil and Lehnen, Patrick",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.11/",
    doi = "10.18653/v1/2022.emnlp-industry.11",
    pages = "121--127",
    abstract = "In productionized machine learning systems, online model performance is known to deteriorate over time when there is a distributional drift between offline training and online application data. As a remedy, models are typically retrained at fixed time intervals, implying high computational and manual costs. This work aims at decreasing such costs in productionized, large-scale Spoken Language Understanding systems. In particular, we develop a need-based re-training strategy guided by an efficient drift detector and discuss the arising challenges including system complexity, overlapping model releases, observation limitation and the absence of annotated resources at runtime. We present empirical results on historical data and confirm the utility of our design decisions via an online A/B experiment.",
}

% __index_level_0__: 28,021
@inproceedings{peris-etal-2022-knowledge,
    title = "Knowledge Distillation Transfer Sets and their Impact on Downstream {NLU} Tasks",
    author = "Peris, Charith and Tan, Lizhen and Gueudre, Thomas and Gojayev, Turan and Wei, Pan and Oz, Gokmen",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.12/",
    doi = "10.18653/v1/2022.emnlp-industry.12",
    pages = "128--137",
    abstract = "Teacher-student knowledge distillation is a popular technique for compressing today's prevailing large language models into manageable sizes that fit low-latency downstream applications. Both the teacher and the choice of transfer set used for distillation are crucial ingredients in creating a high quality student. Yet, the generic corpora used to pretrain the teacher and the corpora associated with the downstream target domain are often significantly different, which raises a natural question: should the student be distilled over the generic corpora, so as to learn from high-quality teacher predictions, or over the downstream task corpora to align with finetuning? Our study investigates this trade-off using Domain Classification (DC) and Intent Classification/Named Entity Recognition (ICNER) as downstream tasks. We distill several multilingual students from a larger multilingual LM with varying proportions of generic and task-specific datasets, and report their performance after finetuning on DC and ICNER. We observe significant improvements across tasks and test sets when only task-specific corpora are used. We also report on how the impact of adding task-specific data to the transfer set correlates with the similarity between generic and task-specific data. Our results clearly indicate that, while distillation from a generic LM benefits downstream tasks, students learn better using target domain data even if it comes at the price of noisier teacher predictions. In other words, target domain data still trumps teacher knowledge.",
}

% __index_level_0__: 28,022
@inproceedings{aguirre-etal-2022-exploiting,
    title = "Exploiting In-Domain Bilingual Corpora for Zero-Shot Transfer Learning in {NLU} of Intra-Sentential Code-Switching Chatbot Interactions",
    author = "Aguirre, Maia and Serras, Manex and Garc{\'i}a-sardi{\~n}a, Laura and L{\'o}pez-fern{\'a}ndez, Jacobo and M{\'e}ndez, Ariane and Del Pozo, Arantza",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.13/",
    doi = "10.18653/v1/2022.emnlp-industry.13",
    pages = "138--144",
    abstract = "Code-switching (CS) is a very common phenomenon in regions with various co-existing languages. Since CS is such a frequent habit in informal communications, both spoken and written, it also arises naturally in Human-Machine Interactions. Therefore, in order for natural language understanding (NLU) not to be degraded, CS must be taken into account when developing chatbots. The co-existence of multiple languages in a single NLU model has become feasible with multilingual language representation models such as mBERT. In this paper, the efficacy of zero-shot cross-lingual transfer learning with mBERT for NLU is evaluated on a Basque-Spanish CS chatbot corpus, comparing the performance of NLU models trained using in-domain chatbot utterances in Basque and/or Spanish without CS. The results obtained indicate that training joint multi-intent classification and entity recognition models on both languages simultaneously achieves the best performance, better capturing the CS patterns.",
}

% __index_level_0__: 28,023
@inproceedings{wang-etal-2022-calibrating,
    title = "Calibrating Imbalanced Classifiers with Focal Loss: An Empirical Study",
    author = "Wang, Cheng and Balazs, Jorge and Szarvas, Gy{\"o}rgy and Ernst, Patrick and Poddar, Lahari and Danchenko, Pavel",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.14/",
    doi = "10.18653/v1/2022.emnlp-industry.14",
    pages = "145--153",
    abstract = "Imbalanced data distribution is a practical and common challenge in building production-level machine learning (ML) models in industry, where data usually exhibits long-tail distributions. For instance, in virtual AI Assistants, such as Google Assistant, Amazon Alexa and Apple Siri, the {\textquotedblleft}play music{\textquotedblright} or {\textquotedblleft}set timer{\textquotedblright} utterance is exposed to an order of magnitude more traffic than other skills. This can easily cause trained models to overfit to the majority classes, categories or intents, leading to model miscalibration. The uncalibrated models output unreliable (mostly overconfident) predictions, which are at high risk of affecting downstream decision-making systems. In this work, we study the calibration of production models in the industry use-case of predicting product return reason codes in customer service conversations of an online retail store; the return reasons also exhibit class imbalance. To alleviate the resulting miscalibration in the production ML model, we streamline the model development and deployment using focal loss (CITATION). We empirically show the effectiveness of model training with focal loss in learning better calibrated models, as compared to standard cross-entropy loss. Better calibration, in turn, enables better control of the precision-recall trade-off for the models deployed in production.",
}

% __index_level_0__: 28,024
@inproceedings{garrido-ramas-etal-2022-unsupervised,
    title = "Unsupervised training data re-weighting for natural language understanding with local distribution approximation",
    author = "Garrido Ramas, Jose and Le, Dieu-thu and Chen, Bei and Kumar, Manoj and Rottmann, Kay",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.15/",
    doi = "10.18653/v1/2022.emnlp-industry.15",
    pages = "154--160",
    abstract = "One of the major challenges of training Natural Language Understanding (NLU) production models lies in the discrepancy between the distributions of the offline training data and of the online live data, due to, e.g., biased sampling schemes, cyclic seasonality shifts, annotated training data coming from a variety of different sources, and a changing pool of users. Consequently, the model trained on the offline data is biased. We often observe this problem especially in task-oriented conversational systems, where topics of interest and the characteristics of users using the system change over time. In this paper we propose an unsupervised approach to mitigate the offline training data sampling bias in multiple NLU tasks. We show that a local distribution approximation in the pre-trained embedding space enables the estimation of importance weights for training samples guiding re-sampling for an effective bias mitigation. We illustrate our novel approach using multiple NLU datasets and show improvements obtained without additional annotation, making this a general approach for mitigating effects of sampling bias.",
}

% __index_level_0__: 28,025
@inproceedings{chiu-shinzato-2022-cross,
    title = "Cross-Encoder Data Annotation for Bi-Encoder Based Product Matching",
    author = "Chiu, Justin and Shinzato, Keiji",
    editor = "Li, Yunyao and Lazaridou, Angeliki",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-industry.16/",
    doi = "10.18653/v1/2022.emnlp-industry.16",
    pages = "161--168",
    abstract = "Matching a seller-listed item to an appropriate product is an important step for an e-commerce platform. With the recent advancement in deep learning, different encoder based approaches have been proposed as solutions. When textual data for two products are available, cross-encoder approaches encode them jointly while bi-encoder approaches encode them separately. Since cross-encoders are computationally heavy, approaches based on bi-encoders are a common practice for this challenge. In this paper, we propose cross-encoder data annotation: a technique to annotate or refine human annotated training data for bi-encoder models using a cross-encoder model. This technique enables us to build a robust model without annotation on newly collected training data or further improve model performance on annotated training data. We evaluate the cross-encoder data annotation on the product matching task using a real-world e-commerce dataset containing 104 million products. Experimental results show that the cross-encoder data annotation improves accuracy by 4{\%} absolute when no annotation for training data is available, and by 2{\%} absolute when annotation for training data is available.",
}
inproceedings
poddar-etal-2022-deploying
Deploying a Retrieval based Response Model for Task Oriented Dialogues
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.17/
Poddar, Lahari and Szarvas, Gy{\"o}rgy and Wang, Cheng and Balazs, Jorge and Danchenko, Pavel and Ernst, Patrick
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
169--178
Task-oriented dialogue systems in industry settings need to have high conversational capability, be easily adaptable to changing situations and conform to business constraints. This paper describes a 3-step procedure to develop a conversational model that satisfies these criteria and can efficiently scale to rank a large set of response candidates. First, we provide a simple algorithm to semi-automatically create a high-coverage template set from historic conversations without any annotation. Second, we propose a neural architecture that encodes the dialogue context and applicable business constraints as profile features for ranking the next turn. Third, we describe a two-stage learning strategy with self-supervised training, followed by supervised fine-tuning on limited data collected through a human-in-the-loop platform. Finally, we describe offline experiments and present results of deploying our model with human-in-the-loop to converse with live customers online.
null
null
10.18653/v1/2022.emnlp-industry.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,026
inproceedings
vo-etal-2022-tackling
Tackling Temporal Questions in Natural Language Interface to Databases
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.18/
Vo, Ngoc Phuoc An and Popescu, Octavian and Manotas, Irene and Sheinin, Vadim
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
179--187
The temporal aspect is one of the most challenging areas in Natural Language Interface to Databases (NLIDB). This paper addresses and examines how temporal questions are studied and supported by the research community at two levels: popular annotated datasets (e.g., Spider) and recent advanced models. We present a new dataset with accompanying databases supporting temporal questions in NLIDB. We experiment with two SOTA models (Picard and ValueNet) to investigate how our new dataset helps these models learn and improve performance on the temporal aspect.
null
null
10.18653/v1/2022.emnlp-industry.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,027
inproceedings
vishwanathan-etal-2022-multi
Multi-Tenant Optimization For Few-Shot Task-Oriented {FAQ} Retrieval
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.19/
Vishwanathan, Asha and Warrier, Rajeev and Vadakkekara Suresh, Gautham and Kandpal, Chandra Shekhar
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
188--197
Business-specific Frequently Asked Questions (FAQ) retrieval in task-oriented dialog systems poses unique challenges vis-{\`a}-vis community-based FAQs. Each FAQ question represents an intent, which is usually an umbrella term for many related user queries. We evaluate performance for such Business FAQs both with standard FAQ retrieval techniques using query-Question (q-Q) similarity and with few-shot intent detection techniques. Implementing a real-world solution for FAQ retrieval in order to support multiple tenants (FAQ sets) entails optimizing speed, accuracy and cost. We propose a novel approach to scale multi-tenant FAQ applications in a real-world context by contrastive fine-tuning of the last layer in sentence Bi-Encoders along with tenant-specific weight switching.
null
null
10.18653/v1/2022.emnlp-industry.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,028
inproceedings
dekeyser-etal-2022-iterative
Iterative Stratified Testing and Measurement for Automated Model Updates
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.20/
Dekeyser, Elizabeth and Comment, Nicholas and Pei, Shermin and Kumar, Rajat and Rai, Shruti and Wu, Fengtao and Haverty, Lisa and Shimizu, Kanna
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
198--205
Automating updates to machine learning systems is an important but understudied challenge in AutoML. The high model variance of many cutting-edge deep learning architectures means that retraining a model provides no guarantee of accurate inference on all sample types. To address this concern, we present Automated Data-Shape Stratified Model Updates (ADSMU), a novel framework that relies on iterative model building coupled with data-shape stratified model testing and improvement. Using ADSMU, we observed a 26{\%} (relative) improvement in accuracy for new model use cases on a large-scale NLU system, compared to a naive (manually) retrained baseline and current cutting-edge methods.
null
null
10.18653/v1/2022.emnlp-industry.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,029
inproceedings
gandhi-etal-2022-slate
{SLATE}: A Sequence Labeling Approach for Task Extraction from Free-form Inked Content
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.21/
Gandhi, Apurva and Serrao, Ryan and Fang, Biyi and Antonius, Gilbert and Hong, Jenna and Nguyen, Tra My and Yi, Sheng and Nosakhare, Ehi and Shaffer, Irene and Srinivasan, Soundararajan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
206--217
We present SLATE, a sequence labeling approach for extracting tasks from free-form content such as digitally handwritten (or {\textquotedblleft}inked{\textquotedblright}) notes on a virtual whiteboard. Our approach allows us to create a single, low-latency model to simultaneously perform sentence segmentation and classification of these sentences into task/non-task sentences. SLATE greatly outperforms a baseline two-model (sentence segmentation followed by classification model) approach, achieving a task F1 score of 84.4{\%}, a sentence segmentation (boundary similarity) score of 88.4{\%} and three times lower latency compared to the baseline. Furthermore, we provide insights into tackling challenges of performing NLP on the inking domain. We release both our code and dataset for this novel task.
null
null
10.18653/v1/2022.emnlp-industry.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,030
inproceedings
rabinovich-etal-2022-gaining
Gaining Insights into Unrecognized User Utterances in Task-Oriented Dialog Systems
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.22/
Rabinovich, Ella and Vetzler, Matan and Boaz, David and Kumar, Vineet and Pandey, Gaurav and Anaby Tavor, Ateret
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
218--225
The rapidly growing market demand for automatic dialogue agents capable of goal-oriented behavior has caused many tech-industry leaders to invest considerable efforts into task-oriented dialog systems. The success of these systems is highly dependent on the accuracy of their intent identification {--} the process of deducing the goal or meaning of the user's request and mapping it to one of the known intents for further processing. Gaining insights into unrecognized utterances {--} user requests the system fails to attribute to a known intent {--} is therefore a key process in the continuous improvement of goal-oriented dialog systems. We present an end-to-end pipeline for processing unrecognized user utterances, deployed in a real-world, commercial task-oriented dialog system, including a specifically-tailored clustering algorithm, a novel approach to cluster representative extraction, and cluster naming. We evaluated the proposed components, demonstrating their benefits in the analysis of unrecognized user requests.
null
null
10.18653/v1/2022.emnlp-industry.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,031
inproceedings
cao-etal-2022-cocoid
{C}o{C}o{ID}: Learning Contrastive Representations and Compact Clusters for Semi-Supervised Intent Discovery
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.23/
Cao, Qian and Xiong, Deyi and Wang, Qinlong and Peng, Xia
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
226--236
Intent discovery aims to mine new intents from user utterances that are not present in the set of manually predefined intents. Previous approaches to intent discovery usually automatically cluster novel intents with prior knowledge from intent-labeled data in a semi-supervised way. In this paper, we focus on discriminative user utterance representation learning and the compactness of the learned intent clusters. We propose a novel semi-supervised intent discovery framework, CoCoID, with two essential components: contrastive user utterance representation learning and intra-cluster knowledge distillation. The former attempts to detect similar and dissimilar intents from a minibatch-wise perspective. The latter regularizes the predictive distribution of the model over samples in a cluster-wise way. We conduct experiments on both real-life challenging datasets (i.e., CLINC and BANKING) that are curated to emulate the true environment of commercial/production systems and traditional datasets (i.e., StackOverflow and DBPedia) to evaluate the proposed CoCoID. Experiment results demonstrate that our model substantially outperforms state-of-the-art intent discovery models (12 baselines) by over 1.4 ACC and ARI points and 1.1 NMI points across the four datasets. Further analyses suggest that CoCoID is able to learn contrastive representations and compact clusters for intent discovery.
null
null
10.18653/v1/2022.emnlp-industry.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,032
inproceedings
j-kurisinkel-chen-2022-tractable
Tractable {\&} Coherent Multi-Document Summarization: Discrete Optimization of Multiple Neural Modeling Streams via Integer Linear Programming
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.24/
J Kurisinkel, Litton and Chen, Nancy
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
237--243
One key challenge in multi-document summarization is that the generated summary is often less coherent than in single-document summarization, due to the larger heterogeneity of the input source content. In this work, we propose a generic framework that jointly considers coherence and informativeness in multi-document summarization and offers provisions to replace individual components based on the domain of the source text. In particular, the framework characterizes coherence through verb transitions and entity mentions, and takes advantage of syntactic parse trees and neural modeling for intra-sentential noise pruning. The framework casts the entire problem as an integer linear programming optimization problem with neural and non-neural models as linear components. We evaluate our method in the news and legal domains. The proposed approach consistently performs better than competitive baselines on both objective metrics and human evaluation.
null
null
10.18653/v1/2022.emnlp-industry.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,033
inproceedings
qiao-etal-2022-grafting
Grafting Pre-trained Models for Multimodal Headline Generation
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.25/
Qiao, Lingfeng and Wu, Chen and Liu, Ye and Peng, Haoyuan and Yin, Di and Ren, Bo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
244--253
Multimodal headline generation utilizes both video frames and transcripts to generate natural language titles for videos. Due to a lack of large-scale, manually annotated data, the task of annotating grounded headlines for videos is labor-intensive and impractical. Previous research on pre-trained language models and video-language models has achieved significant progress on related downstream tasks. However, none of them can be directly applied to a multimodal headline architecture, where we need both a multimodal encoder and a sentence decoder. A major challenge in simply gluing a language model and a video-language model together is modality balance, i.e., combining their complementary visual-language abilities. In this paper, we propose a novel approach to graft the video encoder from the pre-trained video-language model onto the generative pre-trained language model. We also present a consensus fusion mechanism for the integration of the different components, via inter/intra-modality relations. Empirically, experiments show that the grafted model achieves strong results on a brand-new dataset collected from real-world applications.
null
null
10.18653/v1/2022.emnlp-industry.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,034
inproceedings
le-etal-2022-semi
Semi-supervised Adversarial Text Generation based on {S}eq2{S}eq models
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.26/
Le, Hieu and Le, Dieu-thu and Weber, Verena and Church, Chris and Rottmann, Kay and Bradford, Melanie and Chin, Peter
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
254--262
To improve deep learning models' robustness, adversarial training has been frequently used in computer vision with satisfying results. However, adversarial perturbation of text has turned out to be more challenging due to the discrete nature of text. The generated adversarial text might not sound natural or might not preserve semantics, which is key for real-world applications where text classification is based on semantic meaning. In this paper, we describe a new way of generating adversarial samples by using pseudo-labeled in-domain text data to train a seq2seq model for adversarial generation and combining it with paraphrase detection. We showcase the benefit of our approach for a real-world Natural Language Understanding (NLU) task, which maps a user's request to an intent. Furthermore, we experiment with gradient-based training for the NLU task and try using token importance scores to guide the adversarial text generation. We show that our approach can generate realistic and relevant adversarial samples compared to other state-of-the-art adversarial training methods. Applying adversarial training using these generated samples helps the NLU model to recover up to 70{\%} of these types of errors and makes the model more robust, especially in the tail distribution in a large-scale real-world application.
null
null
10.18653/v1/2022.emnlp-industry.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,035
inproceedings
fuchs-etal-2022-yet
Is it out yet? Automatic Future Product Releases Extraction from Web Data
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.27/
Fuchs, Gilad and Ben-shaul, Ido and Mandelbrod, Matan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
263--271
Identifying the release of new products and their predicted demand in advance is highly valuable for E-Commerce marketplaces and retailers. Information about an upcoming product release is used for inventory management, marketing campaigns and pre-order suggestions. Often, the announcement of an upcoming product release is widely available on multiple web pages such as blogs, chats or news articles. However, to the best of our knowledge, an automatic system to extract future product releases from web data has not been presented. In this work we describe an ML-powered multi-stage pipeline to automatically identify future product releases and rank their predicted demand from unstructured pages across the whole web. Our pipeline includes a novel Longformer-based model which uses a global attention mechanism guided by pre-calculated Named Entity Recognition predictions related to product releases. The model training data is based on a new corpus of 30K web pages manually annotated to identify future product releases. We made the dataset openly available at \url{https://doi.org/10.5281/zenodo.6894770}.
null
null
10.18653/v1/2022.emnlp-industry.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,036
inproceedings
lin-etal-2022-automatic-scene
Automatic Scene-based Topic Channel Construction System for {E}-Commerce
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.28/
Lin, Peng and Zou, Yanyan and Wu, Lingfei and Ma, Mian and Ding, Zhuoye and Long, Bo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
272--284
Scene marketing that well demonstrates user interests within a certain scenario has proved effective for offline shopping. To conduct scene marketing for e-commerce platforms, this work presents a novel product form, the scene-based topic channel, which typically consists of a list of diverse products belonging to the same usage scenario and a topic title that describes the scenario with marketing words. As manual construction of channels is time-consuming due to billions of products as well as dynamic and diverse customer interests, it is necessary to leverage AI techniques to automatically construct channels for certain usage scenarios and even discover novel topics. To be specific, we first frame the channel construction task as a two-step problem, i.e., scene-based topic generation and product clustering, and propose an E-commerce Scene-based Topic Channel construction system (i.e., ESTC) to achieve automated production, consisting of a scene-based topic generation model for the e-commerce domain, product clustering on the basis of topic similarity, and quality control based on automatic model filtering and human screening. Extensive offline experiments and an online A/B test validate the effectiveness of this novel product form as well as the proposed system. In addition, we also share our experience of deploying the proposed system on a real-world e-commerce recommendation platform.
null
null
10.18653/v1/2022.emnlp-industry.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,037
inproceedings
tang-etal-2022-speechnet
{S}peech{N}et: Weakly Supervised, End-to-End Speech Recognition at Industrial Scale
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.29/
Tang, Raphael and Kumar, Karun and Yang, Gefei and Pandey, Akshat and Mao, Yajie and Belyaev, Vladislav and Emmadi, Madhuri and Murray, Craig and Ture, Ferhan and Lin, Jimmy
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
285--293
End-to-end automatic speech recognition systems represent the state of the art, but they rely on thousands of hours of manually annotated speech for training, as well as heavyweight computation for inference. Of course, this impedes commercialization since most companies lack vast human and computational resources. In this paper, we explore training and deploying an ASR system in the label-scarce, compute-limited setting. To reduce human labor, we use a third-party ASR system as a weak supervision source, supplemented with labeling functions derived from implicit user feedback. To accelerate inference, we propose to route production-time queries across a pool of CUDA graphs of varying input lengths, the distribution of which best matches the traffic's. Compared to our third-party ASR, we achieve a relative improvement in word-error rate of 8{\%} and a speedup of 600{\%}. Our system, called SpeechNet, currently serves 12 million queries per day on our voice-enabled smart television. To our knowledge, this is the first time a large-scale, Wav2vec-based deployment has been described in the academic literature.
null
null
10.18653/v1/2022.emnlp-industry.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,038
inproceedings
stowe-etal-2022-controlled
Controlled Language Generation for Language Learning Items
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.30/
Stowe, Kevin and Ghosh, Debanjan and Zhao, Mengxuan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
294--305
This work aims to employ natural language generation (NLG) to rapidly generate items for English language learning applications: this requires both language models capable of generating fluent, high-quality English and the ability to control the generated output to match the requirements of the relevant items. We experiment with deep pretrained models for this task, developing novel methods for controlling items for factors relevant to language learning: diverse sentences for different proficiency levels and argument structure to test grammar. Human evaluation demonstrates high grammaticality scores for all models (3.4 and above out of 4), and higher length (24{\%}) and complexity (9{\%}) over the baseline for the advanced proficiency model. Our results show that we can achieve strong performance while adding additional control to ensure diverse, tailored content for individual users.
null
null
10.18653/v1/2022.emnlp-industry.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,039
inproceedings
wang-etal-2022-improving-text
Improving Text-to-{SQL} Semantic Parsing with Fine-grained Query Understanding
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.31/
Wang, Jun and Ng, Patrick and Li, Alexander Hanbo and Jiang, Jiarong and Wang, Zhiguo and Xiang, Bing and Nallapati, Ramesh and Sengupta, Sudipta
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
306--312
Most recent research on Text-to-SQL semantic parsing relies on either the parser itself or a simple heuristic-based approach to understand the natural language query (NLQ). When synthesizing a SQL query, no explicit semantic information about the NLQ is available to the parser, which leads to undesirable generalization performance. In addition, without lexical-level fine-grained query understanding, linking between the query and the database can only rely on fuzzy string matching, which leads to suboptimal performance in real applications. In view of this, in this paper we present a general-purpose, modular neural semantic parsing framework that is based on token-level fine-grained query understanding. Our framework consists of three modules: a named entity recognizer (NER), a neural entity linker (NEL) and a neural semantic parser (NSP). By jointly modeling the query and the database, the NER model analyzes user intents and identifies entities in the query. The NEL model links typed entities to the schema and cell values in the database. The parser model leverages the available semantic information and linking results and synthesizes tree-structured SQL queries based on a dynamically generated grammar. Experiments on SQUALL, a newly released semantic parsing dataset, show that we can achieve 56.8{\%} execution accuracy on the WikiTableQuestions (WTQ) test set, which outperforms the state-of-the-art model by 2.7{\%}.
null
null
10.18653/v1/2022.emnlp-industry.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,040
inproceedings
li-etal-2022-unsupervised-dense
Unsupervised Dense Retrieval for Scientific Articles
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.32/
Li, Dan and Yadav, Vikrant and Afzal, Zubair and Tsatsaronis, George
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
313--321
In this work, we build a dense retrieval based semantic search engine on scientific articles from Elsevier. The major challenge is that there is no labeled data for training and testing. We apply a state-of-the-art unsupervised dense retrieval model called Generative Pseudo Labeling that generates high-quality pseudo training labels. Furthermore, since the articles are unbalanced across different domains, we select passages from multiple domains to form balanced training data. For the evaluation, we create two test sets: one manually annotated and one automatically created from the meta-information of our data. We compare the semantic search engine with the currently deployed lexical search engine on the two test sets. The results of the experiment show that the semantic search engine trained with pseudo training labels can significantly improve search performance.
null
null
10.18653/v1/2022.emnlp-industry.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,041
inproceedings
govind-sohoney-2022-learning
Learning Geolocations for Cold-Start and Hard-to-Resolve Addresses via Deep Metric Learning
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.33/
Govind and Sohoney, Saurabh
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
322--331
With ever-growing digital adoption in society and increasing demand for businesses to deliver to customers' doorsteps, the last-mile hop of transportation planning poses unique challenges in emerging geographies with unstructured addresses. One of the crucial inputs to facilitate effective planning is the task of geolocating customer addresses. Existing systems operate by aggregating historical delivery locations or by resolving/matching addresses to known buildings and campuses to vend a high-precision geolocation. However, by design they fail to cater to a significant fraction of addresses which are new in the system and have inaccurate or missing building-level information. We propose a framework to resolve these addresses (referred to as hard-to-resolve henceforth) to a shallower granularity termed as neighbourhood. Specifically, we propose a weakly supervised deep metric learning model to encode the geospatial semantics in address embeddings. We present an empirical evaluation on India (IN) and the United Arab Emirates (UAE) hard-to-resolve addresses to show significant improvements in learning geolocations, i.e., 22{\%} (IN) {\&} 55{\%} (UAE) reduction in delivery defects (where the learnt geocode is Y meters away from the actual location), and 43{\%} (IN) {\&} 90{\%} (UAE) reduction in the 50th percentile (p50) distance between learnt and actual delivery locations over the existing production system.
null
null
10.18653/v1/2022.emnlp-industry.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,042
inproceedings
sehanobish-etal-2022-meta
Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.34/
Sehanobish, Arijit and Kannan, Kawshik and Abraham, Nabila and Das, Anasuya and Odry, Benjamin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
332--347
Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP). However, fine-tuning such models still requires a large number of training examples for each target task, so annotating multiple datasets and training these models on various downstream tasks becomes time-consuming and expensive. In this work, we propose a simple extension of Prototypical Networks for few-shot text classification. Our main idea is to replace the class prototypes by Gaussians and introduce a regularization term that encourages the examples to be clustered near the appropriate class centroids. Experimental results show that our method outperforms various strong baselines on 13 public and 4 internal datasets. Furthermore, we use the class distributions as a tool for detecting potential out-of-distribution (OOD) data points during deployment.
null
null
10.18653/v1/2022.emnlp-industry.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,043
inproceedings
koleva-etal-2022-named
Named Entity Recognition in Industrial Tables using Tabular Language Models
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.35/
Koleva, Aneta and Ringsquandl, Martin and Buckley, Mark and Hasan, Rakeb and Tresp, Volker
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
348--356
Specialized transformer-based models for encoding tabular data have gained interest in academia. Although tabular data is omnipresent in industry, applications of table transformers are still missing. In this paper, we study how these models can be applied to an industrial Named Entity Recognition (NER) problem where the entities are mentioned in tabular-structured spreadsheets. The highly technical nature of spreadsheets as well as the lack of labeled data present major challenges for fine-tuning transformer-based models. Therefore, we develop a dedicated table data augmentation strategy based on available domain-specific knowledge graphs. We show that this boosts performance in our low-resource scenario considerably. Further, we investigate the benefits of tabular structure as inductive bias compared to tables as linearized sequences. Our experiments confirm that a table transformer outperforms other baselines and that its tabular inductive bias is vital for convergence of transformer-based models.
null
null
10.18653/v1/2022.emnlp-industry.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,044
inproceedings
chen-etal-2022-reinforced
Reinforced Question Rewriting for Conversational Question Answering
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.36/
Chen, Zhiyu and Zhao, Jie and Fang, Anjie and Fetahu, Besnik and Rokhlenko, Oleg and Malmasi, Shervin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
357--370
Conversational Question Answering (CQA) aims to answer questions contained within dialogues, which are not easily interpretable without context. Developing a model to rewrite conversational questions into self-contained ones is an emerging solution in industry settings, as it allows using existing single-turn QA systems to avoid training a CQA model from scratch. Previous work trains rewriting models using human rewrites as supervision. However, such objectives are disconnected from QA models, and therefore more human-like rewrites do not guarantee better QA performance. In this paper we propose using QA feedback to supervise the rewriting model with reinforcement learning. Experiments show that our approach can effectively improve QA performance over baselines for both extractive and retrieval QA. Furthermore, human evaluation shows that our method can generate more accurate and detailed rewrites when compared to human annotations.
null
null
10.18653/v1/2022.emnlp-industry.36
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,045
inproceedings
schroedl-etal-2022-improving
Improving Large-Scale Conversational Assistants using Model Interpretation based Training Sample Selection
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.37/
Schroedl, Stefan and Kumar, Manoj and Hajebi, Kiana and Ziyadi, Morteza and Venkatapathy, Sriram and Ramakrishna, Anil and Gupta, Rahul and Natarajan, Pradeep
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
371--378
This paper presents an approach to identify samples from live traffic where the customer implicitly communicated satisfaction with Alexa's responses, by leveraging interpretations of model behavior. Such customer signals are noisy, and adding a large number of samples from live traffic to the training set makes re-training infeasible. Our work addresses these challenges by identifying a small number of samples that grow the training set by {\textasciitilde}0.05{\%} while producing statistically significant improvements in both offline and online tests.
null
null
10.18653/v1/2022.emnlp-industry.37
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,046
inproceedings
zhong-etal-2022-improving-precancerous
Improving Precancerous Case Characterization via Transformer-based Ensemble Learning
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.38/
Zhong, Yizhen and Xiao, Jiajie and Vetterli, Thomas and Matin, Mahan and Loo, Ellen and Lin, Jimmy and Bourgon, Richard and Shapira, Ofer
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
379--389
The application of natural language processing (NLP) to cancer pathology reports has been focused on detecting cancer cases, largely ignoring precancerous cases. Improving the characterization of precancerous adenomas assists in developing diagnostic tests for early cancer detection and prevention, especially for colorectal cancer (CRC). Here we developed transformer-based deep neural network NLP models to perform the CRC phenotyping, with the goal of extracting precancerous lesion attributes and distinguishing cancer and precancerous cases. We achieved 0.914 macro-F1 scores for classifying patients into negative, non-advanced adenoma, advanced adenoma and CRC. We further improved the performance to 0.923 using an ensemble of classifiers for cancer status classification and lesion size named-entity recognition (NER). Our results demonstrated the potential of using NLP to leverage real-world health record data to facilitate the development of diagnostic tests for early cancer prevention.
null
null
10.18653/v1/2022.emnlp-industry.38
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,047
inproceedings
chen-etal-2022-developing
Developing Prefix-Tuning Models for Hierarchical Text Classification
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.39/
Chen, Lei and Chou, Houwei and Zhu, Xiaodan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
390--397
Hierarchical text classification (HTC) is a key problem and task in many industrial applications, which aims to predict labels organized in a hierarchy for given input text. For example, HTC can group the descriptions of online products into a taxonomy or organize customer reviews into a hierarchy of categories. In real-life applications, while Pre-trained Language Models (PLMs) have dominated many NLP tasks, they face significant challenges too{---}the conventional fine-tuning process needs to modify and save models with a huge number of parameters. This is becoming more critical for HTC in both global and local modelling{---}the latter needs to learn multiple classifiers at different levels/nodes in a hierarchy. The concern will be even more serious since PLM sizes are continuing to increase in order to attain more competitive performances. Most recently, prefix tuning has become a very attractive technology by only tuning and saving a tiny set of parameters. Exploring prefix tuning for HTC is hence highly desirable and has a timely impact. In this paper, we investigate prefix tuning on HTC in two typical setups: local and global HTC. Our experiments show that the prefix-tuning model needs less than 1{\%} of the parameters and can achieve performance comparable to regular full fine-tuning. We demonstrate that using contrastive learning in learning prefix vectors can further improve HTC performance.
null
null
10.18653/v1/2022.emnlp-industry.39
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,048
inproceedings
bis-etal-2022-paige
{PAIGE}: Personalized Adaptive Interactions Graph Encoder for Query Rewriting in Dialogue Systems
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.40/
Bi{\'s}, Daniel and Gupta, Saurabh and Hao, Jie and Fan, Xing and Guo, Chenlei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
398--408
Unexpected responses or repeated clarification questions from conversational agents detract from the users' experience with technology meant to streamline their daily tasks. To reduce these frictions, Query Rewriting (QR) techniques replace transcripts of faulty queries with alternatives that lead to responses that satisfy the users' needs. Despite their successes, existing QR approaches are limited in their ability to fix queries that require considering users' personal preferences. We improve QR by proposing the Personalized Adaptive Interactions Graph Encoder (PAIGE). PAIGE is the first QR architecture that jointly models users' affinities and query semantics end-to-end. The core idea is to represent previous user-agent interactions and world knowledge in a structured form {---} a heterogeneous graph {---} and apply message passing to propagate latent representations of users' affinities to refine utterance embeddings. Using these embeddings, PAIGE can potentially provide different rewrites given the same query for users with different preferences. Our model, trained without any human-annotated data, improves the rewrite retrieval precision of state-of-the-art baselines by 12.5{--}17.5{\%} while having nearly ten times fewer parameters.
null
null
10.18653/v1/2022.emnlp-industry.40
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,049
inproceedings
gee-etal-2022-fast
Fast Vocabulary Transfer for Language Model Compression
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.41/
Gee, Leonidas and Zugarini, Andrea and Rigutini, Leonardo and Torroni, Paolo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
409--416
Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.
null
null
10.18653/v1/2022.emnlp-industry.41
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,050
inproceedings
wanigasekara-etal-2022-multimodal
Multimodal Context Carryover
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.42/
Wanigasekara, Prashan and Gupta, Nalin and Yang, Fan and Barut, Emre and Raeesy, Zeynab and Qin, Kechen and Rawls, Stephen and Liu, Xinyue and Su, Chengwei and Sandiri, Spurthi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
417--428
Multi-modality support has become an integral part of creating a seamless user experience with modern voice assistants with smart displays. Users refer to images, video thumbnails, or the accompanying text descriptions on the screen through voice communication with AI-powered devices. This raises the need to either augment existing commercial voice-only dialogue systems with state-of-the-art multimodal components, or to introduce entirely new architectures, where the latter can lead to costly system revamps. To support the emerging visual navigation and visual product selection use cases, we propose to augment commercially deployed voice-only dialogue systems with additional multi-modal components. In this work, we present a novel yet pragmatic approach to expand an existing dialogue-based context carryover system (Chen et al., 2019a) in a voice assistant with state-of-the-art multimodal components to facilitate quick delivery of visual modality support with minimal changes. We demonstrate a 35{\%} accuracy improvement over the existing system on an in-house multi-modal visual navigation data set.
null
null
10.18653/v1/2022.emnlp-industry.42
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,051
inproceedings
fetahu-etal-2022-distilling
Distilling Multilingual Transformers into {CNN}s for Scalable Intent Classification
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.43/
Fetahu, Besnik and Veeragouni, Akash and Rokhlenko, Oleg and Malmasi, Shervin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
429--439
We describe an application of Knowledge Distillation used to distill and deploy multilingual Transformer models for voice assistants, enabling text classification for customers globally. Transformers have set new state-of-the-art results for tasks like intent classification, and multilingual models exploit cross-lingual transfer to allow serving requests across 100+ languages. However, their prohibitive inference time makes them impractical to deploy in real-world scenarios with low latency requirements, such as is the case of voice assistants. We address the problem of cross-architecture distillation of multilingual Transformers to simpler models, while maintaining multilinguality without performance degradation. Training multilingual student models has received little attention, and is our main focus. We show that a teacher-student framework, where the teacher's unscaled activations (logits) on unlabelled data are used to supervise student model training, enables distillation of Transformers into efficient multilingual CNN models. Our student model achieves equivalent performance as the teacher, and outperforms a similar model trained on the labelled data used to train the teacher model. This approach has enabled us to accurately serve global customer requests at speed (18x improvement), scale, and low cost.
null
null
10.18653/v1/2022.emnlp-industry.43
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,052
inproceedings
obadinma-etal-2022-bringing
Bringing the State-of-the-Art to Customers: A Neural Agent Assistant Framework for Customer Service Support
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.44/
Obadinma, Stephen and Khan Khattak, Faiza and Wang, Shirley and Sidhorn, Tania and Lau, Elaine and Robertson, Sean and Niu, Jingcheng and Au, Winnie and Munim, Alif and Kalaiselvi Bhaskar, Karthik Raja
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
440--450
Building Agent Assistants that can help improve customer service support requires inputs from industry users and their customers, as well as knowledge about state-of-the-art Natural Language Processing (NLP) technology. We combine expertise from academia and industry to bridge the gap and build task/domain-specific Neural Agent Assistants (NAA) with three high-level components for: (1) Intent Identification, (2) Context Retrieval, and (3) Response Generation. In this paper, we outline the pipeline of the NAA's core system and also present three case studies in which three industry partners successfully adapt the framework to find solutions to their unique challenges. Our findings suggest that a collaborative process is instrumental in spurring the development of emerging NLP models for Conversational AI tasks in industry. The full reference implementation code and results are available at \url{https://github.com/VectorInstitute/NAA}.
null
null
10.18653/v1/2022.emnlp-industry.44
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,053
inproceedings
el-kurdi-etal-2022-zero
Zero-Shot Dynamic Quantization for Transformer Inference
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.45/
El-kurdi, Yousef and Quinn, Jerry and Sil, Avi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
451--457
We introduce a novel run-time method for significantly reducing the accuracy loss associated with quantizing BERT-like models to 8-bit integers. Existing methods for quantizing models either modify the training procedure or require an additional calibration step to adjust parameters, which in turn requires a selected held-out dataset. Our method permits taking advantage of quantization without the need for these adjustments. We present results on several NLP tasks demonstrating the usefulness of this technique.
null
null
10.18653/v1/2022.emnlp-industry.45
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,054
inproceedings
estes-etal-2022-fact
Fact Checking Machine Generated Text with Dependency Trees
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.46/
Estes, Alex and Vedula, Nikhita and Collins, Marcus and Cecil, Matt and Rokhlenko, Oleg
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
458--466
Factual and logical errors made by Natural Language Generation (NLG) systems limit their applicability in many settings. We study this problem in a conversational search and recommendation setting, and observe that we can often make two simplifying assumptions in this domain: (i) there exists a body of structured knowledge we can use for verifying factuality of generated text; and (ii) the text to be factually assessed typically has a well-defined structure and style. Grounded in these assumptions, we propose a fast, unsupervised and explainable technique, DepChecker, that assesses factuality of input text based on rules derived from structured knowledge patterns and dependency relations with respect to the input text. We show that DepChecker outperforms state-of-the-art, general purpose fact-checking techniques in this special, but important case.
null
null
10.18653/v1/2022.emnlp-industry.46
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,055
inproceedings
zalmout-li-2022-prototype
Prototype-Representations for Training Data Filtering in Weakly-Supervised Information Extraction
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.47/
Zalmout, Nasser and Li, Xian
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
467--474
The availability of high-quality training data is still a bottleneck for the practical utilization of information extraction models, despite the breakthroughs in zero- and few-shot learning techniques. This is further exacerbated for industry applications, where new tasks, domains, and specific use cases keep arising, which makes it impractical to depend on manually annotated data. Therefore, weak and distant supervision emerged as popular approaches to bootstrap training, utilizing labeling functions to guide the annotation process. Weakly-supervised annotation of training data is fast and efficient; however, it results in many irrelevant and out-of-context matches. This is a challenging problem that can degrade the performance of downstream models, or require a manual data cleaning step that can incur significant overhead. In this paper we present a prototype-based filtering approach that can be utilized to denoise weakly supervised training data. The system is very simple, unsupervised, scalable, and requires little manual intervention, yet results in significant precision gains. We apply the technique to the task of attribute value extraction in e-commerce websites, and achieve up to 9{\%} gain in precision for the downstream models, with a minimal drop in recall.
null
null
10.18653/v1/2022.emnlp-industry.47
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,056
inproceedings
hao-etal-2022-cgf
{CGF}: Constrained Generation Framework for Query Rewriting in Conversational {AI}
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.48/
Hao, Jie and Liu, Yang and Fan, Xing and Gupta, Saurabh and Soltan, Saleh and Chada, Rakesh and Natarajan, Pradeep and Guo, Chenlei and Tur, Gokhan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
475--483
In conversational AI agents, Query Rewriting (QR) plays a crucial role in reducing user frictions and satisfying their daily demands. User frictions have various causes, such as errors in the conversational AI system, users' accents, or their abridged language. In this work, we present a novel Constrained Generation Framework (CGF) for query rewriting at both global and personalized levels. It is based on the encoder-decoder framework, where the encoder takes the query and its previous dialogue turns as the input to form a context-enhanced representation, and the decoder uses constrained decoding to generate the rewrites based on the pre-defined global or personalized constrained decoding space. Extensive offline and online A/B experiments show that the proposed CGF significantly boosts the query rewriting performance.
null
null
10.18653/v1/2022.emnlp-industry.48
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,057
inproceedings
fu-etal-2022-entity
Entity-level Sentiment Analysis in Contact Center Telephone Conversations
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.49/
Fu, Xue-yong and Chen, Cheng and Laskar, Md Tahmid Rahman and Gardiner, Shayna and Hiranandani, Pooja and Tn, Shashi Bhushan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
484--491
Entity-level sentiment analysis predicts the sentiment about entities mentioned in a given text. It is very useful in a business context to understand user emotions towards certain entities, such as products or companies. In this paper, we demonstrate how we developed an entity-level sentiment analysis system that analyzes English telephone conversation transcripts in contact centers to provide business insight. We present two approaches, one entirely based on the transformer-based DistilBERT model, and another that uses a neural network supplemented with some heuristic rules.
null
null
10.18653/v1/2022.emnlp-industry.49
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,058
inproceedings
srinivasan-etal-2022-quill
{QUILL}: Query Intent with Large Language Models using Retrieval Augmentation and Multi-stage Distillation
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.50/
Srinivasan, Krishna and Raman, Karthik and Samanta, Anupam and Liao, Lingrui and Bertelli, Luca and Bendersky, Michael
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
492--501
Large Language Models (LLMs) have shown impressive results on a variety of text understanding tasks. Search queries, though, pose a unique challenge, given their short length and lack of nuance or context. Complicated feature engineering efforts do not always lead to downstream improvements, as their performance benefits may be offset by the increased complexity of knowledge distillation. Thus, in this paper we make the following contributions: (1) We demonstrate that Retrieval Augmentation of queries provides LLMs with valuable additional context, enabling improved understanding. While Retrieval Augmentation typically increases the latency of LMs (thus hurting distillation efficacy), (2) we provide a practical and effective way of distilling retrieval-augmented LLMs. Specifically, we use a novel two-stage distillation approach that allows us to carry over the gains of retrieval augmentation without suffering the increased compute typically associated with it. (3) We demonstrate the benefits of the proposed approach (QUILL) on a billion-scale, real-world query understanding system, resulting in huge gains. Via extensive experiments, including on public benchmarks, we believe this work offers a recipe for practical use of retrieval-augmented query understanding.
null
null
10.18653/v1/2022.emnlp-industry.50
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,059
inproceedings
qian-etal-2022-distinguish
Distinguish Sense from Nonsense: Out-of-Scope Detection for Virtual Assistants
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.51/
Qian, Cheng and Qi, Haode and Wang, Gengyu and Kunc, Ladislav and Potdar, Saloni
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
502--511
Out of Scope (OOS) detection in Conversational AI solutions enables a chatbot to handle a conversation gracefully when it is unable to make sense of the end-user query. Accurately tagging a query as out-of-domain is particularly hard in scenarios when the chatbot is not equipped to handle a topic which has semantic overlap with an existing topic it is trained on. We propose a simple yet effective OOS detection method that outperforms standard OOS detection methods in a real-world deployment of virtual assistants. We discuss the various design and deployment considerations for a cloud platform solution to train virtual assistants and deploy them at scale. Additionally, we propose a collection of datasets that replicates real-world scenarios and show comprehensive results in various settings using both offline and online evaluation metrics.
null
null
10.18653/v1/2022.emnlp-industry.51
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,060
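The out-of-scope detection record above does not spell out its method, so the snippet below shows only the standard confidence-threshold baseline such systems are usually compared against: if no in-scope intent is predicted confidently enough, the query is tagged OOS and routed to a fallback.

import numpy as np

def route(intent_probs: np.ndarray, threshold: float = 0.6):
    """intent_probs: softmax scores over in-scope intents for one query."""
    best = int(intent_probs.argmax())
    if intent_probs[best] < threshold:
        return "out_of_scope"   # hand off to a graceful fallback response
    return best                  # dispatch to the matching in-scope intent

print(route(np.array([0.2, 0.5, 0.3])))  # -> "out_of_scope"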
inproceedings
lei-etal-2022-plato
{PLATO}-Ad: A Unified Advertisement Text Generation Framework with Multi-Task Prompt Learning
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.52/
Lei, Zeyang and Zhang, Chao and Xu, Xinchao and Wu, Wenquan and Niu, Zheng-yu and Wu, Hua and Wang, Haifeng and Yang, Yi and Li, Shuanglong
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
512--520
Online advertisement text generation aims at generating attractive and persuasive text ads to appeal to users clicking ads or purchasing products. While pretraining-based models have achieved remarkable success in generating high-quality text ads, some challenges still remain, such as ad generation in low-resource scenarios and training efficiency for multiple ad tasks. In this paper, we propose a novel unified text ad generation framework with multi-task prompt learning, called PLATO-Ad, to tackle these problems. Specifically, we design a three-phase transfer learning mechanism to tackle the low-resource ad generation problem. Furthermore, we present a novel multi-task prompt learning mechanism to efficiently utilize a single lightweight model to solve multiple ad generation tasks without loss of performance compared to training a separate model for each task. Finally, we conduct offline and online evaluations and experiment results show that PLATO-Ad significantly outperforms the state-of-the-art on both offline and online metrics. PLATO-Ad has been deployed in a leading advertising platform with 3.5{\%} CTR improvement on search ad descriptions and 10.4{\%} CTR improvement on feed ad titles.
null
null
10.18653/v1/2022.emnlp-industry.52
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,061
inproceedings
gupta-etal-2022-dense
Dense Feature Memory Augmented Transformers for {COVID}-19 Vaccination Search Classification
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.53/
Gupta, Jai and Tay, Yi and Kamath, Chaitanya and Tran, Vinh and Metzler, Donald and Bavadekar, Shailesh and Sun, Mimi and Gabrilovich, Evgeniy
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
521--530
With the devastating outbreak of COVID-19, vaccines are one of the crucial lines of defense against mass infection in this global pandemic. Given the protection they provide, vaccines are becoming mandatory in certain social and professional settings. This paper presents a classification model for detecting COVID-19 vaccination related search queries, a machine learning model that is used to generate search insights for COVID-19 vaccinations. The proposed method combines and leverages advancements from modern state-of-the-art (SOTA) natural language understanding (NLU) techniques such as pretrained Transformers with traditional dense features. We propose a novel approach of considering dense features as memory tokens that the model can attend to. We show that this new modeling approach enables a significant improvement to the Vaccine Search Insights (VSI) task, improving a strong, well-established gradient-boosting baseline by a relative +15{\%} in F1 score and +14{\%} in precision.
null
null
10.18653/v1/2022.emnlp-industry.53
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,062
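A minimal sketch of the memory-token idea in the vaccination-search record above: each traditional dense feature is projected into a token-sized vector and prepended to the text embeddings so self-attention can mix the two modalities. The per-feature projection layout is an assumption; the paper's fusion may differ in detail.

import torch
import torch.nn as nn

class DenseMemoryFusion(nn.Module):
    def __init__(self, d_model: int = 768):
        super().__init__()
        # One learned projection turns each scalar feature into a memory token.
        self.proj = nn.Linear(1, d_model)

    def forward(self, token_embeds: torch.Tensor, dense: torch.Tensor):
        # token_embeds: (B, L, d_model); dense: (B, n_dense)
        memory = self.proj(dense.unsqueeze(-1))   # (B, n_dense, d_model)
        # Prepend the memory tokens so self-attention can attend to them.
        return torch.cat([memory, token_embeds], dim=1)

fusion = DenseMemoryFusion(d_model=8)
out = fusion(torch.randn(2, 5, 8), torch.randn(2, 3))
print(out.shape)   # (2, 3 + 5, 8): three memory tokens ahead of five text tokens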
inproceedings
park-lee-2022-full
Full-Stack Information Extraction System for Cybersecurity Intelligence
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.54/
Park, Youngja and Lee, Taesung
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
531--539
Due to rapidly growing cyber-attacks and security vulnerabilities, many reports on cyber-threat intelligence (CTI) are being published daily. While these reports can help security analysts to understand on-going cyber threats, the overwhelming amount of information makes it difficult to digest the information in a timely manner. This paper presents SecIE, an industrial-strength full-stack information extraction (IE) system for the security domain. SecIE can extract a large number of security entities, relations and the temporal information of the relations, which is critical for cyber-threat investigations. Our evaluation with 133 labeled threat reports containing 108,021 tokens shows that SecIE achieves over 92{\%} F1-score for entity extraction and about 70{\%} F1-score for relation extraction. We also showcase how SecIE can be used for downstream security applications.
null
null
10.18653/v1/2022.emnlp-industry.54
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,063
inproceedings
nayak-garera-2022-deploying
Deploying Unified {BERT} Moderation Model for {E}-Commerce Reviews
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.55/
Nayak, Ravindra and Garera, Nikesh
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
540--547
Moderation of user-generated e-commerce content has become crucial due to the large and diverse user base on the platforms. Product reviews and ratings have become an integral part of the shopping experience to build trust among users. Due to the high volume of reviews generated on a vast catalog of products, manual moderation is infeasible, making machine moderation a necessity. In this work, we describe our deployed system and models for automated moderation of user-generated content. At the heart of our approach, we outline several rejection reasons for review {\&} rating moderation and explore a unified BERT model to moderate them. We convey the importance of product vertical embeddings for the relevancy of the review for a given product and highlight the advantages of pre-training the BERT models with monolingual data to cope with the domain gap in the absence of huge labelled datasets. We observe a 4.78{\%} F1 increase with less labelled data and a 2.57{\%} increase in F1 score on the review data compared to the publicly available BERT-based models. Our best model In-House-BERT-vertical sends only 5.89{\%} of total reviews to manual moderation and has been deployed in production serving live traffic for millions of users.
null
null
10.18653/v1/2022.emnlp-industry.55
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,064
inproceedings
zhou-etal-2022-simans
{S}im{ANS}: Simple Ambiguous Negatives Sampling for Dense Text Retrieval
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.56/
Zhou, Kun and Gong, Yeyun and Liu, Xiao and Zhao, Wayne Xin and Shen, Yelong and Dong, Anlei and Lu, Jingwen and Majumder, Rangan and Wen, Ji-rong and Duan, Nan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
548--559
Sampling proper negatives from a large document pool is vital to effectively train a dense retrieval model. However, existing negative sampling strategies suffer from the uninformative or false negative problem. In this work, we empirically show that according to the measured relevance scores, the negatives ranked around the positives are generally more informative and less likely to be false negatives. Intuitively, these negatives are not too hard (\textit{may be false negatives}) or too easy (\textit{uninformative}). They are the ambiguous negatives and need more attention during training. Thus, we propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives. Extensive experiments on four public and one industry datasets show the effectiveness of our approach. We made the code and models publicly available at \url{https://github.com/microsoft/SimXNS}.
null
null
10.18653/v1/2022.emnlp-industry.56
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,065
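A small sketch of ambiguous-negative sampling in the spirit of the SimANS record above: negatives whose retrieval scores sit close to the positive's score receive most of the sampling mass. The squared-distance kernel below is an assumed stand-in; the released code linked in the abstract has the authors' exact distribution.

import numpy as np

def sample_ambiguous_negatives(neg_scores, pos_score, k=8, a=1.0, b=0.0):
    gap = np.asarray(neg_scores) - pos_score - b
    probs = np.exp(-a * gap ** 2)   # peaks where a negative looks like the positive
    probs /= probs.sum()
    rng = np.random.default_rng(0)
    return rng.choice(len(neg_scores), size=k, replace=False, p=probs)

# Negatives scoring near the positive (0.88) are far more likely to be drawn.
print(sample_ambiguous_negatives([0.9, 0.5, 0.1, 0.85], pos_score=0.88, k=2))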
inproceedings
zhang-etal-2022-revisiting
Revisiting and Advancing {C}hinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.57/
Zhang, Taolin and Dong, Junwei and Wang, Jianing and Wang, Chengyu and Wang, Ang and Liu, Yinghui and Huang, Jun and Li, Yong and He, Xiaofeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
560--570
Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge bases, and/or linguistic knowledge from syntactic or dependency analysis. Unlike English, there is a lack of high-performing open-source Chinese KEPLMs in the natural language processing (NLP) community to support various language understanding applications. In this paper, we revisit and advance the development of Chinese natural language understanding with a series of novel Chinese KEPLMs released in various parameter sizes, namely CKBERT (Chinese knowledge-enhanced BERT). Specifically, both relational and linguistic knowledge is effectively injected into CKBERT based on two novel pre-training tasks, i.e., linguistic-aware masked language modeling and contrastive multi-hop relation modeling. Based on the above two pre-training paradigms and our in-house implemented TorchAccelerator, we have pre-trained base (110M), large (345M) and huge (1.3B) versions of CKBERT efficiently on GPU clusters. Experiments demonstrate that CKBERT consistently outperforms strong baselines for Chinese over various benchmark NLP tasks and in terms of different model sizes.
null
null
10.18653/v1/2022.emnlp-industry.57
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,066
inproceedings
oikawa-etal-2022-stacking
A Stacking-based Efficient Method for Toxic Language Detection on Live Streaming Chat
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.58/
Oikawa, Yuto and Nakayama, Yuki and Murakami, Koji
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
571--578
In a live streaming chat on a video streaming service, it is crucial to filter out toxic comments with online processing to prevent users from reading comments in real-time. However, recent toxic language detection methods rely on deep learning, which does not scale well in terms of inference speed. Also, these methods do not consider the computational resource constraints expected of a deployed system (e.g., no GPU resource). This paper presents an efficient method for toxic language detection that is aware of real-world scenarios. Our proposed architecture is based on partial stacking that feeds initial results with low confidence to a meta-classifier. Experimental results show that our method achieves a much faster inference speed than BERT-based models with comparable performance.
null
null
10.18653/v1/2022.emnlp-industry.58
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,067
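A sketch of the partial-stacking routing described in the live-chat moderation record above: a fast base classifier handles confident comments, and only low-confidence ones pay for a heavier meta-classifier. The threshold and the stand-in meta model are illustrative assumptions.

import numpy as np

def predict_with_fallback(base_probs, meta_predict, tau=0.8):
    # base_probs: (N, C) class probabilities from a fast base classifier.
    confident = base_probs.max(axis=1) >= tau
    out = base_probs.argmax(axis=1)
    if (~confident).any():
        # Only the low-confidence comments are forwarded to the meta-classifier.
        out[~confident] = meta_predict(base_probs[~confident])
    return out

# Stand-in meta-classifier: here it just re-argmaxes; in practice it would be a
# stronger model fed the base model's soft outputs plus extra features.
meta = lambda p: p.argmax(axis=1)
print(predict_with_fallback(np.array([[0.95, 0.05], [0.55, 0.45]]), meta))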
inproceedings
goyal-etal-2022-end
End-to-End Speech to Intent Prediction to improve {E}-commerce Customer Support Voicebot in {H}indi and {E}nglish
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.59/
Goyal, Abhinav and Singh, Anupam and Garera, Nikesh
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
579--586
Automation of on-call customer support relies heavily on accurate and efficient speech-to-intent (S2I) systems. Building such systems using multi-component pipelines can pose various challenges because they require large annotated datasets, have higher latency, and have complex deployment. These pipelines are also prone to compounding errors. To overcome these challenges, we discuss an end-to-end (E2E) S2I model for customer support voicebot task in a bilingual setting. We show how we can solve E2E intent classification by leveraging a pre-trained automatic speech recognition (ASR) model with slight modification and fine-tuning on small annotated datasets. Experimental results show that our best E2E model outperforms a conventional pipeline by a relative {\textasciitilde}27{\%} on the F1 score.
null
null
10.18653/v1/2022.emnlp-industry.59
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,068
inproceedings
cai-etal-2022-pile
{PILE}: Pairwise Iterative Logits Ensemble for Multi-Teacher Labeled Distillation
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.60/
Cai, Lianshang and Zhang, Linhao and Ma, Dehong and Fan, Jun and Shi, Daiting and Wu, Yi and Cheng, Zhicong and Gu, Simiu and Yin, Dawei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
587--595
Pre-trained language models have become a crucial part of ranking systems and achieved very impressive effects recently. To maintain high performance while keeping efficient computations, knowledge distillation is widely used. In this paper, we focus on two key questions in knowledge distillation for ranking models: 1) how to ensemble knowledge from multi-teacher; 2) how to utilize the label information of data in the distillation process. We propose a unified algorithm called Pairwise Iterative Logits Ensemble (PILE) to tackle these two questions simultaneously. PILE ensembles multi-teacher logits supervised by label information in an iterative way and achieved competitive performance in both offline and online experiments. The proposed method has been deployed in a real-world commercial search system.
null
null
10.18653/v1/2022.emnlp-industry.60
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,069
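The PILE record above ensembles multi-teacher logits under label supervision; as a heavily simplified, assumption-laden sketch, the snippet below performs one reweighting step that upweights teachers whose pairwise score orderings agree with the labels, then averages their scores. The paper's iterative procedure is more elaborate.

import numpy as np

def ensemble_logits(teacher_scores, pair_labels):
    # teacher_scores: (T, N) first-minus-second score differences from T teachers
    # on N document pairs; pair_labels: (N,) +1 if the first document should
    # rank higher, else -1.
    agree = (np.sign(teacher_scores) == np.sign(pair_labels)).mean(axis=1)
    w = agree / agree.sum()       # teachers that respect the labels count more
    return w @ teacher_scores     # (N,) ensembled pairwise scores

scores = np.array([[0.5, -0.2, 0.3], [0.1, 0.4, -0.6]])
print(ensemble_logits(scores, np.array([1, -1, 1])))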
inproceedings
tutubalina-etal-2022-comprehensive
A Comprehensive Evaluation of Biomedical Entity-centric Search
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.61/
Tutubalina, Elena and Miftahutdinov, Zulfat and Muravlev, Vladimir and Shneyderman, Anastasia
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
596--605
Biomedical information retrieval has often been studied as a task of detecting whether a system correctly detects entity spans and links these entities to concepts from a given terminology. Most academic research has focused on evaluation of named entity recognition (NER) and entity linking (EL) models which are key components to recognizing diseases and genes in PubMed abstracts. In this work, we perform a fine-grained evaluation intended to understand the efficiency of state-of-the-art BERT-based information extraction (IE) architecture as a biomedical search engine. We present a novel manually annotated dataset of abstracts for disease and gene search. The dataset contains 23K query-abstract pairs, where 152 queries are selected from logs of our target discovery platform and PubMed abstracts annotated with relevance judgments. Specifically, the query list also includes a subset of concepts with at least one ambiguous concept name. As a baseline, we use off-the-shelf Elasticsearch with BM25. Our experiments on NER, EL, and retrieval in a zero-shot setup show that the neural IE architecture achieves superior performance for both disease and gene concept queries.
null
null
10.18653/v1/2022.emnlp-industry.61
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,070
inproceedings
morishita-etal-2022-domain
Domain Adaptation of Machine Translation with Crowdworkers
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.62/
Morishita, Makoto and Suzuki, Jun and Nagata, Masaaki
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
606--618
Although a machine translation model trained with a large in-domain parallel corpus achieves remarkable results, it still works poorly when no in-domain data are available. This situation restricts the applicability of machine translation when the target domain`s data are limited. However, there is great demand for high-quality domain-specific machine translation models for many domains. We propose a framework that efficiently and effectively collects parallel sentences in a target domain from the web with the help of crowdworkers. With the collected parallel data, we can quickly adapt a machine translation model to the target domain. Our experiments show that the proposed method can collect target-domain parallel data over a few days at a reasonable cost. We tested it with five domains, and the domain-adapted model improved the BLEU scores by up to +19.7 points, and by +7.8 points on average, compared to a general-purpose translation model.
null
null
10.18653/v1/2022.emnlp-industry.62
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,071
inproceedings
yoon-etal-2022-biomedical
Biomedical {NER} for the Enterprise with Distillated {BERN}2 and the Kazu Framework
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.63/
Yoon, Wonjin and Jackson, Richard and Ford, Elliot and Poroshin, Vladimir and Kang, Jaewoo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
619--626
In order to assist the drug discovery/development process, pharmaceutical companies often apply biomedical NER and linking techniques over internal and public corpora. Decades of study of the field of BioNLP have produced a plethora of algorithms, systems and datasets. However, our experience has been that no single open source system meets all the requirements of a modern pharmaceutical company. In this work, we describe these requirements according to our experience of the industry, and present Kazu, a highly extensible, scalable open source framework designed to support BioNLP for the pharmaceutical sector. Kazu is built around a computationally efficient version of the BERN2 NER model (TinyBERN2), and subsequently wraps several other BioNLP technologies into one coherent system.
null
null
10.18653/v1/2022.emnlp-industry.63
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,072
inproceedings
patil-garera-2022-large
Large-scale Machine Translation for {I}ndian Languages in {E}-commerce under Low Resource Constraints
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.64/
Patil, Amey and Garera, Nikesh
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
627--634
The democratization of e-commerce platforms has moved an increasingly diversified Indian user base to shop online. We have deployed reliable and precise large-scale Machine Translation systems for several Indian regional languages in this work. Building such systems is a challenge because of the low-resource nature of the Indian languages. We develop a structured model development pipeline as a closed feedback loop with external manual feedback through an Active Learning component. We show strong synthetic parallel data generation capability and consistent improvements to the model over iterations. Starting with 1.2M parallel pairs for English-Hindi, we have compiled a corpus with 400M+ synthetic high-quality parallel pairs across different domains. Further, we need colloquial translations to preserve the intent and friendliness of English content in regional languages and make it easier for our users to understand. We perform robust and effective domain adaptation steps to achieve such colloquial translations. Over iterations, we show a 9.02 BLEU point improvement for the English to Hindi translation model. Along with Hindi, we show that the overall approach and best practices extend well to other Indian languages, resulting in the deployment of our models across 7 Indian languages.
null
null
10.18653/v1/2022.emnlp-industry.64
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,073
inproceedings
eklund-forsman-2022-topic
Topic Modeling by Clustering Language Model Embeddings: Human Validation on an Industry Dataset
Li, Yunyao and Lazaridou, Angeliki
dec
2022
Abu Dhabi, UAE
Association for Computational Linguistics
https://aclanthology.org/2022.emnlp-industry.65/
Eklund, Anton and Forsman, Mona
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
635--643
Topic models are powerful tools to get an overview of large collections of text data, a situation that is prevalent in industry applications. A rising trend within topic modeling is to directly cluster dimension-reduced embeddings created with pretrained language models. It is difficult to evaluate these models because there is no ground truth and automatic measurements may not mimic human judgment. To address this problem, we created a tool called STELLAR for interactive topic browsing which we used for human evaluation of topics created from a real-world dataset used in industry. Embeddings created with BERT were used together with UMAP and HDBSCAN to model the topics. The human evaluation found that our topic model creates coherent topics. The following discussion revolves around the requirements of industry and what research is needed for production-ready systems.
null
null
10.18653/v1/2022.emnlp-industry.65
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,074
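The topic-modeling record above evaluates the now-common embed, reduce, cluster pipeline; a minimal sketch follows, assuming the sentence-transformers, umap-learn and hdbscan packages and a public MiniLM checkpoint stand in for whatever the study actually ran. Real corpora would use far more documents than this toy example.

from sentence_transformers import SentenceTransformer
import umap
import hdbscan

docs = ["shipping was slow", "refund took weeks", "delivery never arrived",
        "great battery life", "battery lasts all day", "charges quickly"]
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
reduced = umap.UMAP(n_components=2, n_neighbors=3).fit_transform(emb)
labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(reduced)
print(labels)   # one cluster id per document; -1 marks outliers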
inproceedings
mohanty-2022-deftri
{DEFT}ri: A Few-Shot Label Fused Contextual Representation Learning For Product Defect Triage in e-Commerce
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.1/
Mohanty, Ipsita
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
1--7
Defect Triage is a time-sensitive and critical process in a large-scale agile software development lifecycle for e-commerce. Inefficiencies arising from human and process dependencies in this domain have motivated research in automated approaches using machine learning to accurately assign defects to qualified teams. This work proposes a novel framework for automated defect triage (DEFTri) using fine-tuned state-of-the-art pre-trained BERT on label-fused text embeddings to improve contextual representations from human-generated product defects. For our multi-label text classification defect triage task, we also introduce a Walmart proprietary dataset of product defects using weak supervision and adversarial learning, in a few-shot setting.
null
null
10.18653/v1/2022.ecnlp-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,076
inproceedings
wang-etal-2022-interactive
Interactive Latent Knowledge Selection for {E}-Commerce Product Copywriting Generation
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.2/
Wang, Zeming and Zou, Yanyan and Fang, Yuejian and Chen, Hongshen and Ma, Mian and Ding, Zhuoye and Long, Bo
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
8--19
As multi-modal e-commerce is thriving, high-quality advertising product copywriting has gained more attention, as it plays a crucial role in e-commerce recommender, advertising and even search platforms. Advertising product copywriting can enhance the user experience by highlighting the product`s characteristics with textual descriptions, and thus improve the likelihood of user clicks and purchases. Automatically generating product copywriting has attracted noticeable interest from both academic and industrial communities, where existing solutions merely make use of a product`s title and attribute information to generate its corresponding description. However, in addition to the product title and attributes, we observe that there are various auxiliary descriptions created by the shoppers or marketers in e-commerce platforms (namely human knowledge), which contain valuable information for product copywriting generation, yet are always accompanied by lots of noise. In this work, we propose a novel solution for automatically generating product copywriting that involves the title, attributes and denoised auxiliary knowledge. To be specific, we design an end-to-end generation framework equipped with two variational autoencoders that work interactively to select informative human knowledge and generate diverse copywriting.
null
null
10.18653/v1/2022.ecnlp-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,077
inproceedings
liu-etal-2022-leveraging
Leveraging Seq2seq Language Generation for Multi-level Product Issue Identification
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.3/
Liu, Yang and Chordia, Varnith and Li, Hua and Fazeli Dehkordy, Siavash and Sun, Yifei and Gao, Vincent and Zhang, Na
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
20--28
In a leading e-commerce business, we receive hundreds of millions of customer feedback messages from different text communication channels such as product reviews. The feedback can contain rich information regarding customers' dissatisfaction with the quality of goods and services. To harness such information to better serve customers, in this paper, we created a machine learning approach to automatically identify product issues and uncover root causes from the customer feedback text. We identify issues at two levels: coarse grained (L-Coarse) and fine grained (L-Granular). We formulate this multi-level product issue identification problem as a seq2seq language generation problem. Specifically, we utilize transformer-based seq2seq models due to their versatility and strong transfer-learning capability. We demonstrate that our approach is label efficient and outperforms traditional approaches such as the multi-class multi-label classification formulation. Based on human evaluation, our fine-tuned model achieves 82.1{\%} and 95.4{\%} human-level performance for L-Coarse and L-Granular issue identification, respectively. Furthermore, our experiments illustrate that the model can generalize to identify unseen L-Granular issues.
null
null
10.18653/v1/2022.ecnlp-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,078
inproceedings
kondadadi-etal-2022-data
Data Quality Estimation Framework for Faster Tax Code Classification
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.4/
Kondadadi, Ravi and Williams, Allen and Nicolov, Nicolas
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
29--34
This paper describes a novel framework to estimate the data quality of a collection of product descriptions to identify required relevant information for accurate product listing classification for tax-code assignment. Our Data Quality Estimation (DQE) framework consists of a Question Answering (QA) based attribute value extraction model to identify missing attributes and a classification model to identify bad quality records. We show that our framework can accurately predict the quality of product descriptions. In addition to identifying low-quality product listings, our framework can also generate a detailed report at a category level showing missing product information resulting in a better customer experience.
null
null
10.18653/v1/2022.ecnlp-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,079
inproceedings
dong-etal-2022-cml
{CML}: A Contrastive Meta Learning Method to Estimate Human Label Confidence Scores and Reduce Data Collection Cost
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.5/
Dong, Bo and Wang, Yiyi and Sun, Hanbo and Wang, Yunji and Hashemi, Alireza and Du, Zheng
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
35--43
Deep neural network models are especially susceptible to noise in annotated labels. In the real world, annotated data typically contains noise caused by a variety of factors such as task difficulty, annotator experience, and annotator bias. Label quality is critical for label validation tasks; however, correcting for noise by collecting more data is often costly. In this paper, we propose a contrastive meta-learning framework (CML) to address the challenges introduced by noisy annotated data, specifically in the context of natural language processing. CML combines contrastive and meta learning to improve the quality of text feature representations. Meta-learning is also used to generate confidence scores to assess label quality. We demonstrate that a model built on CML-filtered data outperforms a model built on clean data. Furthermore, we perform experiments on deidentified commercial voice assistant datasets and demonstrate that our model outperforms several SOTA approaches.
null
null
10.18653/v1/2022.ecnlp-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,080
inproceedings
bagheri-garakani-etal-2022-improving
Improving Relevance Quality in Product Search using High-Precision Query-Product Semantic Similarity
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.6/
Bagheri Garakani, Alireza and Yang, Fan and Hua, Wen-Yu and Chen, Yetian and Momma, Michinari and Deng, Jingyuan and Gao, Yan and Sun, Yi
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
44--48
Ensuring relevance quality in product search is a critical task as it impacts the customer`s ability to find intended products in the short-term as well as the general perception and trust of the e-commerce system in the long term. In this work we leverage a high-precision cross-encoder BERT model for semantic similarity between customer query and products and survey its effectiveness for three ranking applications where offline-generated scores could be used: (1) as an offline metric for estimating relevance quality impact, (2) as a re-ranking feature covering head/torso queries, and (3) as a training objective for optimization. We present results on effectiveness of this strategy for the large e-commerce setting, which has general applicability for choice of other high-precision models and tasks in ranking.
null
null
10.18653/v1/2022.ecnlp-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,081
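The relevance-quality record above scores query-product pairs with a high-precision cross-encoder; a brief sketch follows, assuming the sentence-transformers library, where the public MS MARCO checkpoint is a stand-in for the paper's in-house model.

from sentence_transformers import CrossEncoder

scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [("wireless earbuds", "Bluetooth 5.0 in-ear headphones"),
         ("wireless earbuds", "USB-C wall charger")]
scores = scorer.predict(pairs)   # higher score = more semantically relevant pair
print(scores)

Because the query and product text are encoded jointly, such scores are typically generated offline and consumed as an evaluation metric, a re-ranking feature, or a training target, exactly the three uses the abstract surveys.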
inproceedings
jain-etal-2022-comparative
Comparative Snippet Generation
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.7/
Jain, Saurabh and Miao, Yisong and Kan, Min-Yen
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
49--57
We model products' reviews to generate comparative responses consisting of positive and negative experiences regarding the product. Specifically, we generate a single-sentence, comparative response from a given positive and a negative opinion. We contribute the first dataset for this task of Comparative Snippet Generation from contrasting opinions regarding a product, and an analysis of performance of a pre-trained BERT model to generate such snippets.
null
null
10.18653/v1/2022.ecnlp-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,082
inproceedings
shido-etal-2022-textual
Textual Content Moderation in {C}2{C} Marketplace
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.8/
Shido, Yusuke and Liu, Hsien-Chi and Umezawa, Keisuke
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
58--62
Automatic monitoring systems for inappropriate user-generated messages have been found to be effective in reducing human operation costs in Consumer to Consumer (C2C) marketplace services, in which customers send messages directly to other customers. We propose a lightweight neural network that takes a conversation as input, which we deployed to a production service. Our results show that the system reduced the human operation costs to less than one-sixth compared to the conventional rule-based monitoring at Mercari.
null
null
10.18653/v1/2022.ecnlp-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,083
inproceedings
yang-etal-2022-spelling
Spelling Correction using Phonetics in {E}-commerce Search
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.9/
Yang, Fan and Bagheri Garakani, Alireza and Teng, Yifei and Gao, Yan and Liu, Jia and Deng, Jingyuan and Sun, Yi
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
63--67
In E-commerce search, spelling correction plays an important role to find desired products for customers in processing user-typed search queries. However, resolving phonetic errors is a critical but much overlooked area. The query with phonetic spelling errors tends to appear correct based on pronunciation but is nonetheless inaccurate in spelling (e.g., {\textquotedblleft}bluetooth sound system{\textquotedblright} vs. {\textquotedblleft}blutut sant sistam{\textquotedblright}) with numerous noisy forms and sparse occurrences. In this work, we propose a generalized spelling correction system integrating phonetics to address phonetic errors in E-commerce search without additional latency cost. Using India (IN) E-commerce market for illustration, the experiment shows that our proposed phonetic solution significantly improves the F1 score by 9{\%}+ and recall of phonetic errors by 8{\%}+. This phonetic spelling correction system has been deployed to production, currently serving hundreds of millions of customers.
null
null
10.18653/v1/2022.ecnlp-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,084
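A tiny sketch of phonetic candidate retrieval for the spelling-correction record above: vocabulary terms are indexed by a phonetic code, and a misspelled token is mapped to terms sharing its code. Metaphone via the jellyfish package is an assumed stand-in for whatever phonetic encoding the production system integrates.

from collections import defaultdict
import jellyfish

vocab = ["bluetooth", "sound", "system", "speaker"]
index = defaultdict(list)
for term in vocab:
    # Group vocabulary terms by how they sound, not how they are spelled.
    index[jellyfish.metaphone(term)].append(term)

def phonetic_candidates(token: str):
    return index.get(jellyfish.metaphone(token), [])

print(phonetic_candidates("sistam"))   # -> ["system"] when the codes coincide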
inproceedings
beygi-etal-2022-logical
Logical Reasoning for Task Oriented Dialogue Systems
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.10/
Beygi, Sajjad and Fazel-Zarandi, Maryam and Cervone, Alessandra and Krishnan, Prakash and Jonnalagadda, Siddhartha
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
68--79
In recent years, large pretrained models have been used in dialogue systems to improve successful task completion rates. However, the lack of reasoning capabilities of dialogue platforms makes it difficult to provide relevant and fluent responses, unless the designers of a conversational experience spend a considerable amount of time implementing these capabilities in external rule based modules. In this work, we propose a novel method to fine-tune pretrained transformer models such as RoBERTa and T5, to reason over a set of facts in a given dialogue context. Our method includes a synthetic data generation mechanism which helps the model learn logical relations, such as comparison between lists of numerical values, inverse relations (and negation), inclusion and exclusion for categorical attributes, and application of a combination of attributes over both numerical and categorical values, and spoken form for numerical values, without need for additional training data. We show that the transformer based model can perform logical reasoning to answer questions when the dialogue context contains all the required information, otherwise it is able to extract appropriate constraints to pass to downstream components (e.g. a knowledge base) when partial information is available. We observe that transformer based models such as UnifiedQA-T5 can be fine-tuned to perform logical reasoning (such as numerical and categorical attributes' comparison) over attributes seen at training time (e.g., accuracy of 90{\%}+ for comparison of smaller than kmax=5 values over heldout test dataset).
null
null
10.18653/v1/2022.ecnlp-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,085
inproceedings
kumar-etal-2022-cova
{C}o{VA}: Context-aware Visual Attention for Webpage Information Extraction
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.11/
Kumar, Anurendra and Morabia, Keval and Wang, William and Chang, Kevin and Schwing, Alex
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
80--90
Webpage information extraction (WIE) is an important step to create knowledge bases. For this, classical WIE methods leverage the Document Object Model (DOM) tree of a website. However, use of the DOM tree poses significant challenges as context and appearance are encoded in an abstract manner. To address this challenge we propose to reformulate WIE as a context-aware Webpage Object Detection task. Specifically, we develop a Context-aware Visual Attention-based (CoVA) detection pipeline which combines appearance features with syntactical structure from the DOM tree. To study the approach we collect a new large-scale dataset of e-commerce websites for which we manually annotate every web element with four labels: product price, product title, product image and others. On this dataset we show that the proposed CoVA approach is a new challenging baseline which improves upon prior state-of-the-art methods.
null
null
10.18653/v1/2022.ecnlp-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,086
inproceedings
fuchs-acriche-2022-product
Product Titles-to-Attributes As a Text-to-Text Task
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.12/
Fuchs, Gilad and Acriche, Yoni
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
91--98
Online marketplaces use attribute-value pairs, such as brand, size, size type, color, etc. to help define important and relevant facts about a listing. These help buyers to curate their search results using attribute filtering and overall create a richer experience. Despite their critical importance for listings' discoverability, getting sellers to input tens of different attribute-value pairs per listing is costly and often results in missing information. This can later translate to the unnecessary removal of relevant listings from the search results when buyers are filtering by attribute values. In this paper we demonstrate using a Text-to-Text hierarchical multi-label ranking model framework to predict the most relevant attributes per listing, along with their expected values, using historic user behavioral data. This solution helps sellers by allowing them to focus on verifying information on attributes that are likely to be used by buyers, and thus, increase the expected recall for their listings. Specifically for eBay`s case we show that using this model can improve the relevancy of the attribute extraction process by 33.2{\%} compared to the current highly-optimized production system. Apart from the empirical contribution, the highly generalized nature of the framework presented in this paper makes it relevant for many high-volume search-driven websites.
null
null
10.18653/v1/2022.ecnlp-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,087
inproceedings
shen-etal-2022-product
Product Answer Generation from Heterogeneous Sources: A New Benchmark and Best Practices
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.13/
Shen, Xiaoyu and Barlacchi, Gianni and Del Tredici, Marco and Cheng, Weiwei and Byrne, Bill and Gispert, Adri{\`a}
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
99--110
It is of great value to answer product questions based on heterogeneous information sources available on web product pages, e.g., semi-structured attributes, text descriptions, user-provided contents, etc. However, these sources have different structures and writing styles, which poses challenges for (1) evidence ranking, (2) source selection, and (3) answer generation. In this paper, we build a benchmark with annotations for both evidence selection and answer generation covering 6 information sources. Based on this benchmark, we conduct a comprehensive study and present a set of best practices. We show that all sources are important and contribute to answering questions. Handling all sources within one single model can produce comparable confidence scores across sources and combining multiple sources for training always helps, even for sources with totally different structures. We further propose a novel data augmentation method to iteratively create training samples for answer generation, which achieves close-to-human performance with only a few thousand annotations. Finally, we perform an in-depth error analysis of model predictions and highlight the challenges for future research.
null
null
10.18653/v1/2022.ecnlp-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,088
inproceedings
shen-etal-2022-semipqa
semi{PQA}: A Study on Product Question Answering over Semi-structured Data
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.14/
Shen, Xiaoyu and Barlacchi, Gianni and Del Tredici, Marco and Cheng, Weiwei and Gispert, Adri{\`a}
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
111--120
Product question answering (PQA) aims to automatically address customer questions to improve their online shopping experience. Current research mainly focuses on finding answers from either unstructured text, like product descriptions and user reviews, or structured knowledge bases with pre-defined schemas. Apart from the above two sources, a lot of product information is represented in a semi-structured way, e.g., key-value pairs, lists, tables, json and xml files, etc. These semi-structured data can be a valuable answer source since they are better organized than free text, while being easier to construct than structured knowledge bases. However, little attention has been paid to them. To fill this gap, we study how to effectively incorporate semi-structured answer sources for PQA and focus on presenting answers in a natural, fluent sentence. To this end, we present semiPQA: a dataset to benchmark PQA over semi-structured data. It contains 11,243 written questions about json-formatted data covering 320 unique attribute types. Each data point is paired with manually-annotated text that describes its contents, so that we can train a neural answer presenter to present the data in a natural way. We provide baseline results and a deep analysis on the successes and challenges of leveraging semi-structured data for PQA. In general, state-of-the-art neural models can perform remarkably well when dealing with seen attribute types. For unseen attribute types, however, a noticeable drop is observed for both answer presentation and attribute ranking.
null
null
10.18653/v1/2022.ecnlp-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,089
inproceedings
kew-volk-2022-improving
Improving Specificity in Review Response Generation with Data-Driven Data Filtering
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.15/
Kew, Tannon and Volk, Martin
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
121--133
Responding to online customer reviews has become an essential part of successfully managing and growing a business both in e-commerce and the hospitality and tourism sectors. Recently, neural text generation methods intended to assist authors in composing responses have been shown to deliver highly fluent and natural looking texts. However, they also tend to learn a strong, undesirable bias towards generating overly generic, one-size-fits-all outputs to a wide range of inputs. While this often results in {\textquoteleft}safe', high-probability responses, there are many practical settings in which greater specificity is preferable. In this work we examine the task of generating more specific responses for online reviews in the hospitality domain by identifying generic responses in the training data, filtering them and fine-tuning the generation model. We experiment with a range of data-driven filtering methods and show through automatic and human evaluation that, despite a 60{\%} reduction in the amount of training data, filtering helps to derive models that are capable of generating more specific, useful responses.
null
null
10.18653/v1/2022.ecnlp-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,090
inproceedings
chen-etal-2022-extreme
Extreme Multi-Label Classification with Label Masking for Product Attribute Value Extraction
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.16/
Chen, Wei-Te and Xia, Yandi and Shinzato, Keiji
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
134--140
Although most studies have treated attribute value extraction (AVE) as named entity recognition, these approaches are not practical in real-world e-commerce platforms because they perform poorly and require canonicalization of extracted values. Furthermore, since the values needed for actual services are static for many attributes, extraction of new values is not always necessary. Given the above, we formalize AVE as extreme multi-label classification (XMC). A major problem in solving AVE as XMC is that the distribution between positive and negative labels for products is heavily imbalanced. To mitigate the negative impact derived from such biased distribution, we propose label masking, a simple and effective method to reduce the number of negative labels in training. We exploit attribute taxonomy designed for e-commerce platforms to determine which labels are negative for products. Experimental results using a dataset collected from a Japanese e-commerce platform demonstrate that the label masking improves micro and macro F$_1$ scores by 3.38 and 23.20 points, respectively.
null
null
10.18653/v1/2022.ecnlp-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,091
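A minimal sketch of label masking for the XMC record above: negative labels that fall outside the product's taxonomy branch are zeroed out of a multi-label loss, shrinking the heavy negative side of the label distribution. How the mask is derived from the attribute taxonomy is abstracted away here.

import torch
import torch.nn.functional as F

def masked_bce(logits, targets, taxonomy_mask):
    # logits/targets: (B, L); taxonomy_mask: (B, L) with 1.0 where the label is
    # a positive or an in-taxonomy negative, 0.0 for masked-out negatives.
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * taxonomy_mask).sum() / taxonomy_mask.sum().clamp(min=1)

loss = masked_bce(torch.randn(2, 6), torch.zeros(2, 6), torch.ones(2, 6))
print(loss.item())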
inproceedings
zhu-etal-2022-enhanced
Enhanced Representation with Contrastive Loss for Long-Tail Query Classification in e-commerce
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.17/
Zhu, Lvxing and Chen, Hao and Wei, Chao and Zhang, Weiru
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
141--150
Query classification is a fundamental task in an e-commerce search engine, which assigns one or multiple predefined product categories in response to each search query. Taking click-through logs as training data in deep learning methods is a common and effective approach for query classification. However, the frequency distribution of queries typically has long-tail property, which means that there are few logs for most of the queries. The lack of reliable user feedback information results in worse performance of long-tail queries compared with frequent queries. To solve the above problem, we propose a novel method that leverages an auxiliary module to enhance the representations of long-tail queries by taking advantage of reliable supervised information of variant frequent queries. The long-tail queries are guided by the contrastive loss to obtain category-aligned representations in the auxiliary module, where the variant frequent queries serve as anchors in the representation space. We train our model with real-world click data from AliExpress and conduct evaluation on both offline labeled data and online AB test. The results and further analysis demonstrate the effectiveness of our proposed method.
null
null
10.18653/v1/2022.ecnlp-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,092
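A sketch of pulling long-tail query representations toward same-category frequent-query anchors with an InfoNCE-style loss, as the record above describes; the paper's auxiliary-module design differs in detail, and the temperature is an assumption.

import torch
import torch.nn.functional as F

def anchor_contrastive(tail_emb, anchor_emb, category_ids, tau=0.1):
    # tail_emb: (B, d) long-tail query vectors; anchor_emb: (C, d) one
    # frequent-query anchor per category; category_ids: (B,) gold categories.
    sim = F.normalize(tail_emb, dim=-1) @ F.normalize(anchor_emb, dim=-1).T / tau
    # The anchor of a query's own category is the positive; all others negatives.
    return F.cross_entropy(sim, category_ids)

loss = anchor_contrastive(torch.randn(4, 16), torch.randn(3, 16),
                          torch.tensor([0, 2, 1, 0]))
print(loss.item())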
inproceedings
howell-etal-2022-domain
Domain-specific knowledge distillation yields smaller and better models for conversational commerce
Malmasi, Shervin and Rokhlenko, Oleg and Ueffing, Nicola and Guy, Ido and Agichtein, Eugene and Kallumadi, Surya
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ecnlp-1.18/
Howell, Kristen and Wang, Jian and Hazare, Akshay and Bradley, Joseph and Brew, Chris and Chen, Xi and Dunn, Matthew and Hockey, Beth and Maurer, Andrew and Widdows, Dominic
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
151--160
We demonstrate that knowledge distillation can be used not only to reduce model size, but to simultaneously adapt a contextual language model to a specific domain. We use Multilingual BERT (mBERT; Devlin et al., 2019) as a starting point and follow the knowledge distillation approach of Sanh et al. (2019) to train a smaller multilingual BERT model that is adapted to the domain at hand. We show that for in-domain tasks, the domain-specific model shows on average 2.3{\%} improvement in F1 score, relative to a model distilled on domain-general data. Whereas much previous work with BERT has fine-tuned the encoder weights during task training, we show that the model improvements from distillation on in-domain data persist even when the encoder weights are frozen during task training, allowing a single encoder to support classifiers for multiple tasks and languages.
null
null
10.18653/v1/2022.ecnlp-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
28,093
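The distillation record above emphasizes that the domain-adapted encoder can stay frozen while per-task heads are trained, so one encoder serves many classifiers. Below is a minimal sketch of that arrangement; the public multilingual DistilBERT checkpoint and the head sizes are placeholders for the in-house distilled model.

import torch.nn as nn
from transformers import AutoModel

encoder = AutoModel.from_pretrained("distilbert-base-multilingual-cased")
for p in encoder.parameters():
    p.requires_grad = False          # freeze the distilled, domain-adapted encoder

heads = nn.ModuleDict({
    "intent": nn.Linear(encoder.config.hidden_size, 12),
    "sentiment": nn.Linear(encoder.config.hidden_size, 3),
})

def predict(task, input_ids, attention_mask):
    hidden = encoder(input_ids=input_ids, attention_mask=attention_mask)
    cls = hidden.last_hidden_state[:, 0]   # first-token pooled representation
    return heads[task](cls)               # only the lightweight head is trainable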